Sample records for stochastic point process

  1. Research in Stochastic Processes

    DTIC Science & Technology

    1988-08-31

    stationary sequence, Stochastic Proc. Appl. 29, 1988, 155-169; T. Hsing, J. Hüsler and M.R. Leadbetter, On the exceedance point process for a stationary... Nandagopalan, On exceedance point processes for "regular" sample functions, Proc. Volume, Oberwolfach Conf. on Extreme Value Theory, J. Hüsler and R. Reiss... exceedance point processes for stationary sequences under mild oscillation restrictions, Apr. 88, Oberwolfach Conf. on Extreme Value Theory, ed. J. Hüsler

  2. 1/f Noise from nonlinear stochastic differential equations.

    PubMed

    Ruseckas, J; Kaulakys, B

    2010-03-01

    We consider a class of nonlinear stochastic differential equations that yield power-law behavior of the power spectral density over an arbitrarily wide range of frequencies. Such equations were originally obtained starting from point process models of 1/f^β noise. In this article the power-law behavior of the spectrum is derived directly from the stochastic differential equations, without using the point process models. The analysis reveals that the power spectrum may be represented as a sum of Lorentzian spectra. Such a derivation provides additional justification for the equations, expands the class of equations generating 1/f^β noise, and provides further insights into the origin of 1/f^β noise.
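The abstract's key observation, that a superposition of Lorentzian spectra can produce a 1/f spectrum, is easy to check numerically. The sketch below is not taken from the paper; it assumes relaxation rates spread log-uniformly over several decades, which gives an effective 1/r rate density:

```python
import numpy as np

def lorentzian_sum_psd(freqs, rates):
    """Sum of Lorentzian spectra S(f) = sum_k r_k / (r_k^2 + (2*pi*f)^2)."""
    f = freqs[:, None]
    r = rates[None, :]
    return (r / (r**2 + (2.0 * np.pi * f) ** 2)).sum(axis=1)

# Relaxation rates spaced uniformly in log over six decades (assumption).
rates = np.logspace(-3, 3, 400)
freqs = np.logspace(-1, 1, 50)          # probe the central decades only
psd = lorentzian_sum_psd(freqs, rates)

# Fit the log-log slope; a pure 1/f spectrum has slope -1.
slope = np.polyfit(np.log10(freqs), np.log10(psd), 1)[0]
```

With log-uniform spacing the sum approximates a constant times ∫ dr/(r² + ω²) ∝ 1/ω, so the fitted slope comes out very close to -1 as long as the probed frequencies sit well inside the range of rates.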

  3. Stochastic transformation of points in polygons according to the Voronoi tessellation: microstructural description.

    PubMed

    Di Vito, Alessia; Fanfoni, Massimo; Tomellini, Massimo

    2010-12-01

    Starting from a stochastic two-dimensional process, we studied the transformation of points into disks and squares following a protocol according to which, at any step, the island size increases proportionally to the corresponding Voronoi tessera. Two interaction mechanisms among islands have been dealt with: coalescence and impingement. We studied the evolution of the island density and of the island size distribution functions as a function of the island collision mechanism, for both Poissonian and correlated spatial distributions of points. The island size distribution functions have been found to be invariant with the fraction of transformed phase for a given stochastic process. The n(Θ) curve describing the island decay has been found to be independent of the shape (apart from high correlation degrees) and of the interaction mechanism.

  4. Discrimination of shot-noise-driven Poisson processes by external dead time - Application to radioluminescence from glass

    NASA Technical Reports Server (NTRS)

    Saleh, B. E. A.; Tavolacci, J. T.; Teich, M. C.

    1981-01-01

    Ways in which dead time can be used to constructively enhance or diminish the effects of point processes that display bunching in the shot-noise-driven doubly stochastic Poisson point process (SNDP) are discussed. Interrelations between photocount bunching arising in the SNDP and the antibunching character arising from dead-time effects are investigated. It is demonstrated that the dead-time-modified count mean and variance for an arbitrary doubly stochastic Poisson point process can be obtained from the Laplace transform of the single-fold and joint-moment-generating functions for the driving rate process. The theory is in good agreement with experimental values for radioluminescence radiation in fused silica, quartz, and glass, and the process has many applications in pulse, particle, and photon detection.
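The antibunching effect of dead time on counting statistics can be illustrated with a toy Monte Carlo. This is only a sketch: it uses a plain (not doubly stochastic) Poisson drive with a non-paralyzable dead time, and all parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def dead_time_counts(rate, tau, t_window, n_trials):
    """Counts per window for a Poisson process thinned by a non-paralyzable
    dead time tau: events within tau of the last *registered* event are
    discarded.  tau = 0 recovers the unmodified Poisson counts."""
    counts = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        n = rng.poisson(rate * t_window)                 # arrivals in window
        times = np.sort(rng.uniform(0.0, t_window, n))
        last, c = -np.inf, 0
        for t in times:
            if t - last >= tau:                          # detector is live
                c += 1
                last = t
        counts[i] = c
    return counts

poisson = dead_time_counts(rate=50.0, tau=0.0, t_window=1.0, n_trials=2000)
gated = dead_time_counts(rate=50.0, tau=0.02, t_window=1.0, n_trials=2000)

fano_poisson = poisson.var() / poisson.mean()   # ~1 for a Poisson process
fano_gated = gated.var() / gated.mean()         # < 1: dead time antibunches
```

The dead-time-modified counts have both a reduced mean and a sub-Poissonian Fano factor, which is the variance-suppression effect the abstract exploits.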

  5. Doubly stochastic Poisson process models for precipitation at fine time-scales

    NASA Astrophysics Data System (ADS)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, for the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site, but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
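The simplest doubly stochastic Poisson process of this kind has an intensity driven by a hidden two-state (dry/wet) Markov chain. The abstract does not give the authors' model specification, so the following is only an illustrative sketch with made-up parameters, counting "tips" per time bin:

```python
import numpy as np

rng = np.random.default_rng(1)

def mmpp_counts(lam_dry, lam_wet, p_switch, n_steps):
    """Two-state Markov-modulated Poisson process: the hidden state flips
    with probability p_switch per step and sets the Poisson intensity for
    that step's count (one tipping-bucket bin)."""
    state = 0
    lam = (lam_dry, lam_wet)
    counts = np.empty(n_steps, dtype=int)
    for k in range(n_steps):
        if rng.random() < p_switch:          # dry <-> wet transition
            state = 1 - state
        counts[k] = rng.poisson(lam[state])
    return counts

counts = mmpp_counts(lam_dry=0.05, lam_wet=2.0, p_switch=0.02, n_steps=20000)

# A doubly stochastic Poisson process is overdispersed: variance > mean,
# unlike a homogeneous Poisson process whose ratio is exactly 1.
dispersion = counts.var() / counts.mean()
```

The persistent hidden state produces the bursty, clustered bins typical of fine time-scale rainfall, visible here as an index of dispersion well above 1.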

  6. Uncertainty Reduction for Stochastic Processes on Complex Networks

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Castellano, Claudio

    2018-05-01

    Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.

  7. Transversal Fluctuations of the ASEP, Stochastic Six Vertex Model, and Hall-Littlewood Gibbsian Line Ensembles

    NASA Astrophysics Data System (ADS)

    Corwin, Ivan; Dimitrov, Evgeni

    2018-05-01

    We consider the ASEP and the stochastic six vertex model started with step initial data. After a long time, T, it is known that the one-point height function fluctuations for these systems are of order T^{1/3}. We prove the KPZ prediction of T^{2/3} scaling in space. Namely, we prove tightness (and Brownian absolute continuity of all subsequential limits) as T goes to infinity of the height function with spatial coordinate scaled by T^{2/3} and fluctuations scaled by T^{1/3}. The starting point for proving these results is a connection discovered recently by Borodin-Bufetov-Wheeler between the stochastic six vertex height function and the Hall-Littlewood process (a certain measure on plane partitions). Interpreting this process as a line ensemble with a Gibbsian resampling invariance, we show that the one-point tightness of the top curve can be propagated to tightness of the entire curve.

  8. Explore Stochastic Instabilities of Periodic Points by Transition Path Theory

    NASA Astrophysics Data System (ADS)

    Cao, Yu; Lin, Ling; Zhou, Xiang

    2016-06-01

    We consider noise-induced transitions from a linearly stable periodic orbit consisting of T periodic points in a randomly perturbed discrete logistic map. Traditional large deviation theory and asymptotic analysis in the small noise limit cannot distinguish the quantitative differences in noise-induced stochastic instability among the T periodic points. To attack this problem, we generalize transition path theory to discrete-time, continuous-space stochastic processes. Our first criterion for quantifying the relative instability among the T periodic points uses the distribution of the last passage location for transitions from the whole periodic orbit to a prescribed disjoint set; this distribution is related to the individual contributions of each periodic point to the transition rate. The second criterion is based on the competency of the transition paths associated with each periodic point. Both criteria utilize the reactive probability current of transition path theory. Our numerical results for the logistic map reveal the transition mechanism of escape from the stable periodic orbit and identify which periodic point is more prone to lose stability so as to make successful transitions under random perturbations.

  9. A stochastic model for eye movements during fixation on a stationary target.

    NASA Technical Reports Server (NTRS)

    Vasudevan, R.; Phatak, A. V.; Smith, J. D.

    1971-01-01

    A stochastic model describing the small eye movements that occur during steady fixation on a stationary target is presented. Based on eye movement data for steady gaze, the model has a hierarchical structure: the principal level represents the random motion of the image point within a local area of fixation, while the higher level mimics the jump processes involved in transitions from one local area to another. Target image motion within a local area is described by a Langevin-like stochastic differential equation that takes into account the microsaccadic jumps, pictured as arising from point processes, and the high-frequency muscle tremor, represented as a white noise. The transform of the probability density function for local-area motion is obtained, leading to explicit expressions for its means and moments. The moments evaluated from the model are comparable with experimental results.
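The local-area dynamics described, a Langevin equation driven by white-noise tremor plus Poisson-timed microsaccadic jumps, can be sketched with an Euler scheme. All parameter values below are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

def eye_position(n_steps, dt=1e-3, k=20.0, sigma=0.5,
                 saccade_rate=2.0, saccade_scale=0.1):
    """Euler scheme for a Langevin-like model of fixational eye motion:
    dx = -k*x*dt + sigma*dW + microsaccadic jumps from a Poisson point
    process (rate saccade_rate, Gaussian jump sizes)."""
    x = np.empty(n_steps)
    x[0] = 0.0
    for i in range(1, n_steps):
        drift = -k * x[i - 1] * dt            # restoring drift to fixation
        tremor = sigma * np.sqrt(dt) * rng.normal()   # white-noise tremor
        jump = 0.0
        if rng.random() < saccade_rate * dt:  # Poisson microsaccade event
            jump = saccade_scale * rng.normal()
        x[i] = x[i - 1] + drift + tremor + jump
    return x

x = eye_position(50000)
```

The restoring drift keeps the image point wandering inside a local area (zero mean, bounded variance), while the jump term adds the occasional microsaccadic displacements.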

  10. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    NASA Astrophysics Data System (ADS)

    Naous, Rawan; AlShedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled Nabil

    2016-11-01

    In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories, or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in its operation. Building on its inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches to stochastic neural networks are investigated. Aside from size and area considerations, the main points of comparison are the impact of the two approaches on system performance, in terms of accuracy, recognition rates, and learning, and where the memristor falls into place in each.

  11. Estimation of correlation functions by stochastic approximation.

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Wintz, P. A.

    1972-01-01

    Two recursive techniques are proposed for estimating the autocorrelation function of a zero-mean stationary random process; they are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Both techniques are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
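The first technique, minimizing the mean-square error between per-record point estimates and a parametric correlation model, can be sketched as a Robbins-Monro recursion. The exponential model R(τ) = exp(-aτ), the lag set, and the noise level are all assumptions made for this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

TRUE_A = 2.0                               # parameter of the "unknown" process
LAGS = np.array([0.1, 0.2, 0.4, 0.8])

def record_point_estimates():
    """Standard point estimates of R(tau) from one record, simulated here
    as the true exp(-a*tau) correlation plus estimation noise."""
    return np.exp(-TRUE_A * LAGS) + 0.05 * rng.normal(size=LAGS.size)

def fit_by_stochastic_approximation(n_records=3000, a0=0.5):
    """Robbins-Monro recursion for the parameter a of R(tau) = exp(-a*tau):
    step along the stochastic gradient of the squared error between each
    record's point estimates and the parametric correlation."""
    a = a0
    for k in range(1, n_records + 1):
        r_hat = record_point_estimates()
        resid = np.exp(-a * LAGS) - r_hat
        grad = 2.0 * np.sum(resid * (-LAGS) * np.exp(-a * LAGS))
        a -= (2.0 / k ** 0.6) * grad       # decreasing gain sequence
    return a

a_est = fit_by_stochastic_approximation()
```

Because the estimation noise is zero-mean, the recursion converges to the parameter minimizing the expected mean-square error, which here is the true value.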

  12. Using stochastic models to incorporate spatial and temporal variability [Exercise 14

    Treesearch

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...

  13. A stochastic model for stationary dynamics of prices in real estate markets. A case of random intensity for Poisson moments of prices changes

    NASA Astrophysics Data System (ADS)

    Rusakov, Oleg; Laskin, Michael

    2017-06-01

    We consider a stochastic model of price changes in real estate markets. We suppose that in a price book the changes occur at the jump points of a Poisson process with random intensity, i.e. the moments of change follow a random process of Cox type. We calculate cumulative expectations and variances for the random intensity of this point process. When the random-intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both input and output prices recorded in the price book.

  14. Detecting determinism from point processes.

    PubMed

    Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas

    2014-12-01

    The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.
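The paper's rank-based predictability score is more elaborate than what the abstract specifies, but the underlying idea, that deterministic structure makes intervals predictable while surrogates are not, can be sketched with a simple nearest-neighbour predictor. The logistic-map "interspike intervals" below are an invented stand-in for real point process data:

```python
import numpy as np

rng = np.random.default_rng(9)

def nn_prediction_error(series):
    """One-step nearest-neighbour prediction: forecast series[i+1] as the
    successor of the nearest neighbour of series[i] among past values."""
    errors = []
    for i in range(2, len(series) - 1):
        past = series[:i]                        # values available so far
        j = int(np.argmin(np.abs(past[:-1] - series[i])))
        errors.append(abs(past[j + 1] - series[i + 1]))
    return float(np.mean(errors))

# "Interspike intervals" from a deterministic map vs. a shuffled surrogate.
x, det = 0.3, []
for _ in range(500):
    x = 3.9 * x * (1.0 - x)                      # chaotic logistic map
    det.append(x)
det = np.array(det)
surrogate = rng.permutation(det)                 # destroys determinism

score_det = nn_prediction_error(det)
score_sur = nn_prediction_error(surrogate)
```

Shuffling preserves the interval distribution but breaks the deterministic successor structure, so the prediction error on the surrogate is much larger.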

  15. Unsupervised Detection of Planetary Craters by a Marked Point Process

    NASA Technical Reports Server (NTRS)

    Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.

    2011-01-01

    With the launch of several planetary missions in the last decade, a large amount of planetary imagery is being acquired. Because of the huge volume of acquired data, automatic and robust processing techniques are needed for analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated using reversible-jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications; one such application area is image registration by matching the extracted features.

  16. Toward the Darwinian transition: Switching between distributed and speciated states in a simple model of early life.

    PubMed

    Arnoldt, Hinrich; Strogatz, Steven H; Timme, Marc

    2015-01-01

    It has been hypothesized that in the era just before the last universal common ancestor emerged, life on earth was fundamentally collective. Ancient life forms shared their genetic material freely through massive horizontal gene transfer (HGT). At a certain point, however, life made a transition to the modern era of individuality and vertical descent. Here we present a minimal model for stochastic processes potentially contributing to this hypothesized "Darwinian transition." The model suggests that HGT-dominated dynamics may have been intermittently interrupted by selection-driven processes during which genotypes became fitter and decreased their inclination toward HGT. Stochastic switching in the population dynamics with three-point (hypernetwork) interactions may have destabilized the HGT-dominated collective state and essentially contributed to the emergence of vertical descent and the first well-defined species in early evolution. A systematic nonlinear analysis of the stochastic model dynamics covering key features of evolutionary processes (such as selection, mutation, drift and HGT) supports this view. Our findings thus suggest a viable direction out of early collective evolution, potentially enabling the start of individuality and vertical Darwinian evolution.

  17. Isolating intrinsic noise sources in a stochastic genetic switch.

    PubMed

    Newby, Jay M

    2012-01-01

    The stochastic mutual repressor model is analysed using perturbation methods. This simple model of a gene circuit consists of two genes and three promoter states. Either of the two protein products can dimerize, forming a repressor molecule that binds to the promoter of the other gene. When the repressor is bound to a promoter, the corresponding gene is not transcribed and no protein is produced. Either one of the promoters can be repressed at any given time, or both can be unrepressed, leaving three possible promoter states. This model is analysed in its bistable regime, in which the deterministic limit exhibits two stable fixed points and an unstable saddle, and the case of small noise is considered. On small timescales, the stochastic process fluctuates near one of the stable fixed points, and on large timescales, a metastable transition can occur, where fluctuations drive the system past the unstable saddle to the other stable fixed point. To explore how different intrinsic noise sources affect these transitions, fluctuations in protein production and degradation are eliminated, leaving fluctuations in the promoter state as the only source of noise in the system. The process without protein noise is then compared to the process with weak protein noise using perturbation methods and Monte Carlo simulations. It is found that some significant differences in the random process emerge when the intrinsic noise source is removed.

  18. Digital hardware implementation of a stochastic two-dimensional neuron model.

    PubMed

    Grassia, F; Kohno, T; Levi, T

    2016-11-01

    This study explores the feasibility of stochastic neuron simulation in digital systems (FPGAs), realizing an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior using fixed-point arithmetic operations. The neuron model's computations are performed in arithmetic pipelines. It was designed in the VHDL language and simulated prior to mapping onto the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments.
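The core numerical ingredient, an Ornstein-Uhlenbeck noise source computed in fixed-point arithmetic, can be mimicked in software. This is not the authors' VHDL pipeline; the Q16 format and the parameter values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

FRAC_BITS = 16                      # assumed Q16 fixed-point format
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize a float to a Q16 fixed-point integer."""
    return int(round(x * SCALE))

def ou_noise_fixed_point(n_steps, theta=0.1, sigma=0.05):
    """Discretized Ornstein-Uhlenbeck noise source computed with integer
    (fixed-point) arithmetic, in the spirit of an FPGA pipeline:
    i[n+1] = i[n] - theta*i[n] + sigma*g[n],  g[n] ~ N(0, 1)."""
    theta_fx = to_fixed(theta)
    i_fx = 0
    out = np.empty(n_steps)
    for n in range(n_steps):
        g_fx = to_fixed(sigma * rng.normal())        # quantized Gaussian
        # integer multiply with re-scaling, as a hardware pipeline would do
        i_fx = i_fx - (theta_fx * i_fx) // SCALE + g_fx
        out[n] = i_fx / SCALE
    return out

noise = ou_noise_fixed_point(20000)
```

The result behaves like an AR(1) process with coefficient 1 - theta: zero mean and a stationary variance near sigma^2 / (2*theta - theta^2), with only small quantization bias from the integer arithmetic.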

  19. Investigation of the stochastic nature of temperature and humidity for energy management

    NASA Astrophysics Data System (ADS)

    Hadjimitsis, Evanthis; Demetriou, Evangelos; Sakellari, Katerina; Tyralis, Hristos; Iliopoulou, Theano; Koutsoyiannis, Demetris

    2017-04-01

    Atmospheric temperature and dew point, in addition to their role in atmospheric processes, influence the management of energy systems, since they strongly affect energy demand and production. Both temperature and humidity depend on the climate conditions and geographical location. In this context, we analyze numerous observations from around the globe and investigate the long-term behaviour and periodicities of the temperature and humidity processes. We also present and apply a parsimonious stochastic double-cyclostationary model for these processes to an island in the Aegean Sea and investigate their link to energy management. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.

  20. Heart rate variability as determinism with jump stochastic parameters.

    PubMed

    Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M

    2013-08-01

    We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump-process noise term as a model which can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
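A circle map whose frequency parameter is piecewise constant and makes occasional random jumps can be sketched as follows. The map form and all parameter values are generic illustrations, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(4)

def jump_circle_map(n_steps, omega0=0.3, K=0.9,
                    jump_prob=0.01, jump_scale=0.05):
    """Standard circle map theta -> theta + Omega - (K/2pi) sin(2pi theta)
    mod 1, where the parameter Omega is piecewise constant and undergoes
    occasional random jumps (a jump process), so the trajectory switches
    between members of a one-parameter family of deterministic maps."""
    theta = np.empty(n_steps)
    omegas = np.empty(n_steps)
    theta[0], omega = 0.1, omega0
    omegas[0] = omega
    for n in range(1, n_steps):
        if rng.random() < jump_prob:             # rare parameter jump
            omega = omega0 + jump_scale * rng.normal()
        theta[n] = (theta[n - 1] + omega
                    - (K / (2.0 * np.pi)) * np.sin(2.0 * np.pi * theta[n - 1])) % 1.0
        omegas[n] = omega
    return theta, omegas

theta, omegas = jump_circle_map(5000)
```

Between jumps the trajectory follows one deterministic circle map; each jump moves it to a neighbouring member of the family, which is the "transient determinism with occasional jumps" picture.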

  1. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

    Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time conditioning them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.

  2. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    PubMed Central

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400

  3. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks.

    PubMed

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-11-30

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network.

  4. Stochastically gated local and occupation times of a Brownian particle

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.

    2017-01-01

    We generalize the Feynman-Kac formula to analyze the local and occupation times of a Brownian particle moving in a stochastically gated one-dimensional domain. (i) The gated local time is defined as the amount of time spent by the particle in the neighborhood of a point in space where there is some target that only receives resources from (or detects) the particle when the gate is open; the target does not interfere with the motion of the Brownian particle. (ii) The gated occupation time is defined as the amount of time spent by the particle in the positive half of the real line, given that it can only cross the origin when a gate placed at the origin is open; in the closed state the particle is reflected. In both scenarios, the gate randomly switches between the open and closed states according to a two-state Markov process. We derive a stochastic, backward Fokker-Planck equation (FPE) for the moment-generating function of the two types of gated Brownian functional, given a particular realization of the stochastic gate, and analyze the resulting stochastic FPE using a moments method recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment-generating function, averaged with respect to realizations of the stochastic gate.
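The gated occupation time of scenario (ii) can be approximated with a random-walk Monte Carlo: a two-state Markov gate at the origin that reflects the particle when closed. The switching rates and time step below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def gated_occupation_fraction(n_steps=20000, dt=1e-3, D=1.0,
                              alpha=2.0, beta=2.0, x0=0.5):
    """Random-walk approximation of a Brownian particle with a stochastically
    gated origin: the gate closes with rate alpha and reopens with rate beta
    (two-state Markov process).  A step that would cross x = 0 while the
    gate is closed is rejected (reflection).  Returns the fraction of time
    spent on the positive half-line (gated occupation time / T)."""
    steps = np.sqrt(2.0 * D * dt) * rng.normal(size=n_steps)
    x, gate_open = x0, True
    time_positive = 0
    for s in steps:
        # Euler discretization of the two-state gate dynamics
        if gate_open:
            if rng.random() < alpha * dt:
                gate_open = False
        elif rng.random() < beta * dt:
            gate_open = True
        x_new = x + s
        if not gate_open and (x > 0.0) != (x_new > 0.0):
            x_new = x                        # reflected by the closed gate
        x = x_new
        if x > 0.0:
            time_positive += 1
    return time_positive / n_steps

# The occupation-time fraction of a single path is a broad (arcsine-like)
# random functional, so we look at its average over independent paths.
fractions = [gated_occupation_fraction() for _ in range(40)]
mean_fraction = float(np.mean(fractions))
```

Individual realizations scatter widely, exactly because the occupation time is itself a random Brownian functional; only its distribution (here summarized by the mean over paths) is well behaved.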

  5. Stochastic Geometry and Quantum Gravity: Some Rigorous Results

    NASA Astrophysics Data System (ADS)

    Zessin, H.

    The aim of these lectures is to give a short introduction to some recent developments in stochastic geometry which have one of their origins in simplicial gravity theory (see Regge, Nuovo Cimento 19: 558-571, 1961). The objective is to rigorously define and construct point processes on spaces of Euclidean simplices in such a way that the configurations of these simplices are simplicial complexes. The main interest is then concentrated on their curvature properties. We illustrate certain basic ideas from a mathematical point of view. An excellent presentation of this area can be found in Schneider and Weil (Stochastic and Integral Geometry, Springer, Berlin, 2008; German edition: Stochastische Geometrie, Teubner, 2000). In Ambjørn et al. (Quantum Geometry, Cambridge University Press, Cambridge, 1997) one finds a beautiful account from the physical point of view. More recent developments in this direction can be found in Ambjørn et al. ("Quantum gravity as sum over spacetimes", Lect. Notes Phys. 807, Springer, Heidelberg, 2010). After an informal axiomatic introduction to the conceptual foundations of Regge's approach, the first lecture recalls the concepts and notations used. It presents the fundamental zero-infinity law of stochastic geometry and the construction of cluster processes based on it. The second lecture presents the main mathematical object, i.e. Poisson-Delaunay surfaces possessing an intrinsic random metric structure. The third and fourth lectures discuss their ergodic behaviour and present the two-dimensional Regge model of pure simplicial quantum gravity. We conclude with the formulation of basic open problems. Proofs are given in detail only in a few cases; in general, the main ideas are developed.

  6. Fundamental limitation of a two-dimensional description of magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Firpo, Marie-Christine

    2014-10-01

    For magnetic reconnection to be possible, the electrons have at some point to "get free from magnetic slavery," according to von Steiger's formulation. Stochasticity may be considered as one possible ingredient through which this may be realized in the magnetic reconnection process. It will be argued that non-ideal effects may be considered as a "hidden" way to introduce stochasticity. Then it will be shown that there exists a generic intrinsic stochasticity of magnetic field lines that does not require the invocation of non-ideal effects but cannot show up in effective two-dimensional models of magnetic reconnection. Possible implications will be discussed in the frame of tokamak sawteeth, which form a laboratory prototype of magnetic reconnection.

  7. Stochastic description of quantum Brownian dynamics

    NASA Astrophysics Data System (ADS)

    Yan, Yun-An; Shao, Jiushu

    2016-08-01

    Classical Brownian motion has well been investigated since the pioneering work of Einstein, which inspired mathematicians to lay the theoretical foundation of stochastic processes. A stochastic formulation for quantum dynamics of dissipative systems described by the system-plus-bath model has been developed and found many applications in chemical dynamics, spectroscopy, quantum transport, and other fields. This article provides a tutorial review of the stochastic formulation for quantum dissipative dynamics. The key idea is to decouple the interaction between the system and the bath by virtue of the Hubbard-Stratonovich transformation or Itô calculus so that the system and the bath are not directly entangled during evolution, rather they are correlated due to the complex white noises introduced. The influence of the bath on the system is thereby defined by an induced stochastic field, which leads to the stochastic Liouville equation for the system. The exact reduced density matrix can be calculated as the stochastic average in the presence of bath-induced fields. In general, the plain implementation of the stochastic formulation is only useful for short-time dynamics, but not efficient for long-time dynamics as the statistical errors go very fast. For linear and other specific systems, the stochastic Liouville equation is a good starting point to derive the master equation. For general systems with decomposable bath-induced processes, the hierarchical approach in the form of a set of deterministic equations of motion is derived based on the stochastic formulation and provides an effective means for simulating the dissipative dynamics. A combination of the stochastic simulation and the hierarchical approach is suggested to solve the zero-temperature dynamics of the spin-boson model. This scheme correctly describes the coherent-incoherent transition (Toulouse limit) at moderate dissipation and predicts a rate dynamics in the overdamped regime. 
Challenging problems such as the dynamical description of quantum phase transition (localization) and the numerical stability of the trace-conserving, nonlinear stochastic Liouville equation are outlined.

  8. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single-neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. 
Overall, our results provide a stability framework for data-driven PP-GLMs and shed new light on the stochastic dynamics of state-of-the-art statistical models of neuronal spiking activity. PMID:28234899
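
    A scalar caricature of this self-consistency idea (illustrative only; the paper works with integral equations over spike history under the QR approximation): if the history feedback is collapsed to a single gain c acting on the stationary rate r, fixed points of r = exp(b + c r) can be located by a sign-change scan followed by bisection.

```python
import math

def stationary_rates(b, c, tol=1e-10):
    """Locate fixed points of r = exp(b + c*r), a scalar caricature of the
    self-consistent stationary-rate equation, by scanning
    g(r) = exp(b + c*r) - r for sign changes and refining with bisection."""
    g = lambda r: math.exp(b + c * r) - r
    grid = [0.01 * i for i in range(5001)]          # r in [0, 50]
    roots = []
    for lo, hi in zip(grid, grid[1:]):
        if g(lo) == 0.0 or g(lo) * g(hi) < 0.0:
            a, d = lo, hi
            while d - a > tol:
                m = 0.5 * (a + d)
                if g(a) * g(m) <= 0.0:
                    d = m
                else:
                    a = m
            roots.append(0.5 * (a + d))
    return roots

# Net inhibitory feedback (c < 0): a single stable rate.
print(stationary_rates(0.0, -1.0))
# Mild excitatory feedback: a stable/unstable pair of fixed points, whose
# merging marks the transition to divergent (unphysiological) rates.
print(len(stationary_rates(0.0, 0.2)))
```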

  10. Large-deviation properties of Brownian motion with dry friction.

    PubMed

    Chen, Yaming; Just, Wolfram

    2014-10-01

We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we examine in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique from non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for Brownian motion with dry friction, including quantitative measures to characterize the deviation from Gaussian behavior in the asymptotic long-time limit.
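
    As a numerical companion (not part of the paper's analytical derivation), the occupation-time functional for dry-friction Brownian motion can be estimated by an Euler-Maruyama simulation and checked against the stationary Laplace density; all parameter values below are illustrative.

```python
import math, random

def dry_friction_occupation(delta=1.0, D=1.0, a=0.5, dt=1e-3, T=1000.0, seed=1):
    """Euler-Maruyama for the dry-friction Langevin equation
    dv = -delta*sign(v) dt + sqrt(2*D) dW; returns the fraction of time
    spent in |v| < a, i.e. an occupation-time functional."""
    rng = random.Random(seed)
    n = int(T / dt)
    s = math.sqrt(2.0 * D * dt)
    v, inside = 0.0, 0
    for _ in range(n):
        drift = -delta if v > 0.0 else (delta if v < 0.0 else 0.0)
        v += drift * dt + s * rng.gauss(0.0, 1.0)
        if abs(v) < a:
            inside += 1
    return inside / n

frac = dry_friction_occupation()
# The stationary density is Laplace, p(v) = (delta/(2*D)) * exp(-delta*|v|/D),
# so the long-time occupation fraction should approach 1 - exp(-delta*a/D).
print(frac, 1.0 - math.exp(-0.5))
```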

  11. Using Nonlinear Stochastic Evolutionary Game Strategy to Model an Evolutionary Biological Network of Organ Carcinogenesis Under a Natural Selection Scheme

    PubMed Central

    Chen, Bor-Sen; Tsai, Kun-Wei; Li, Cheng-Wei

    2015-01-01

Molecular biologists have long recognized carcinogenesis as an evolutionary process that involves natural selection. Cancer is driven by the somatic evolution of cell lineages. In this study, the evolution of somatic cancer cell lineages during carcinogenesis was modeled as the equilibrium point (ie, phenotype of attractor) shifting process of a nonlinear stochastic evolutionary biological network. This process is subject to intrinsic random fluctuations because of somatic genetic and epigenetic variations, as well as extrinsic disturbances because of carcinogens and stressors. In order to maintain the normal function (ie, phenotype) of an evolutionary biological network subjected to random intrinsic fluctuations and extrinsic disturbances, a network robustness scheme that incorporates natural selection needs to be developed. This can be accomplished by selecting certain genetic and epigenetic variations to modify the network structure to attenuate intrinsic fluctuations efficiently and to resist extrinsic disturbances in order to maintain the phenotype of the evolutionary biological network at an equilibrium point (attractor). However, during carcinogenesis, the remaining (or neutral) genetic and epigenetic variations accumulate, and the extrinsic disturbances become too large to maintain the normal phenotype at the desired equilibrium point for the nonlinear evolutionary biological network. Thus, the network is shifted to a cancer phenotype at a new equilibrium point that begins a new evolutionary process. In this study, the natural selection scheme of an evolutionary biological network of carcinogenesis was derived from a robust negative feedback scheme based on the nonlinear stochastic Nash game strategy. The evolvability and phenotypic robustness criteria of the evolutionary cancer network were also estimated by solving a Hamilton–Jacobi inequality-constrained optimization problem. 
The simulation revealed that the phenotypic shift of the lung cancer-associated cell network takes 54.5 years from a normal state to stage I cancer, 1.5 years from stage I to stage II cancer, and 2.5 years from stage II to stage III cancer, with a reasonable match for the statistical result of the average age of lung cancer. These results suggest that a robust negative feedback scheme, based on a stochastic evolutionary game strategy, plays a critical role in an evolutionary biological network of carcinogenesis under a natural selection scheme. PMID:26244004

  12. A stochastic method for computing hadronic matrix elements

    DOE PAGES

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...

    2014-01-24

In this study, we present a stochastic method for the calculation of baryon 3-point functions that is an alternative to the typically used sequential method, offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.
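
    The core trick behind such stochastic evaluations is the noise-vector trace estimator. A minimal illustration with Z2 noise on a small dense matrix (a stand-in for the lattice operator; sizes and entries are illustrative, not from the paper):

```python
import random

def stochastic_trace(A, n_est=2000, seed=0):
    """Hutchinson-style estimator with Z2 noise: E[eta^T A eta] = Tr(A)
    when the components of eta are independently +1 or -1."""
    rng = random.Random(seed)
    d = len(A)
    acc = 0.0
    for _ in range(n_est):
        eta = [rng.choice((-1.0, 1.0)) for _ in range(d)]
        Aeta = [sum(A[i][j] * eta[j] for j in range(d)) for i in range(d)]
        acc += sum(eta[i] * Aeta[i] for i in range(d))
    return acc / n_est

# Small dense stand-in for the lattice operator; Tr(A) = 6.0 exactly.
A = [[2.0, 0.3, 0.0],
     [0.3, 1.0, 0.1],
     [0.0, 0.1, 3.0]]
print(stochastic_trace(A))
```

    The estimator variance is set by the off-diagonal entries, which is why the signal-to-noise scaling with volume is the quantity the paper studies.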

  13. Random Walks in a One-Dimensional Lévy Random Environment

    NASA Astrophysics Data System (ADS)

    Bianchi, Alessandra; Cristadoro, Giampaolo; Lenci, Marco; Ligabò, Marilena

    2016-04-01

    We consider a generalization of a one-dimensional stochastic process known in the physical literature as Lévy-Lorentz gas. The process describes the motion of a particle on the real line in the presence of a random array of marked points, whose nearest-neighbor distances are i.i.d. and long-tailed (with finite mean but possibly infinite variance). The motion is a continuous-time, constant-speed interpolation of a symmetric random walk on the marked points. We first study the quenched random walk on the point process, proving the CLT and the convergence of all the accordingly rescaled moments. Then we derive the quenched and annealed CLTs for the continuous-time process.
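
    A hedged simulation sketch of this setup (parameter values are illustrative, not from the paper): draw Pareto-distributed nearest-neighbour gaps, fix one quenched environment, and run the symmetric random walk on the marked points.

```python
import random

def levy_lorentz_msd(alpha=1.5, n_steps=1000, n_walks=200, seed=42):
    """One quenched environment: scatterers on the line with i.i.d.
    Pareto(alpha) gaps (finite mean for alpha > 1, infinite variance for
    alpha < 2); a symmetric random walk hops between neighbouring
    scatterers. Returns the mean squared displacement after n_steps."""
    rng = random.Random(seed)
    m = n_steps + 1                      # enough scatterers on each side
    gaps = [rng.paretovariate(alpha) for _ in range(2 * m)]
    pos = [0.0]
    for g in gaps:
        pos.append(pos[-1] + g)
    origin = m
    msd = 0.0
    for _ in range(n_walks):
        k = origin
        for _ in range(n_steps):
            k += 1 if rng.random() < 0.5 else -1
        d = pos[k] - pos[origin]
        msd += d * d
    return msd / n_walks

print(levy_lorentz_msd())
```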

  14. Stochastic investigation of temperature process for climatic variability identification

    NASA Astrophysics Data System (ADS)

    Lerias, Eleutherios; Kalamioti, Anna; Dimitriadis, Panayiotis; Markonis, Yannis; Iliopoulou, Theano; Koutsoyiannis, Demetris

    2016-04-01

The temperature process is considered the most characteristic hydrometeorological process and has been thoroughly examined in the climate-change framework. We use a dataset comprising hourly temperature and dew point records to identify statistical variability, with emphasis on the most recent period. Specifically, we investigate the occurrence of mean, maximum and minimum values and estimate statistical properties such as the marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale) for various time periods. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
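
    The climacogram itself is straightforward to compute: for each averaging scale k, take the variance of the block-averaged series. A minimal sketch (white-noise input, used only to check the expected 1/k decay; real temperature records would show a slower, persistent decay):

```python
import random

def climacogram(x, scales):
    """For each scale k, average the series in non-overlapping blocks of
    length k and return the sample variance of the block averages."""
    out = {}
    for k in scales:
        blocks = [sum(x[i:i + k]) / k for i in range(0, len(x) - k + 1, k)]
        mu = sum(blocks) / len(blocks)
        out[k] = sum((b - mu) ** 2 for b in blocks) / (len(blocks) - 1)
    return out

# White noise: the climacogram decays like 1/k, so k * gamma(k) stays near 1.
# Persistent (Hurst-type) processes would show a slower decay.
rng = random.Random(0)
x = [rng.gauss(0.0, 1.0) for _ in range(100000)]
cg = climacogram(x, [1, 10, 100])
print(cg[1], 10 * cg[10], 100 * cg[100])
```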

  15. Machine learning from computer simulations with applications in rail vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Taheri, Mehdi; Ahmadian, Mehdi

    2016-05-01

The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processing data from a simulation run are used to train the stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (the suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate its behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or with models that have a large number of repeated substructures, e.g. modelling a train with a large number of railcars. The fact that the training data are acquired prior to the development of the stochastic model precludes conventional sampling plan strategies, like Latin Hypercube sampling plans, where simulations are performed using the inputs dictated by the sampling plan. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed, where the most space-filling subset of the acquired data with ? number of sample points that best describes the dynamic behaviour of the system under study is selected as the training data.
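
    One common way to pick such a space-filling subset of already-acquired data (a sketch of the general maximin idea, not necessarily the authors' exact algorithm) is greedy farthest-point selection:

```python
import random

def maximin_subset(points, n_select):
    """Greedy farthest-point selection: repeatedly add the point whose
    minimum distance to the already-selected set is largest, yielding a
    space-filling subset of previously acquired data."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    selected = [0]                       # seed with the first sample
    mind = [d2(p, points[0]) for p in points]
    while len(selected) < n_select:
        nxt = max(range(len(points)), key=lambda i: mind[i])
        selected.append(nxt)
        mind = [min(mind[i], d2(points[i], points[nxt]))
                for i in range(len(points))]
    return selected

rng = random.Random(3)
pts = [(rng.random(), rng.random()) for _ in range(500)]
idx = maximin_subset(pts, 20)
print(idx[:5])
```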

  16. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    PubMed

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty.

  17. Universal ideal behavior and macroscopic work relation of linear irreversible stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Ma, Yi-An; Qian, Hong

    2015-06-01

    We revisit the Ornstein-Uhlenbeck (OU) process as the fundamental mathematical description of linear irreversible phenomena, with fluctuations, near an equilibrium. By identifying the underlying circulating dynamics in a stationary process as the natural generalization of classical conservative mechanics, a bridge between a family of OU processes with equilibrium fluctuations and thermodynamics is established through the celebrated Helmholtz theorem. The Helmholtz theorem provides an emergent macroscopic ‘equation of state’ of the entire system, which exhibits a universal ideal thermodynamic behavior. Fluctuating macroscopic quantities are studied from the stochastic thermodynamic point of view and a non-equilibrium work relation is obtained in the macroscopic picture, which may facilitate experimental study and application of the equalities due to Jarzynski, Crooks, and Hatano and Sasa.
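
    A quick numerical check of the equilibrium fluctuations (illustrative, with unit coefficients): simulating the OU process with its exact transition kernel reproduces the stationary variance D/gamma.

```python
import math, random

def ou_sample_variance(gamma=1.0, D=1.0, dt=0.1, n=200000, seed=7):
    """Simulate dx = -gamma*x dt + sqrt(2*D) dW with the exact
    (in distribution) transition kernel and return the sample variance,
    which should approach the equilibrium value D/gamma."""
    rng = random.Random(seed)
    a = math.exp(-gamma * dt)
    s = math.sqrt(D / gamma * (1.0 - a * a))
    x, acc, acc2 = 0.0, 0.0, 0.0
    for _ in range(n):
        x = a * x + s * rng.gauss(0.0, 1.0)
        acc += x
        acc2 += x * x
    mean = acc / n
    return acc2 / n - mean * mean

print(ou_sample_variance())   # fluctuates around D/gamma = 1
```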

  18. Dynamics and Physiological Roles of Stochastic Firing Patterns Near Bifurcation Points

    NASA Astrophysics Data System (ADS)

    Jia, Bing; Gu, Huaguang

    2017-06-01

Different stochastic neural firing patterns or rhythms that appeared near polarization or depolarization resting states were observed in biological experiments on three nervous systems, and closely matched those simulated near bifurcation points between a stable equilibrium point and a limit cycle in a theoretical model with noise. The distinct dynamics of spike trains and the interspike interval histogram (ISIH) of these stochastic rhythms were identified and related to the coexisting behaviors or fixed firing frequency of four different types of bifurcations. Furthermore, noise evokes coherence resonances near bifurcation points and plays important roles in enhancing information. The stochastic rhythms corresponding to Hopf bifurcation points with fixed firing frequency exhibited a stronger coherence degree and a sharper peak in the power spectrum of the spike trains than those corresponding to saddle-node bifurcation points without fixed firing frequency. Moreover, the stochastic firing patterns changed to a depolarization resting state as the extracellular potassium concentration increased for the injured nerve fiber related to pathological pain, or as the static blood pressure level increased for the aortic depressor nerve fiber, and the firing frequency decreased, which differs from the physiological viewpoint that firing frequency increases with increasing pressure level or potassium concentration. This shows that rhythms or firing patterns can reflect pressure or ion concentration information related to pathological pain. Our results present the dynamics of stochastic firing patterns near bifurcation points, which are helpful for the identification of both the dynamics and physiological roles of complex neural firing patterns or rhythms, and of the roles of noise.

  19. Changes in assembly processes in soil bacterial communities following a wildfire disturbance.

    PubMed

    Ferrenberg, Scott; O'Neill, Sean P; Knelman, Joseph E; Todd, Bryan; Duggan, Sam; Bradley, Daniel; Robinson, Taylor; Schmidt, Steven K; Townsend, Alan R; Williams, Mark W; Cleveland, Cory C; Melbourne, Brett A; Jiang, Lin; Nemergut, Diana R

    2013-06-01

    Although recent work has shown that both deterministic and stochastic processes are important in structuring microbial communities, the factors that affect the relative contributions of niche and neutral processes are poorly understood. The macrobiological literature indicates that ecological disturbances can influence assembly processes. Thus, we sampled bacterial communities at 4 and 16 weeks following a wildfire and used null deviation analysis to examine the role that time since disturbance has in community assembly. Fire dramatically altered bacterial community structure and diversity as well as soil chemistry for both time-points. Community structure shifted between 4 and 16 weeks for both burned and unburned communities. Community assembly in burned sites 4 weeks after fire was significantly more stochastic than in unburned sites. After 16 weeks, however, burned communities were significantly less stochastic than unburned communities. Thus, we propose a three-phase model featuring shifts in the relative importance of niche and neutral processes as a function of time since disturbance. Because neutral processes are characterized by a decoupling between environmental parameters and community structure, we hypothesize that a better understanding of community assembly may be important in determining where and when detailed studies of community composition are valuable for predicting ecosystem function.

  1. Stochastic modeling of neurobiological time series: Power, coherence, Granger causality, and separation of evoked responses from ongoing activity

    NASA Astrophysics Data System (ADS)

    Chen, Yonghong; Bressler, Steven L.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Mingzhou

    2006-06-01

    In this article we consider the stochastic modeling of neurobiological time series from cognitive experiments. Our starting point is the variable-signal-plus-ongoing-activity model. From this model a differentially variable component analysis strategy is developed from a Bayesian perspective to estimate event-related signals on a single trial basis. After subtracting out the event-related signal from recorded single trial time series, the residual ongoing activity is treated as a piecewise stationary stochastic process and analyzed by an adaptive multivariate autoregressive modeling strategy which yields power, coherence, and Granger causality spectra. Results from applying these methods to local field potential recordings from monkeys performing cognitive tasks are presented.
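
    The autoregressive step can be sketched as follows (a minimal univariate Yule-Walker fit on synthetic data; the paper uses adaptive multivariate AR models, so this is only the scalar skeleton, and the fitted coefficients also determine the parametric power spectrum):

```python
import random

def yule_walker(x, p):
    """Estimate AR(p) coefficients from sample autocovariances by solving
    the Yule-Walker equations with Gauss-Jordan elimination."""
    n = len(x)
    mu = sum(x) / n
    x = [v - mu for v in x]
    r = [sum(x[t] * x[t + k] for t in range(n - k)) / n for k in range(p + 1)]
    # Augmented Toeplitz system: R a = [r1, ..., rp], R[i][j] = r[|i-j|].
    A = [[r[abs(i - j)] for j in range(p)] + [r[i + 1]] for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda i: abs(A[i][c]))
        A[c], A[piv] = A[piv], A[c]
        for i in range(p):
            if i != c:
                f = A[i][c] / A[c][c]
                A[i] = [A[i][j] - f * A[c][j] for j in range(p + 1)]
    return [A[i][p] / A[i][i] for i in range(p)]

# Synthetic AR(2) data: x_t = 0.6*x_{t-1} - 0.3*x_{t-2} + white noise.
rng = random.Random(5)
x, prev1, prev2 = [], 0.0, 0.0
for _ in range(50000):
    v = 0.6 * prev1 - 0.3 * prev2 + rng.gauss(0.0, 1.0)
    prev2, prev1 = prev1, v
    x.append(v)
a = yule_walker(x, 2)
print(a)   # close to [0.6, -0.3]
```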

  2. BOOK REVIEW: Statistical Mechanics of Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Cambon, C.

    2004-10-01

This is a handbook for a computational approach to reacting flows, including background material on statistical mechanics. In this sense, the title is somewhat misleading with respect to other books dedicated to the statistical theory of turbulence (e.g. Monin and Yaglom). In the present book, emphasis is placed on modelling (engineering closures) for computational fluid dynamics. The probabilistic (pdf) approach is applied to the local scalar field, motivated first by the nonlinearity of chemical source terms which appear in the transport equations of reacting species. The probabilistic and stochastic approaches are also used for the velocity field and particle position; nevertheless they are essentially limited to Lagrangian models for a local vector, with only single-point statistics, as for the scalar. Accordingly, conventional techniques, such as single-point closures for RANS (Reynolds-averaged Navier-Stokes) and subgrid-scale models for LES (large-eddy simulations), are described and in some cases reformulated using underlying Langevin models and filtered pdfs. Even if the theoretical approach to turbulence is not discussed in general, the essentials of probabilistic and stochastic-processes methods are described, with a useful reminder concerning statistics at the molecular level. The book comprises 7 chapters. Chapter 1 briefly states the goals and contents, with a very clear synoptic scheme on page 2. Chapter 2 presents definitions and examples of pdfs and related statistical moments. Chapter 3 deals with stochastic processes, pdf transport equations, from Kramers-Moyal to Fokker-Planck (for Markov processes), and moment equations. Stochastic differential equations are introduced and their relationship to pdfs described. This chapter ends with a discussion of stochastic modelling. The equations of fluid mechanics and thermodynamics are addressed in chapter 4. 
Classical conservation equations (mass, velocity, internal energy) are derived from their counterparts at the molecular level. In addition, equations are given for multicomponent reacting systems. The chapter ends with miscellaneous topics, including DNS, (idea of) the energy cascade, and RANS. Chapter 5 is devoted to stochastic models for the large scales of turbulence. Langevin-type models for velocity (and particle position) are presented, and their various consequences for second-order single-point correlations (Reynolds stress components, Kolmogorov constant) are discussed. These models are then presented for the scalar. The chapter ends with compressible high-speed flows and various models, ranging from k-epsilon to hybrid RANS-pdf. Stochastic models for small-scale turbulence are addressed in chapter 6. These models are based on the concept of a filter density function (FDF) for the scalar, and a more conventional SGS (sub-grid-scale model) for the velocity in LES. The final chapter, chapter 7, is entitled `The unification of turbulence models' and aims at reconciling large-scale and small-scale modelling. This book offers a timely survey of techniques in modern computational fluid mechanics for turbulent flows with reacting scalars. It should be of interest to engineers, while the discussion of the underlying tools, namely pdfs, stochastic and statistical equations should also be attractive to applied mathematicians and physicists. The book's emphasis on local pdfs and stochastic Langevin models gives a consistent structure to the book and allows the author to cover almost the whole spectrum of practical modelling in turbulent CFD. On the other hand, one might regret that non-local issues are not mentioned explicitly, or even briefly. These problems range from the presence of pressure-strain correlations in the Reynolds stress transport equations to the presence of two-point pdfs in the single-point pdf equation derived from the Navier-Stokes equations. 
(One may recall that, even without scalar transport, a general closure problem for turbulence statistics results from both non-linearity and non-locality of Navier-Stokes equations, the latter coming from, e.g., the nonlocal relationship of velocity and pressure in the quasi-incompressible case. These two aspects are often intricately linked. It is well known that non-linearity alone is not responsible for the `problem', as evidenced by 1D turbulence without pressure (`Burgulence' from the Burgers equation) and probably 3D (cosmological gas). A local description in terms of pdf for the velocity can resolve the `non-linear' problem, which instead yields an infinite hierarchy of equations in terms of moments. On the other hand, non-locality yields a hierarchy of unclosed equations, with the single-point pdf equation for velocity derived from NS incompressible equations involving a two-point pdf, and so on. The general relationship was given by Lundgren (1967, Phys. Fluids 10 (5), 969-975), with the equation for pdf at n points involving the pdf at n+1 points. The nonlocal problem appears in various statistical models which are not discussed in the book. The simplest example is full RST or ASM models, in which the closure of pressure-strain correlations is pivotal (their counterpart ought to be identified and discussed in equations (5-21) and the following ones). The book does not address more sophisticated non-local approaches, such as two-point (or spectral) non-linear closure theories and models, `rapid distortion theory' for linear regimes, not to mention scaling and intermittency based on two-point structure functions, etc. The book sometimes mixes theoretical modelling and pure empirical relationships, the empirical character coming from the lack of a nonlocal (two-point) approach.) 
In short, the book is orientated more towards applications than towards turbulence theory; it is written clearly and concisely and should be useful to a large community, interested either in the underlying stochastic formalism or in CFD applications.

  3. Optical EVPA rotations in blazars: testing a stochastic variability model with RoboPol data

    NASA Astrophysics Data System (ADS)

    Kiehlmann, S.; Blinov, D.; Pearson, T. J.; Liodakis, I.

    2017-12-01

    We identify rotations of the polarization angle in a sample of blazars observed for three seasons with the RoboPol instrument. A simplistic stochastic variability model is tested against this sample of rotation events. The model is capable of producing samples of rotations with parameters similar to the observed ones, but fails to reproduce the polarization fraction at the same time. Even though we can neither accept nor conclusively reject the model, we point out various aspects of the observations that are fully consistent with a random walk process.
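
    A minimal version of such a random-walk polarization model (all parameters hypothetical, for illustration only): sum the Stokes parameters of many emission cells whose polarization angles diffuse independently, and track the unwrapped EVPA.

```python
import math, random

def evpa_swing(n_cells=10, n_steps=300, step_sigma=0.2, seed=11):
    """Sum Stokes q, u over n_cells cells whose polarization angles follow
    independent Gaussian random walks; the EVPA is 0.5*atan2(u, q),
    unwrapped across the +/-90 degree ambiguity. Returns the total EVPA
    swing in degrees over the run."""
    rng = random.Random(seed)
    ang = [rng.uniform(0.0, math.pi) for _ in range(n_cells)]
    evpa = []
    for _ in range(n_steps):
        ang = [a + rng.gauss(0.0, step_sigma) for a in ang]
        q = sum(math.cos(2.0 * a) for a in ang)
        u = sum(math.sin(2.0 * a) for a in ang)
        chi = 0.5 * math.atan2(u, q)
        if evpa:
            # EVPA is defined modulo 180 deg; pick the branch nearest the
            # previous value so rotations accumulate continuously.
            while chi - evpa[-1] > math.pi / 2:
                chi -= math.pi
            while chi - evpa[-1] < -math.pi / 2:
                chi += math.pi
        evpa.append(chi)
    return math.degrees(max(evpa) - min(evpa))

print(evpa_swing())
```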

  4. Stochastic optimal control of ultradiffusion processes with application to dynamic portfolio management

    NASA Astrophysics Data System (ADS)

    Marcozzi, Michael D.

    2008-12-01

    We consider theoretical and approximation aspects of the stochastic optimal control of ultradiffusion processes in the context of a prototype model for the selling price of a European call option. Within a continuous-time framework, the dynamic management of a portfolio of assets is effected through continuous or point control, activation costs, and phase delay. The performance index is derived from the unique weak variational solution to the ultraparabolic Hamilton-Jacobi equation; the value function is the optimal realization of the performance index relative to all feasible portfolios. An approximation procedure based upon a temporal box scheme/finite element method is analyzed; numerical examples are presented in order to demonstrate the viability of the approach.

  5. Numerical simulations of piecewise deterministic Markov processes with an application to the stochastic Hodgkin-Huxley model.

    PubMed

    Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan

    2016-12-28

The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of H(t), obtained as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
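
    The jump-time sampling problem can be sketched in its plainest form (simple Euler hazard integration rather than the paper's piecewise-linear log-survival fit; the rate and flow functions below are hypothetical):

```python
import math, random

def next_jump(v0, lam, flow, dt=1e-4, rng=random):
    """Sample the next jump of a PDMP: integrate the cumulative hazard
    H(t) = int lam(V(s)) ds along the deterministic flow dV/dt = flow(V)
    until H exceeds -log(U), U uniform; returns the jump time and the
    state there. Plain Euler stepping, for illustration only."""
    target = -math.log(rng.random())
    t, v, H = 0.0, v0, 0.0
    while H < target:
        H += lam(v) * dt
        v += flow(v) * dt
        t += dt
    return t, v

random.seed(2)
# Hypothetical voltage-dependent rate and relaxation flow:
t, v = next_jump(0.0, lambda v: 1.0 + v * v, lambda v: 1.0 - v)
print(t, v)
```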

  6. Asymmetric and Stochastic Behavior in Magnetic Vortices Studied by Soft X-ray Microscopy

    NASA Astrophysics Data System (ADS)

    Im, Mi-Young

Asymmetry and stochasticity in spin processes are not only long-standing fundamental issues but also highly relevant to technological applications of nanomagnetic structures to memory and storage nanodevices. Those nontrivial phenomena have been studied by direct imaging of spin structures in magnetic vortices utilizing magnetic transmission soft x-ray microscopy (BL6.1.2 at ALS). Magnetic vortices have attracted enormous scientific interest due to their fascinating spin structures consisting of circularity rotating clockwise (c = + 1) or counter-clockwise (c = -1) and polarity pointing either up (p = + 1) or down (p = -1). We observed a symmetry breaking in the formation process of vortex structures in circular permalloy (Ni80Fe20) disks. The generation rates of two different vortex groups with the signature of cp = + 1 and cp = -1 are completely asymmetric. The asymmetric nature was interpreted to be triggered by ``intrinsic'' Dzyaloshinskii-Moriya interaction (DMI) arising from the spin-orbit coupling due to the lack of inversion symmetry near the disk surface and ``extrinsic'' factors such as roughness and defects. We also investigated the stochastic behavior of vortex creation in the arrays of asymmetric disks. The stochasticity was found to be very sensitive to the geometry of disk arrays, particularly the interdisk distance. The experimentally observed phenomenon could not be explained by the thermal fluctuation effect, which has been considered the main reason for stochastic behavior in spin processes. We demonstrated for the first time that the ultrafast dynamics at the early stage of vortex creation, which has a character of classical chaos, significantly affects the stochastic nature observed at the steady state in asymmetric disks. 
This work provided a new perspective on dynamics as a critical factor contributing to the stochasticity in spin processes, as well as the possibility of controlling the intrinsic stochastic nature by optimizing the design of asymmetric disk arrays. This work was supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and by the Leading Foreign Research Institute Recruitment Program through the NRF.

  7. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    PubMed

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing temperature points on MEMS-based inertial sensor random error. We collect static data at different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to estimate the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using an Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain an optimal navigation solution for MEMS-based INS/GPS integration.
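    The core of an AR-based Gauss-Markov fit is recovering the correlation time from the first-order autoregressive coefficient. A minimal sketch, using synthetic data in place of the sensor record (the true correlation time, sampling interval, and noise level below are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "static sensor" record: a first-order Gauss-Markov process
# x_{k+1} = phi * x_k + w_k, standing in for the collected data.
dt, tau_true = 0.01, 0.5
phi_true = np.exp(-dt / tau_true)
n = 100_000
w = rng.normal(0.0, 0.02, n)
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = phi_true * x[k] + w[k]

# AR(1) fit by least squares, then recover the GM correlation time
# from tau = -dt / ln(phi).
phi_hat = float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))
tau_hat = -dt / np.log(phi_hat)
```

    Repeating this fit on records collected at different chamber temperatures is what reveals the temperature dependence of the correlation times.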

  8. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation with a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
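    The domain decomposition idea can be illustrated on the smallest possible case: an alternating Schwarz iteration with two overlapping subdomains for a 1D deterministic Poisson problem. This is a sketch of the general technique only; the paper's solvers are parallel, preconditioned, and operate on the coupled spatial-stochastic system, none of which is reproduced here.

```python
import numpy as np

# 1D Poisson -u'' = 1 on (0,1), u(0) = u(1) = 0, exact u(x) = x(1-x)/2.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)

def solve_dirichlet(u, lo, hi):
    """Direct tridiagonal solve on the subdomain (lo, hi), using the
    current values u[lo] and u[hi] as Dirichlet data."""
    m = hi - lo - 1
    A = (np.diag(2.0 * np.ones(m))
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1))
    b = h * h * f[lo + 1:hi].copy()
    b[0] += u[lo]
    b[-1] += u[hi]
    u[lo + 1:hi] = np.linalg.solve(A, b)

u = np.zeros(n)
mid, ov = n // 2, 10           # two subdomains overlapping by 10 points
for _ in range(50):            # alternating (multiplicative) Schwarz sweeps
    solve_dirichlet(u, 0, mid + ov)
    solve_dirichlet(u, mid - ov, n - 1)

err = float(np.max(np.abs(u - x * (1.0 - x) / 2.0)))
```

    Because the exact solution is quadratic, the second-order finite difference scheme is exact here, and the Schwarz iteration converges geometrically, so `err` drops to round-off level.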

  9. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation with a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  10. The probability density function (PDF) of Lagrangian Turbulence

    NASA Astrophysics Data System (ADS)

    Birnir, B.

    2012-12-01

    The statistical theory of Lagrangian turbulence is derived from the stochastic Navier-Stokes equation. Assuming that the noise in fully developed turbulence is a generic noise determined by the general theorems of probability, the central limit theorem and the large deviation principle, we are able to formulate and solve the Kolmogorov-Hopf equation for the invariant measure of the stochastic Navier-Stokes equations. The intermittency corrections to the scaling exponents of the structure functions require a multiplicative noise (multiplying the fluid velocity) in the stochastic Navier-Stokes equation. We let this multiplicative noise consist of a simple (Poisson) jump process and then show how the Feynman-Kac formula produces the log-Poissonian processes found by She and Leveque, Waymire and Dubrulle. These log-Poissonian processes give the intermittency corrections that agree with modern direct Navier-Stokes simulations (DNS) and experiments. The probability density function (PDF) plays a key role when direct Navier-Stokes simulations or experimental results are compared to theory. The statistical theory of turbulence, including the scaling of the structure functions, is determined by the invariant measure of the Navier-Stokes equation, and the PDFs for the various statistics (one-point, two-point, N-point) can be obtained by taking the trace of the corresponding invariant measures. Hopf derived in 1952 a functional equation for the characteristic function (Fourier transform) of the invariant measure. In distinction to the nonlinear Navier-Stokes equation, this is a linear functional differential equation. The PDFs obtained from the invariant measures for the velocity differences (two-point statistics) are shown to be the four-parameter generalized hyperbolic distributions found by Barndorff-Nielsen. These PDFs have heavy tails and a convex peak at the origin. 
A suitable projection of the Kolmogorov-Hopf equation is the differential equation determining the generalized hyperbolic distributions. We then compare these PDFs with DNS results and experimental data.
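    The log-Poisson intermittency correction mentioned above has a well-known closed form for the structure-function exponents, the She-Leveque formula. A small sketch (the formula is standard; it is quoted here for orientation, not taken from this abstract):

```python
from math import isclose

def zeta_she_leveque(p):
    """She-Leveque structure-function exponents,
    zeta_p = p/9 + 2*(1 - (2/3)**(p/3)),
    the log-Poisson prediction for fully developed turbulence."""
    return p / 9.0 + 2.0 * (1.0 - (2.0 / 3.0) ** (p / 3.0))

# Kolmogorov 1941 (no intermittency) would give zeta_p = p/3; the
# log-Poisson correction leaves zeta_3 = 1 exact and bends the curve
# below p/3 for higher orders.
zeta3 = zeta_she_leveque(3)
zeta6 = zeta_she_leveque(6)
```

    The exactness of zeta_3 = 1 (Kolmogorov's 4/5 law) and zeta_6 < 2 are the two standard checks against the K41 prediction.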

  11. A guide to differences between stochastic point-source and stochastic finite-fault simulations

    USGS Publications Warehouse

    Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.

    2009-01-01

    Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. 
These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.
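    The stochastic point-source method that SMSIM implements can be sketched in its simplest form: windowed Gaussian noise shaped in the frequency domain by a target amplitude spectrum. This is a bare-bones illustration of the general technique, not SMSIM itself; the duration window, corner frequency, and omission of all scaling constants and path effects are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Windowed Gaussian noise, to be shaped by a Brune omega-squared
# acceleration spectrum with an assumed corner frequency f0.
dt, n = 0.01, 4096
t = np.arange(n) * dt
noise = rng.normal(size=n) * np.exp(-t / 5.0)      # crude duration window

f = np.fft.rfftfreq(n, dt)
f0 = 1.0                                            # assumed corner frequency (Hz)
target = f**2 / (1.0 + (f / f0) ** 2)               # omega-squared shape

spec = np.fft.rfft(noise)
spec[1:] *= target[1:] / np.abs(spec[1:])           # impose amplitude, keep phase
acc = np.fft.irfft(spec, n)
pga = float(np.max(np.abs(acc)))
```

    The finite-fault variant (EXSIM) applies essentially this recipe to each subsource and sums the delayed contributions, which is where the duration and normalization differences discussed above enter.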

  12. Modeling the lake eutrophication stochastic ecosystem and the research of its stability.

    PubMed

    Wang, Bo; Qi, Qianqian

    2018-06-01

    In reality, the lake system is disturbed by stochastic factors, both external and internal. By adding additive noise and multiplicative noise to the right-hand side of the model equation, an additive stochastic model and a multiplicative stochastic model are established, respectively, in order to reduce model errors induced by the absence of some physical processes. For both kinds of stochastic ecosystems, the authors studied the bifurcation characteristics with the FPK equation and the Lyapunov exponent method, based on the Stratonovich-Khasminskii stochastic averaging principle. Results show that, for the additive stochastic model, when the control parameter (i.e., the nutrient loading rate) falls into the interval [0.388644, 0.66003825], the ecosystem is bistable, and the additive noise intensity cannot make the bifurcation point drift. In the region of bistability, the external stochastic disturbance, one of the main triggers of lake eutrophication, may make the ecosystem unstable and induce a transition. When the control parameter falls into the intervals (0, 0.388644) and (0.66003825, 1.0), there exists only one stable equilibrium state, and the additive noise intensity cannot change it. The multiplicative stochastic model exhibits more complex bifurcation behavior, and its bistable structure can be broken by the multiplicative noise. The multiplicative noise also reduces the extent of the bistable region; ultimately, the bistable region vanishes for sufficiently large noise. Moreover, both the nutrient loading rate and the multiplicative noise can cause a regime shift in the ecosystem. On the other hand, for the two kinds of stochastic ecosystems, the authors also discussed the evolution of the ecological variable in detail by using a four-stage Runge-Kutta method of strong order γ = 1.5. 
The numerical method was found to be capable of effectively explaining the regime-shift theory and agreed with the preceding analysis. These conclusions also confirm the two paths for the system to move from one stable state to another proposed by Beisner et al. [3], which may help in understanding the occurrence mechanism of lake eutrophication from the viewpoint of stochastic modeling and mathematical analysis. Copyright © 2018 Elsevier Inc. All rights reserved.
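    Noise-induced transitions in a bistable system can be demonstrated with a few lines of Euler-Maruyama integration. The double-well drift below is a generic stand-in, not the paper's eutrophication model, and the noise intensity is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama for dx = (x - x**3) dt + sigma dW: a generic double
# well with stable states at x = +1 ("clear") and x = -1 ("turbid").
dt, n_steps, sigma = 0.01, 200_000, 0.5
x = np.empty(n_steps)
x[0] = 1.0                                   # start in one well
for k in range(n_steps - 1):
    drift = x[k] - x[k] ** 3
    x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.normal()

# A regime shift shows up as a sign change of the trajectory.
transitions = int(np.sum(np.abs(np.diff(np.sign(x))) == 2))
```

    For small sigma the trajectory stays in one well; at this noise level Kramers-type escapes occur, which is the additive-noise transition mechanism the abstract describes.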

  13. Minimized state complexity of quantum-encoded cryptic processes

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Mahoney, John R.; Aghamohammadi, Cina; Crutchfield, James P.

    2016-05-01

    The predictive information required for proper trajectory sampling of a stochastic process can be more efficiently transmitted via a quantum channel than a classical one. This recent discovery allows quantum information processing to drastically reduce the memory necessary to simulate complex classical stochastic processes. It also points to a new perspective on the intrinsic complexity that nature must employ in generating the processes we observe. The quantum advantage increases with codeword length: the length of process sequences used in constructing the quantum communication scheme. In analogy with the classical complexity measure, statistical complexity, we use this reduced communication cost as an entropic measure of state complexity in the quantum representation. Previously difficult to compute, the quantum advantage is expressed here in closed form using spectral decomposition. This allows for efficient numerical computation of the quantum-reduced state complexity at all encoding lengths, including infinite. Additionally, it makes clear how finite-codeword reduction in state complexity is controlled by the classical process's cryptic order, and it allows asymptotic analysis of infinite-cryptic-order processes.

  14. Backward-stochastic-differential-equation approach to modeling of gene expression

    NASA Astrophysics Data System (ADS)

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F.; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).
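    The forward benchmark generator referenced above, Gillespie's SSA, is easy to sketch for the simplest gene-expression model. The birth-death network and rate constants below are toy stand-ins for the paper's predefined gene network models.

```python
import numpy as np

rng = np.random.default_rng(4)

def gillespie_birth_death(k_prod=10.0, k_deg=1.0, t_end=20.0, n0=0):
    """Gillespie SSA for constitutive protein production at rate k_prod
    and first-order degradation at rate k_deg * n (toy model)."""
    t, n = 0.0, n0
    while True:
        a1, a2 = k_prod, k_deg * n
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)       # time to next reaction
        if t > t_end:
            return n
        if rng.random() * a0 < a1:           # choose reaction by propensity
            n += 1
        else:
            n -= 1

finals = np.array([gillespie_birth_death() for _ in range(500)])
mean_n = float(finals.mean())    # stationary mean is k_prod / k_deg = 10
```

    In the BSDE setting, an ensemble of such forward runs supplies the end-point ("final") condition from which the protein-level distribution is inferred backward in time.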

  15. Backward-stochastic-differential-equation approach to modeling of gene expression.

    PubMed

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).

  16. Stochastic DT-MRI connectivity mapping on the GPU.

    PubMed

    McGraw, Tim; Nadar, Mariappan

    2007-01-01

    We present a method for stochastic fiber tract mapping from diffusion tensor MRI (DT-MRI) implemented on graphics hardware. From the simulated fibers we compute a connectivity map that gives an indication of the probability that two points in the dataset are connected by a neuronal fiber path. A Bayesian formulation of the fiber model is given and it is shown that the inversion method can be used to construct plausible connectivity. An implementation of this fiber model on the graphics processing unit (GPU) is presented. Since the fiber paths can be stochastically generated independently of one another, the algorithm is highly parallelizable. This allows us to exploit the data-parallel nature of the GPU fragment processors. We also present a framework for the connectivity computation on the GPU. Our implementation allows the user to interactively select regions of interest and observe the evolving connectivity results during computation. Results are presented from the stochastic generation of over 250,000 fiber steps per iteration at interactive frame rates on consumer-grade graphics hardware.
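    The per-fiber computation is an independent stochastic walk, which is exactly why it maps well onto GPU fragment processors. A CPU sketch of one such walk, using a synthetic uniform tensor field rather than real DT-MRI data (tensor, step size, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic diffusion tensor: strong diffusion along x, weak elsewhere,
# i.e. fibers aligned with the x axis.
D = np.diag([1.0, 0.1, 0.1])
evals, evecs = np.linalg.eigh(D)
principal = evecs[:, np.argmax(evals)]        # fiber direction estimate
if principal[0] < 0:
    principal = -principal                    # eigenvector sign is arbitrary

def track(n_steps=100, step=0.5, sigma=0.2):
    """One stochastic fiber: follow the principal eigenvector with
    Gaussian directional perturbation at each step."""
    pos = np.zeros(3)
    for _ in range(n_steps):
        d = principal + sigma * rng.normal(size=3)
        pos = pos + step * d / np.linalg.norm(d)
    return pos

ends = np.array([track() for _ in range(200)])
mean_x = float(ends[:, 0].mean())             # walks drift along the fiber
```

    Accumulating visit counts from many such walks into a volume gives the connectivity map; since walks are independent, they parallelize trivially.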

  17. Schrödinger problem, Lévy processes, and noise in relativistic quantum mechanics

    NASA Astrophysics Data System (ADS)

    Garbaczewski, Piotr; Klauder, John R.; Olkiewicz, Robert

    1995-05-01

    The main purpose of the paper is an essentially probabilistic analysis of relativistic quantum mechanics. It is based on the assumption that whenever probability distributions arise, there exists a stochastic process that is either responsible for the temporal evolution of a given measure or preserves the measure in the stationary case. Our departure point is the so-called Schrödinger problem of probabilistic evolution, which provides for a unique Markov stochastic interpolation between any given pair of boundary probability densities for a process covering a fixed, finite duration of time, provided we have decided a priori what kind of primordial dynamical semigroup transition mechanism is involved. In the nonrelativistic theory, including quantum mechanics, Feynman-Kac-like kernels are the building blocks for suitable transition probability densities of the process. In the standard "free" case (Feynman-Kac potential equal to zero) the familiar Wiener noise is recovered. In the framework of the Schrödinger problem, the "free noise" can also be extended to any infinitely divisible probability law, as covered by the Lévy-Khintchine formula. Since the relativistic Hamiltonians |∇| and √(-Δ + m²) - m are known to generate such laws, we focus on them for the analysis of probabilistic phenomena, which are shown to be associated with the relativistic wave (D'Alembert) and matter-wave (Klein-Gordon) equations, respectively. We show that such stochastic processes exist and are spatial jump processes. In general, in the presence of external potentials, they do not share the Markov property, except for stationary situations. A concrete example of the pseudodifferential Cauchy-Schrödinger evolution is analyzed in detail. The relativistic covariance of related wave equations is exploited to demonstrate how the associated stochastic jump processes comply with the principles of special relativity.
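    For orientation, the Lévy-Khintchine representation invoked above can be written in its standard form (notation is ours, one common sign convention among several):

```latex
\mathbb{E}\!\left[e^{ipX_t}\right] = e^{-t\,\psi(p)},\qquad
\psi(p) = -iap + \tfrac{\sigma^2}{2}\,p^2
          + \int_{\mathbb{R}\setminus\{0\}}
            \left(1 - e^{ipx} + ipx\,\mathbf{1}_{\{|x|<1\}}\right)\nu(dx),
```

    where ν is the Lévy (jump) measure. The relativistic Hamiltonian √(-Δ + m²) - m corresponds to the characteristic exponent ψ(p) = √(p² + m²) - m, a pure-jump case, which is why the associated processes are spatial jump processes.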

  18. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered a promising approach to forecasting hydrological processes. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, whereas the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, 9 of which are ML methods. 12 simulation experiments are performed, each using 2000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is no uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. 
Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts compared to simpler methods. It is pointed out that the ML methods do not differ dramatically from the stochastic methods, while it is interesting that the NN, RF and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. It should be noted that, although this study focuses on hydrological processes, the results are of general scientific interest. Another important point in this study is the use of several methods and metrics. Using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider that the proposed methodology is appropriate for the evaluation of forecasting methods.
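    The experimental design, fit on the first 300 observations and forecast the last 10, can be reproduced in miniature. This sketch uses one stochastic method (a least-squares AR(1) fit) and one naive benchmark on a single simulated series; the AR coefficient and the restriction to two methods are illustrative assumptions, far from the paper's 20 methods and 18 metrics.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate an AR(1) series of 310 observations, as in the experiments.
phi = 0.7
n = 310
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = phi * x[k] + rng.normal()

fit, test = x[:300], x[300:]
phi_hat = float(np.dot(fit[:-1], fit[1:]) / np.dot(fit[:-1], fit[:-1]))

# Multi-step AR(1) forecast decays geometrically toward the mean (0);
# the naive benchmark repeats the last fitted value.
horizon = np.arange(1, 11)
ar_forecast = phi_hat ** horizon * fit[-1]
naive_forecast = np.full(10, fit[-1])

rmse_ar = float(np.sqrt(np.mean((test - ar_forecast) ** 2)))
rmse_naive = float(np.sqrt(np.mean((test - naive_forecast) ** 2)))
```

    On any single series either method can win, which is precisely the paper's point: rankings only stabilize over thousands of simulated series and multiple metrics.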

  19. Temperature variation effects on stochastic characteristics for low-cost MEMS-based inertial sensor error

    NASA Astrophysics Data System (ADS)

    El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.

    2007-11-01

    We examine the effect of varying the temperature points on the noise models of MEMS inertial sensors using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random drift error as a function of averaging time. LSSA is an alternative to the classical Fourier methods and has been applied successfully by a number of researchers to the study of the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely the MotionPakII and the Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points, and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA, and that the sensors' stochastic model parameters are temperature dependent. In addition, a Kaiser-window FIR low-pass filter is used to investigate the effect of the de-noising stage on the stochastic model. It is shown that the stochastic model also depends on the chosen cut-off frequency.
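    The Allan variance computation itself is compact: average the signal over clusters of increasing length and look at the mean-square difference of adjacent cluster averages. A minimal sketch on synthetic white noise (standing in for the static sensor records):

```python
import numpy as np

rng = np.random.default_rng(7)

def allan_variance(y, m):
    """Overlapping Allan variance of rate samples y at cluster size m."""
    means = np.convolve(y, np.ones(m) / m, mode="valid")  # cluster averages
    d = means[m:] - means[:-m]          # adjacent, non-overlapping clusters
    return 0.5 * float(np.mean(d ** 2))

# For white noise the Allan variance falls off as 1/m, i.e. the Allan
# deviation has the classic -1/2 slope on a log-log plot.
y = rng.normal(size=100_000)
avar_1 = allan_variance(y, 1)
avar_100 = allan_variance(y, 100)
ratio = avar_1 / avar_100               # expect roughly 100 for white noise
```

    In practice the log-log Allan deviation curve is read off segment by segment (white noise, bias instability, rate random walk), and it is those segment parameters that turn out to be temperature dependent.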

  20. Marked point process for modelling seismic activity (case study in Sumatra and Java)

    NASA Astrophysics Data System (ADS)

    Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.

    2018-05-01

    An earthquake is a natural phenomenon that is random and irregular in space and time. The occurrence of earthquakes at a given location is still difficult to forecast, so the development of earthquake forecasting methodology continues, from both the seismological and the stochastic points of view. To describe such phenomena, random in both space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a time sequence, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to that point. This study aims to model a marked point process indexed by time on earthquake data from Sumatra Island and Java Island. The model can be used to analyse seismic activity through its intensity function, conditioned on the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with a magnitude threshold of 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The parameter estimates show that seismic activity in Sumatra Island is greater than in Java Island.
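    A conditional intensity of the kind described, where each past event raises the current rate according to its mark, can be sketched with a Hawkes/ETAS-like form. All parameter values below are illustrative, not the paper's maximum likelihood estimates.

```python
import numpy as np

# lambda(t | history) = mu + sum over past events of
#   kappa * exp(a * (m_i - m0)) * beta * exp(-beta * (t - t_i)),
# i.e. a background rate plus mark-amplified, exponentially decaying
# contributions (illustrative parameters).
mu, kappa, a, beta, m0 = 0.1, 0.3, 1.0, 1.5, 5.0

def intensity(t, times, mags):
    times = np.asarray(times, dtype=float)
    mags = np.asarray(mags, dtype=float)
    past = times < t
    contrib = (kappa * np.exp(a * (mags[past] - m0))
               * beta * np.exp(-beta * (t - times[past])))
    return mu + float(contrib.sum())

# Rate just after a magnitude-6 event exceeds the background rate mu,
# and decays back toward mu as time passes.
lam_after = intensity(10.01, [10.0], [6.0])
lam_late = intensity(20.0, [10.0], [6.0])
```

    Fitting such a model means maximizing the point-process log-likelihood of the observed catalogue over (mu, kappa, a, beta), which is the estimation step the abstract refers to.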

  1. Universal statistics of terminal dynamics before collapse

    NASA Astrophysics Data System (ADS)

    Lenner, Nicolas; Eule, Stephan; Wolf, Fred

    Recent developments in biology have drastically increased both the precision and the amount of generated data, allowing a switch from pure mean-value characterization of the process under consideration to an analysis of the whole ensemble, exploiting the stochastic nature of biology. We focus on the general class of non-equilibrium processes with distinguished terminal points, as found in cell fate decisions, checkpoints or cognitive neuroscience. Aligning the data to a terminal point (e.g., represented as an absorbing boundary) allows us to devise a general methodology to characterize and reverse engineer the terminating history. Using a small-noise approximation, we derive the mean, variance and covariance of the aligned data for general finite-time singularities.
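    Terminal-point alignment can be illustrated with a drift-diffusion toy model: simulate paths to an absorbing boundary and take ensemble statistics backwards from each path's own hitting time. The dynamics and parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(8)

# Drift-diffusion paths absorbed at a barrier (Euler-Maruyama).
dt, drift, sigma, barrier = 0.01, 1.0, 0.3, 1.0

def path_to_boundary():
    xs = [0.0]
    while xs[-1] < barrier:
        xs.append(xs[-1] + drift * dt + sigma * np.sqrt(dt) * rng.normal())
    return np.array(xs)

paths = [path_to_boundary() for _ in range(300)]

# Align at the terminal (absorption) point: look a fixed number of
# steps *before* each path's own hitting time.
k_back = 20
aligned = np.array([p[-1 - k_back] for p in paths if len(p) > k_back])
mean_before = float(aligned.mean())   # near barrier - drift * k_back * dt
```

    Repeating this for every lag gives the aligned mean, variance, and covariance profiles as functions of time-to-terminal, the objects the abstract computes analytically in the small-noise limit.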

  2. Two new algorithms to combine kriging with stochastic modelling

    NASA Astrophysics Data System (ADS)

    Venema, Victor; Lindau, Ralf; Varnai, Tamas; Simmer, Clemens

    2010-05-01

    Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated driven by such a kriged field. Stochastic modelling aims at reproducing the statistical structure of the data in space and time. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. While stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. This requires the use of so-called constrained stochastic models. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately. In addition, the correlations within the cloud field are important, especially because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. Up to now, however, we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. 
The nudging strength is gradually reduced to zero during successive iterations. A second algorithm, which we call step-wise kriging, pursues the same aim. Each time the kriging algorithm estimates a value, noise is added to it, after which this new point is accounted for in the estimation of all the later points. In this way, the autocorrelation of the step-kriged field is close to that found in the pseudo-measurements. The amount of noise is determined by the kriging uncertainty. The algorithms are tested on cloud fields from large eddy simulations (LES). On these clouds, a measurement is simulated. From these pseudo-measurements, we estimate the power spectrum for the surrogates, the semi-variogram for the (step-wise) kriging, and the distribution. Furthermore, the pseudo-measurement is kriged. Because we work with LES clouds and the truth is known, we can validate the algorithm by performing 3D radiative transfer calculations on the original LES clouds and on the two new types of stochastic clouds. For comparison, the radiative properties of the kriged fields and standard surrogate fields are also computed. Preliminary results show that both algorithms reproduce the structure of the original clouds well, and the minima and maxima are located where the pseudo-measurements see them. The main problem for the quality of the structure and the root-mean-square error is the amount of data, which is very limited, especially in the case of just one zenith-pointing measurement.
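    The standard IAAFT loop that both algorithms build on alternates between imposing the target power spectrum and imposing the target value distribution. A minimal 1D sketch (the nudging-toward-kriging step that is the paper's contribution is not included):

```python
import numpy as np

rng = np.random.default_rng(9)

def iaaft(data, n_iter=100):
    """Iterative amplitude adjusted Fourier transform surrogate:
    alternately impose the amplitude spectrum and the value
    distribution of `data` on a shuffled copy."""
    sorted_vals = np.sort(data)
    target_amp = np.abs(np.fft.rfft(data))
    s = rng.permutation(data)                  # random shuffle to start
    for _ in range(n_iter):
        spec = np.fft.rfft(s)
        spec = target_amp * np.exp(1j * np.angle(spec))  # impose spectrum
        s = np.fft.irfft(spec, len(data))
        ranks = np.argsort(np.argsort(s))
        s = sorted_vals[ranks]                 # impose value distribution
    return s

data = np.sin(np.linspace(0, 20 * np.pi, 512)) + 0.1 * rng.normal(size=512)
surr = iaaft(data)
# The value distribution is matched exactly by construction:
same_dist = bool(np.allclose(np.sort(surr), np.sort(data)))
```

    The constrained variants add one more step inside the loop, pulling the surrogate toward the kriged field before the distribution is re-imposed, with the pull strength annealed to zero.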

  3. Gaussian random bridges and a geometric model for information equilibrium

    NASA Astrophysics Data System (ADS)

    Mengütürk, Levent Ali

    2018-03-01

    The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T , 0) -bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
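    The anticipative representation mentioned above, a random variable plus a Gaussian (T, 0)-bridge, can be simulated directly. The linear drift (t/T)·Z and standard Brownian bridge below are an illustrative special case, not the general GRB class of the paper.

```python
import numpy as np

rng = np.random.default_rng(10)

# A path that starts at 0 and ends exactly at the random variable Z:
# (t/T) * Z plus a Brownian bridge pinned to zero at t = 0 and t = T.
T, n = 1.0, 1000
t = np.linspace(0.0, T, n + 1)

def grb_path(z):
    dw = rng.normal(0.0, np.sqrt(T / n), n)
    w = np.concatenate([[0.0], np.cumsum(dw)])
    bridge = w - (t / T) * w[-1]           # Brownian (T, 0)-bridge
    return (t / T) * z + bridge

z = rng.normal(0.0, 1.0)
path = grb_path(z)
end_pins_to_z = bool(np.isclose(path[-1], z))
```

    Observing such a path is observing the signal Z through bridge noise that vanishes at time T, which is why GRBs model noisy information processes whose uncertainty resolves at a terminal date.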

  4. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach makes it possible to carry out dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
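For orientation, the original spectral representation that the paper compresses can be sketched as a random-phase cosine sum. The paper's reduction to two elementary random variables is not reproduced here; the spectrum `S`, the frequency grid and the independent random phases are illustrative assumptions.

```python
import numpy as np

def spectral_sim(t, omegas, S, d_omega, rng=None):
    """Sample path of a zero-mean stationary process from its one-sided
    power spectrum S(omega): a sum of cosines with random phases."""
    rng = np.random.default_rng(rng)
    amp = np.sqrt(2.0 * S(omegas) * d_omega)            # component amplitudes
    phases = rng.uniform(0.0, 2.0 * np.pi, len(omegas))  # high-dimensional randomness
    return sum(a * np.cos(w * t + p) for a, w, p in zip(amp, omegas, phases))
```

The target variance is the sum of S(ω_k)Δω over the grid, which a long time average of the squared sample path recovers.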

  5. Trends in modern system theory

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1976-01-01

    The topics considered are related to linear control system design, adaptive control, failure detection, control under failure, system reliability, and large-scale systems and decentralized control. It is pointed out that the design of a linear feedback control system which regulates a process about a desirable set point or steady-state condition in the presence of disturbances is a very important problem. The linearized dynamics of the process are used for design purposes. The typical linear-quadratic design involving the solution of the optimal control problem of a linear time-invariant system with respect to a quadratic performance criterion is considered along with gain reduction theorems and the multivariable phase margin theorem. The stumbling block in many adaptive design methodologies is associated with the amount of real time computation which is necessary. Attention is also given to the desperate need to develop good theories for large-scale systems, the beginning of a microprocessor revolution, the translation of the Wiener-Hopf theory into the time domain, and advances made in dynamic team theory, dynamic stochastic games, and finite memory stochastic control.

  6. Chatter detection in turning using persistent homology

    NASA Astrophysics Data System (ADS)

    Khasawneh, Firas A.; Munch, Elizabeth

    2016-03-01

    This paper describes a new approach for ascertaining the stability of stochastic dynamical systems in their parameter space by examining their time series using topological data analysis (TDA). We illustrate the approach using a nonlinear delayed model that describes the tool oscillations due to self-excited vibrations in turning. Each time series is generated using the Euler-Maruyama method and a corresponding point cloud is obtained using the Takens embedding. The point cloud can then be analyzed using a tool from TDA known as persistent homology. The results of this study show that the described approach can be used for analyzing datasets of delay dynamical systems generated both from numerical simulation and experimental data. The contributions of this paper include presenting for the first time a topological approach for investigating the stability of a class of nonlinear stochastic delay equations, and introducing a new application of TDA to machining processes.
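The pipeline's first steps (time series, delay embedding, point cloud) can be sketched as follows. The persistent homology computation itself would use a TDA library (e.g. ripser, not shown), and the `dim`/`tau` values are illustrative, not those of the paper.

```python
import numpy as np

def takens_embedding(x, dim=3, tau=5):
    """Delay-coordinate (Takens) embedding of a scalar time series:
    row i is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this dim/tau")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

The resulting array is the point cloud on which persistence diagrams are computed; stable and chattering parameter regions then differ in their topological summaries.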

  7. Research in Stochastic Processes

    DTIC Science & Technology

    1988-10-10

To appear in Proceedings Volume, Oberwolfach Conf. on Extremal Value Theory, Ed. J. Hüsler and R. Reiss, Springer. 4. M.R. Leadbetter. The exceedance...Hsing, J. Hüsler and M.R. Leadbetter, On the exceedance point process for a stationary sequence, Probability Theor. Rel. Fields, 20, 1988, 97-112 Z.J...Oberwolfach Conf. on Extreme Value Theory, J. Hüsler and R. Reiss, eds., Springer, to appear V. Mandrekar, On a limit theorem and invariance

  8. Martingales, detrending data, and the efficient market hypothesis

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-01-01

We discuss martingales, detrending data, and the efficient market hypothesis (EMH) for stochastic processes x(t) with arbitrary diffusion coefficients D(x, t). Beginning with x-independent drift coefficients R(t) we show that martingale stochastic processes generate uncorrelated, generally non-stationary increments. Generally, a test for a martingale is therefore a test for uncorrelated increments. A detrended process with an x-dependent drift coefficient is generally not a martingale, and so we extend our analysis to include the class of (x, t)-dependent drift coefficients of interest in finance. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. And while a Markovian market has no memory to exploit and presumably cannot be beaten systematically, it has never been shown that martingale memory cannot be exploited in 3-point or higher correlations to beat the market. We generalize our Markov scaling solutions presented earlier, and also generalize the martingale formulation of the EMH to include (x, t)-dependent drift in log returns. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama's paper on the EMH. We end with a discussion of Lévy's characterization of Brownian motion and prove that an arbitrary martingale is topologically inequivalent to a Wiener process.
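The practical point above, that a martingale test is a test for uncorrelated increments, suggests a simple diagnostic. A hedged sketch (the lag-1 sample autocorrelation of increments, not the authors' full procedure):

```python
import numpy as np

def increment_autocorr(x, lag=1):
    """Sample autocorrelation of the increments of a series x.
    For a martingale the increments are uncorrelated, so values at
    nonzero lag should be statistically indistinguishable from zero."""
    dx = np.diff(x)
    dx = dx - dx.mean()
    return np.dot(dx[:-lag], dx[lag:]) / np.dot(dx, dx)
```

A random walk gives a value near zero, while a series with a smooth deterministic trend gives strongly correlated increments, which is why the detrending step matters before testing.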

  9. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and sources of uncertainty. Individual CPT soundings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated by the Bayesian reverse interpolation framework. The results were compared between the Gaussian sequential stochastic simulation and Bayesian methods. The differences between single CPT samplings under a normal distribution and the simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum, and identifies limitations associated with inadequate geostatistical interpolation techniques.
These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.

  10. The Stochastic Evolutionary Game for a Population of Biological Networks Under Natural Selection

    PubMed Central

    Chen, Bor-Sen; Ho, Shih-Ju

    2014-01-01

In this study, a population of evolutionary biological networks is described by a stochastic dynamic system with intrinsic random parameter fluctuations due to genetic variations and external disturbances caused by environmental changes in the evolutionary process. Since information on environmental changes is unavailable and their occurrence is unpredictable, they can be considered as a game player with the potential to destroy phenotypic stability. The biological network needs to develop an evolutionary strategy to improve phenotypic stability as much as possible, so it can be considered as another game player in the evolutionary process, i.e., a stochastic Nash game of minimizing the maximum network evolution level caused by the worst environmental disturbances. Based on the nonlinear stochastic evolutionary game strategy, we find that some genetic variations can be used in natural selection to construct negative feedback loops, efficiently improving network robustness. This provides larger genetic robustness as a buffer against neutral genetic variations, as well as larger environmental robustness to resist environmental disturbances and maintain network phenotypic traits in the evolutionary process. In this situation, the robust phenotypic traits of stochastic biological networks can be more frequently selected by natural selection in evolution. However, if the harbored neutral genetic variations accumulate to a sufficiently large degree, and environmental disturbances are strong enough that the network robustness can no longer confer enough genetic and environmental robustness, then the phenotype robustness might break down. In this case, a network phenotypic trait may be pushed from one equilibrium point to another, changing the phenotypic trait and starting a new phase of network evolution through the hidden neutral genetic variations harbored in network robustness by adaptive evolution.
Further, the proposed evolutionary game is extended to an n-tuple evolutionary game of stochastic biological networks with m players (competitive populations) and k environmental dynamics. PMID:24558296

  11. Physical Models of Cognition

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1994-01-01

This paper presents and discusses physical models for simulating some aspects of neural intelligence, and, in particular, the process of cognition. The main departure from the classical approach here is in the utilization of a terminal version of classical dynamics introduced by the author earlier. Based upon violations of the Lipschitz condition at equilibrium points, terminal dynamics attains two new fundamental properties: it is spontaneous and nondeterministic. Special attention is focused on terminal neurodynamics as a particular architecture of terminal dynamics which is suitable for modeling of information flows. Terminal neurodynamics possesses a well-organized probabilistic structure which can be analytically predicted, prescribed, and controlled, and which therefore presents a powerful tool for modeling real-life uncertainties. Two basic phenomena associated with random behavior of neurodynamic solutions are exploited. The first one is a stochastic attractor: a stable stationary stochastic process to which random solutions of a closed system converge. As a model of the cognition process, a stochastic attractor can be viewed as a universal tool for generalization and formation of classes of patterns. The concept of stochastic attractor is applied to model a collective brain paradigm explaining coordination between simple units of intelligence which perform a collective task without direct exchange of information. The second fundamental phenomenon discussed is terminal chaos, which occurs in open systems. Applications of terminal chaos to information fusion as well as to explanation and modeling of coordination among neurons in biological systems are discussed. It should be emphasized that all the models of terminal neurodynamics are implementable in analog devices, which means that all the cognition processes discussed in the paper are reducible to the laws of Newtonian mechanics.

  12. Space-time-modulated stochastic processes

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-10-01

    Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved-space times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.
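As a reference point for this construction, a plain (unmodulated) Poisson-Kac, or telegraph, process with finite propagation velocity can be simulated as follows. The amplitude-modulation and time-horizon functions of the paper are not included, and the Euler-style switching scheme is an assumption for illustration.

```python
import numpy as np

def poisson_kac(T, c=1.0, a=1.0, dt=1e-3, rng=None):
    """Telegraph (Poisson-Kac) process: motion at speed +/- c whose sign
    flips at the events of a Poisson process of rate a, so the
    propagation velocity is always finite (|dx/dt| = c)."""
    rng = np.random.default_rng(rng)
    n = int(T / dt)
    s = 1.0
    x = np.empty(n + 1)
    x[0] = 0.0
    for i in range(n):
        if rng.random() < a * dt:   # Poisson switching event
            s = -s
        x[i + 1] = x[i] + c * s * dt
    return x
```

Unlike Brownian motion, every sample path is Lipschitz with constant c, which is the property the space-time modulation machinery is built around.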

  13. Quantum learning of classical stochastic processes: The completely positive realization problem

    NASA Astrophysics Data System (ADS)

    Monràs, Alex; Winter, Andreas

    2016-01-01

Among several tasks in Machine Learning, an especially important one is the problem of inferring the latent variables of a system and their causal relations with the observed behavior. A paradigmatic instance of this is the task of inferring the hidden Markov model underlying a given stochastic process. This is known as the positive realization problem (PRP) [L. Benvenuti and L. Farina, IEEE Trans. Autom. Control 49(5), 651-664 (2004)] and constitutes a central problem in machine learning. The PRP and its solutions have far-reaching consequences in many areas of systems and control theory, and the PRP is nowadays an important piece in the broad field of positive systems theory. We consider the scenario where the latent variables are quantum (i.e., quantum states of a finite-dimensional system) and the system dynamics is constrained only by physical transformations on the quantum system. The observable dynamics is then described by a quantum instrument, and the task is to determine which quantum instrument — if any — yields the process at hand by iterative application. We take as a starting point the theory of quasi-realizations, whence a description of the dynamics of the process is given in terms of linear maps on state vectors and probabilities are given by linear functionals on the state vectors. This description, despite its remarkable resemblance to the hidden Markov model, or the iterated quantum instrument, is however devoid of any stochastic or quantum mechanical interpretation, as said maps fail to satisfy any positivity conditions. The completely positive realization problem then consists in determining whether an equivalent quantum mechanical description of the same process exists. We generalize some key results of stochastic realization theory, and show that the problem has deep connections with operator systems theory, giving possible insight into the lifting problem in quotient operator systems.
Our results have potential applications in quantum machine learning, device-independent characterization and reverse-engineering of stochastic processes and quantum processors, and more generally, of dynamical processes with quantum memory [M. Guţă, Phys. Rev. A 83(6), 062324 (2011); M. Guţă and N. Yamamoto, e-print arXiv:1303.3771(2013)].

  14. Robust Algorithms for Detecting a Change in a Stochastic Process with Infinite Memory

    DTIC Science & Technology

    1988-03-01

    breakdown point and the additional assumption of 0-mixing on the nominal meas- influence function . The structure of the optimal algorithm ures. Then Huber’s...are i.i.d. sequences of Gaus- For the breakdown point and the influence function sian random variables, with identical variance o2 . Let we will use...algebraic sign for i=0,1. Here z will be chosen such = f nthat it leads to worst case or earliest breakdown. i (14) Next, the influence function measures

15. Hermite-Hadamard type inequality for φ_h-convex stochastic processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarıkaya, Mehmet Zeki, E-mail: sarikayamz@gmail.com; Kiriş, Mehmet Eyüp, E-mail: kiris@aku.edu.tr; Çelik, Nuri, E-mail: ncelik@bartin.edu.tr

    2016-04-18

The main aim of the present paper is to introduce φ_h-convex stochastic processes, and we investigate the main properties of these mappings. Moreover, we prove Hadamard-type inequalities for φ_h-convex stochastic processes. We also give some new general inequalities for φ_h-convex stochastic processes.
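For orientation, in the classical convex special case (h and φ trivial; the φ_h-convex class of the paper generalizes this), the Hermite-Hadamard inequality for a convex, mean-square continuous stochastic process X(t, ·) on [a, b] reads, almost everywhere:

```latex
X\!\left(\frac{a+b}{2},\cdot\right)
  \;\le\; \frac{1}{b-a}\int_a^b X(t,\cdot)\,dt
  \;\le\; \frac{X(a,\cdot)+X(b,\cdot)}{2}
  \qquad \text{(a.e.)}
```

The paper's results replace the midpoint and endpoint weights by functionals built from h and φ.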

  16. Stochastic Dynamical Model of a Growing Citation Network Based on a Self-Exciting Point Process

    NASA Astrophysics Data System (ADS)

    Golosovsky, Michael; Solomon, Sorin

    2012-08-01

    We put under experimental scrutiny the preferential attachment model that is commonly accepted as a generating mechanism of the scale-free complex networks. To this end we chose a citation network of physics papers and traced the citation history of 40 195 papers published in one year. Contrary to common belief, we find that the citation dynamics of the individual papers follows the superlinear preferential attachment, with the exponent α=1.25-1.3. Moreover, we show that the citation process cannot be described as a memoryless Markov chain since there is a substantial correlation between the present and recent citation rates of a paper. Based on our findings we construct a stochastic growth model of the citation network, perform numerical simulations based on this model and achieve an excellent agreement with the measured citation distributions.
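The superlinear attachment kernel reported above can be sketched as a growth simulation. The attractiveness offset `k0` and the choice of `m` citations per new paper are illustrative assumptions, not the authors' fitted model (which also includes memory of recent citation rates).

```python
import numpy as np

def preferential_attachment(n_papers, alpha=1.3, k0=1.0, m=3, rng=None):
    """Grow a citation network: each new paper cites m earlier ones chosen
    with probability proportional to (k + k0)**alpha; alpha > 1 means
    superlinear preferential attachment."""
    rng = np.random.default_rng(rng)
    k = np.zeros(n_papers, dtype=float)          # citation counts
    for new in range(1, n_papers):
        old = np.arange(new)
        w = (k[old] + k0) ** alpha
        cited = rng.choice(old, size=min(m, new), replace=False, p=w / w.sum())
        k[cited] += 1
    return k
```

With alpha > 1 citations concentrate strongly on a few early papers, producing the fat-tailed citation distributions the measurement targets.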

  17. The propagator of stochastic electrodynamics

    NASA Astrophysics Data System (ADS)

    Cavalleri, G.

    1981-01-01

The "elementary propagator" for the position of a free charged particle subject to the zero-point electromagnetic field with Lorentz-invariant spectral density ∝ ω³ is obtained. The nonstationary process for the position is solved by the stationary process for the acceleration. The dispersion of the position elementary propagator is compared with that of quantum electrodynamics. Finally, the evolution of the probability density is obtained starting from an initial distribution confined in a small volume and with a Gaussian distribution in the velocities. The resulting probability density for the position turns out to be equal, to within radiative corrections, to ψψ* where ψ is the Kennard wave packet. If the radiative corrections are retained, the present result is new since the corresponding expression in quantum electrodynamics has not yet been found. Besides preceding quantum electrodynamics for this problem, no renormalization is required in stochastic electrodynamics.

  18. A multi-site stochastic weather generator of daily precipitation and temperature

    USDA-ARS?s Scientific Manuscript database

    Stochastic weather generators are used to generate time series of climate variables that have statistical properties similar to those of observed data. Most stochastic weather generators work for a single site, and can only generate climate data at a single point, or independent time series at sever...

  19. Capturing rogue waves by multi-point statistics

    NASA Astrophysics Data System (ADS)

    Hadjihosseini, A.; Wächter, Matthias; Hoffmann, N. P.; Peinke, J.

    2016-01-01

    As an example of a complex system with extreme events, we investigate ocean wave states exhibiting rogue waves. We present a statistical method of data analysis based on multi-point statistics which for the first time allows the grasping of extreme rogue wave events in a highly satisfactory statistical manner. The key to the success of the approach is mapping the complexity of multi-point data onto the statistics of hierarchically ordered height increments for different time scales, for which we can show that a stochastic cascade process with Markov properties is governed by a Fokker-Planck equation. Conditional probabilities as well as the Fokker-Planck equation itself can be estimated directly from the available observational data. With this stochastic description surrogate data sets can in turn be generated, which makes it possible to work out arbitrary statistical features of the complex sea state in general, and extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics.
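Estimating Fokker-Planck drift and diffusion directly from data, as described above, amounts to binned conditional moments of the increments. A minimal sketch for a scalar series at a single time scale (the paper works across a hierarchy of scales, which is not reproduced here):

```python
import numpy as np

def kramers_moyal(x, dt, bins=20):
    """Estimate drift D1 and diffusion D2 of a Markov process from data
    via binned conditional moments of the increments."""
    dx = np.diff(x)
    xc = x[:-1]
    edges = np.linspace(xc.min(), xc.max(), bins + 1)
    centers = 0.5 * (edges[1:] + edges[:-1])
    D1 = np.full(bins, np.nan)
    D2 = np.full(bins, np.nan)
    idx = np.digitize(xc, edges[1:-1])
    for b in range(bins):
        sel = dx[idx == b]
        if len(sel) > 10:                 # skip sparsely populated bins
            D1[b] = sel.mean() / dt
            D2[b] = (sel**2).mean() / (2.0 * dt)
    return centers, D1, D2
```

Applied to a simulated Ornstein-Uhlenbeck process the estimator recovers the linear drift and constant diffusion, which is the same consistency check one runs before trusting the coefficients estimated from sea-state data.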

  20. A Ballistic Model of Choice Response Time

    ERIC Educational Resources Information Center

    Brown, Scott; Heathcote, Andrew

    2005-01-01

    Almost all models of response time (RT) use a stochastic accumulation process. To account for the benchmark RT phenomena, researchers have found it necessary to include between-trial variability in the starting point and/or the rate of accumulation, both in linear (R. Ratcliff & J. N. Rouder, 1998) and nonlinear (M. Usher & J. L. McClelland, 2001)…

  1. A new computer code for discrete fracture network modelling

    NASA Astrophysics Data System (ADS)

    Xu, Chaoshui; Dowd, Peter

    2010-03-01

    The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
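A minimal marked-point-process sketch in the spirit of the software's simplest option: homogeneous Poisson fracture locations, with lognormal trace length and uniform orientation as marks (both distribution choices are illustrative, not prescribed by the paper).

```python
import numpy as np

def marked_poisson_fractures(intensity, area=(10.0, 10.0), rng=None):
    """Homogeneous Poisson point process of fracture centres on a
    rectangle; each point carries marks: a trace length (lognormal)
    and an orientation (uniform on [0, pi))."""
    rng = np.random.default_rng(rng)
    n = rng.poisson(intensity * area[0] * area[1])   # random fracture count
    xy = rng.uniform([0.0, 0.0], area, size=(n, 2))  # uniform locations
    length = rng.lognormal(mean=0.0, sigma=0.5, size=n)
    angle = rng.uniform(0.0, np.pi, size=n)
    return xy, length, angle
```

The non-homogeneous, cluster and Cox variants mentioned in the abstract differ only in how the locations are drawn; the marking step is unchanged.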

  2. Diffusion limit of Lévy-Lorentz gas is Brownian motion

    NASA Astrophysics Data System (ADS)

    Magdziarz, Marcin; Szczotka, Wladyslaw

    2018-07-01

In this paper we analyze the asymptotic behaviour of a stochastic process called the Lévy-Lorentz gas. This process is a special kind of continuous-time random walk in which the walker moves in a fixed environment composed of scattering points. Upon each collision the walker performs a flight to the nearest scattering point. This type of dynamics is observed in Lévy glasses or long quenched polymers. We show that the diffusion limit of the Lévy-Lorentz gas with finite mean distance between scattering centers is standard Brownian motion. Thus, for long times the behaviour of the Lévy-Lorentz gas is close to the diffusive regime.
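The dynamics can be sketched in one dimension: scatterers fixed at positions with heavy-tailed spacings of finite mean (Pareto spacings are an illustrative choice), and a walker that flies at unit speed to a randomly chosen neighbouring scatterer.

```python
import numpy as np

def levy_lorentz_gas(n_steps, n_sites=10_000, tail=1.5, rng=None):
    """1-D Levy-Lorentz gas sketch: scatterers at positions with Pareto
    spacings (finite mean for tail > 1); at each collision the walker
    flies at unit speed to a random neighbouring scatterer."""
    rng = np.random.default_rng(rng)
    gaps = rng.pareto(tail, 2 * n_sites) + 1.0   # heavy-tailed spacings >= 1
    sites = np.cumsum(gaps) - gaps[0]
    idx = n_sites                                 # start mid-lattice
    t, pos, times = 0.0, [sites[idx]], [0.0]
    for _ in range(n_steps):
        idx += rng.choice([-1, 1])
        t += abs(sites[idx] - pos[-1])            # flight time at unit speed
        pos.append(sites[idx])
        times.append(t)
    return np.array(times), np.array(pos)
```

Averaging the squared displacement of many such trajectories over long times is how one would check the diffusive (Brownian) limit claimed in the abstract.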

  3. Quantum stochastic calculus associated with quadratic quantum noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Un Cig, E-mail: uncigji@chungbuk.ac.kr; Sinha, Kalyan B., E-mail: kbs-jaya@yahoo.co.in

    2016-02-15

We first study a class of fundamental quantum stochastic processes induced by the generators of a six dimensional non-solvable Lie †-algebra consisting of all linear combinations of the generalized Gross Laplacian and its adjoint, annihilation operator, creation operator, conservation, and time. We then study the quantum stochastic integrals associated with this class of fundamental quantum stochastic processes, and the quantum Itô formula is revisited. The existence and uniqueness of the solution of a quantum stochastic differential equation is proved. The unitarity conditions of solutions of quantum stochastic differential equations associated with the fundamental processes are examined. The quantum stochastic calculus extends the Hudson-Parthasarathy quantum stochastic calculus.

  4. On the impact of a refined stochastic model for airborne LiDAR measurements

    NASA Astrophysics Data System (ADS)

    Bolkas, Dimitrios; Fotopoulos, Georgia; Glennie, Craig

    2016-09-01

    Accurate topographic information is critical for a number of applications in science and engineering. In recent years, airborne light detection and ranging (LiDAR) has become a standard tool for acquiring high quality topographic information. The assessment of airborne LiDAR derived DEMs is typically based on (i) independent ground control points and (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach is dependent on the stochastic model information of the LiDAR observation components. In this paper, the well-known statistical tool of variance component estimation (VCE) is implemented for a dataset in Houston, Texas, in order to refine the initial stochastic information. Simulations demonstrate the impact of stochastic-model refinement for two practical applications, namely coastal inundation mapping and surface displacement estimation. Results highlight scenarios where erroneous stochastic information is detrimental. Furthermore, the refined stochastic information provides insights on the effect of each LiDAR measurement in the airborne LiDAR error budget. The latter is important for targeting future advancements in order to improve point cloud accuracy.

  5. Multiscale study on stochastic reconstructions of shale samples

    NASA Astrophysics Data System (ADS)

    Lili, J.; Lin, M.; Jiang, W. B.

    2016-12-01

Shales are known to have multiscale pore systems composed of macroscale fractures, micropores, and nanoscale pores within the gas- or oil-producing organic material. Shales are also fissile and laminated, and their heterogeneity in the horizontal direction is quite different from that in the vertical. Stochastic reconstructions are extremely useful in situations where three-dimensional information is costly and time consuming to obtain. The purpose of our paper is therefore to stochastically reconstruct equiprobable 3D models containing information from several scales. In this paper, macroscale and microscale images of shale structure in the Lower Silurian Longmaxi are obtained by X-ray microtomography, and nanoscale images are obtained by scanning electron microscopy. Each image is representative for all given scales and phases. In particular, the macroscale is four times coarser than the microscale, which in turn is four times lower in resolution than the nanoscale image. Secondly, the cross correlation-based simulation method (CCSIM) and the three-step sampling method are combined to generate stochastic reconstructions for each scale. It is important to point out that the boundary points of pore and matrix are selected based on a multiple-point connectivity function in the sampling process, so that the characteristics of the reconstructed image can be controlled indirectly. Thirdly, all images are brought to the same resolution through downscaling and upscaling by interpolation, and the multiscale categorical spatial data are then merged into a single 3D image with predefined resolution (that of the microscale image). Thirty realizations using the given images and the proposed method are generated. The result reveals that the proposed method is capable of preserving the multiscale pore structure, both vertically and horizontally, which is necessary for accurate permeability prediction.
The variogram curves and pore-size distribution for both original 3D sample and the generated 3D realizations are compared. The result indicates that the agreement between the original 3D sample and the generated stochastic realizations is excellent. This work is supported by "973" Program (2014CB239004), the Key Instrument Developing Project of the CAS (ZDYZ2012-1-08-02) and the National Natural Science Foundation of China (Grant No. 41574129).

  6. Stochastic oscillations in models of epidemics on a network of cities

    NASA Astrophysics Data System (ADS)

    Rozhnova, G.; Nunes, A.; McKane, A. J.

    2011-11-01

We carry out an analytic investigation of stochastic oscillations in a susceptible-infected-recovered model of disease spread on a network of n cities. In the model a fraction fjk of individuals from city k commute to city j, where they may infect, or be infected by, others. Starting from a continuous-time Markov description of the model, the deterministic equations, which are valid in the limit when the population of each city is infinite, are recovered. The stochastic fluctuations about the fixed point of these equations are derived by use of the van Kampen system-size expansion. The fixed point structure of the deterministic equations is remarkably simple: A unique nontrivial fixed point always exists and has the feature that the fraction of susceptible, infected, and recovered individuals is the same for each city irrespective of its size. We find that the stochastic fluctuations have an analogously simple dynamics: All oscillations have a single frequency, equal to that found in the one-city case. We interpret this phenomenon in terms of the properties of the spectrum of the matrix of the linear approximation of the deterministic equations at the fixed point.

  7. Stochastic Modeling of Direct Radiation Transmission in Particle-Laden Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Banko, Andrew; Villafane, Laura; Kim, Ji Hoon; Esmaily Moghadam, Mahdi; Eaton, John K.

    2017-11-01

    Direct radiation transmission in turbulent flows laden with heavy particles plays a fundamental role in systems such as clouds, spray combustors, and particle-solar-receivers. Owing to their inertia, the particles preferentially concentrate and the resulting voids and clusters lead to deviations in mean transmission from the classical Beer-Lambert law for exponential extinction. Additionally, the transmission fluctuations can exceed those of Poissonian media by an order of magnitude, which implies a gross misprediction in transmission statistics if the correlations in particle positions are neglected. On the other hand, tracking millions of particles in a turbulence simulation can be prohibitively expensive. This work presents stochastic processes as computationally cheap reduced order models for the instantaneous particle number density field and radiation transmission therein. Results from the stochastic processes are compared to Monte Carlo Ray Tracing (MCRT) simulations using the particle positions obtained from the point-particle DNS of isotropic turbulence at a Taylor Reynolds number of 150. Accurate transmission statistics are predicted with respect to MCRT by matching the mean, variance, and correlation length of DNS number density fields. Funded by the U.S. Department of Energy under Grant No. DE-NA0002373-1 and the National Science Foundation under Grant No. DGE-114747.
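The deviation from Beer-Lambert caused by number fluctuations can already be seen in a zero-dimensional sketch: by Jensen's inequality, the mean of exp(-σN) over fluctuating particle counts N along the beam exceeds exp(-σ·mean(N)). Poisson counts are used here for illustration; the clustered media in the study fluctuate even more strongly.

```python
import numpy as np

def mean_transmission(counts, sigma):
    """Monte Carlo mean of exp(-sigma * N) over particle counts N
    intercepted by the beam."""
    return np.exp(-sigma * np.asarray(counts, float)).mean()

rng = np.random.default_rng(0)
lam, sigma = 20.0, 0.1
counts = rng.poisson(lam, 100_000)          # Poissonian medium
beer_lambert = np.exp(-sigma * lam)         # prediction from the mean count only
mc = mean_transmission(counts, sigma)       # fluctuations raise mean transmission
```

For Poisson counts the exact mean transmission is exp(-λ(1 - e^(-σ))), already above the Beer-Lambert value; matching the variance and correlation length of the number density, as the abstract describes, captures the still larger clustered-medium effect.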

  8. Predictions of Experimentally Observed Stochastic Ground Vibrations Induced by Blasting

    PubMed Central

    Kostić, Srđan; Perc, Matjaž; Vasović, Nebojša; Trajković, Slobodan

    2013-01-01

    In the present paper, we investigate the blast-induced ground motion recorded at the limestone quarry “Suva Vrela” near Kosjerić, which is located in the western part of Serbia. We examine the recorded signals by means of surrogate data methods and a determinism test, in order to determine whether the recorded ground velocity is stochastic or deterministic in nature. The longitudinal, transversal and vertical ground motion components are analyzed at three monitoring points that are located at different distances from the blasting source. The analysis reveals that the recordings belong to a class of stationary linear stochastic processes with Gaussian inputs, which could be distorted by a monotonic, instantaneous, time-independent nonlinear function. Low determinism factors obtained with the determinism test further confirm the stochastic nature of the recordings. Guided by the outcome of time series analysis, we propose an improved prediction model for the peak particle velocity based on a neural network. We show that, while conventional predictors fail to provide acceptable prediction accuracy, the neural network model with four main blast parameters as input, namely total charge, maximum charge per delay, distance from the blasting source to the measuring point, and hole depth, delivers significantly more accurate predictions that may be applicable on site. We also perform a sensitivity analysis, which reveals that the distance from the blasting source has the strongest influence on the final value of the peak particle velocity. This is in full agreement with previous observations and theory, thus additionally validating our methodology and main conclusions. PMID:24358140

  9. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high-frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the boundedness condition on the stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former off-line.

  10. Variance change point detection for fractional Brownian motion based on the likelihood ratio test

    NASA Astrophysics Data System (ADS)

    Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz

    2018-01-01

    Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon for self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection in cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. Here, we also propose a statistical test for testing the significance of the estimated critical point. In addition to that, an extensive simulation study is provided to test the performance of the proposed method.
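    The generic likelihood ratio scan for a variance change point in Gaussian increments, which the methodology above extends to fractional Brownian motion, can be sketched in a few lines. This is the textbook independent-increment version, not the paper's fBm-specific treatment, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k_true = 1000, 600
# increments with a variance change at k_true (sigma 1.0 -> 3.0)
x = np.concatenate([rng.normal(0, 1.0, k_true), rng.normal(0, 3.0, n - k_true)])

def lr_stat(x, k):
    """-2 log likelihood ratio for a single variance change at index k."""
    n = len(x)
    s_all, s1, s2 = x.var(), x[:k].var(), x[k:].var()
    return n * np.log(s_all) - k * np.log(s1) - (n - k) * np.log(s2)

ks = np.arange(50, n - 50)                 # avoid unstable edge estimates
stats = np.array([lr_stat(x, k) for k in ks])
k_hat = ks[np.argmax(stats)]
assert abs(k_hat - k_true) < 30            # the scan locates the change point
```

    The estimated change point maximizes the statistic; the paper's significance test then decides whether that maximum is larger than expected under the no-change null.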

  11. Stochastic models for inferring genetic regulation from microarray gene expression data.

    PubMed

    Tian, Tianhai

    2010-03-01

    Microarray expression profiles are inherently noisy and many different sources of variation exist in microarray experiments. It is still a significant challenge to develop stochastic models to realize noise in microarray expression profiles, which has profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of stochastic models and parameters of an error model for describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in terms of the hybridization intensity and the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also established a general method to develop stochastic models from experimental information. 2009 Elsevier Ireland Ltd. All rights reserved.
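    As a rough illustration of a stochastic degradation process (not the paper's fitted p53 model), an Euler-Maruyama simulation of dx = (k - d·x)dt + c·sqrt(x)dW shows the intensity-dependent noise such models produce; the stationary mean is k/d and the stationary variance c²k/(2d²). All parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
k, d, c = 10.0, 1.0, 0.5          # production, degradation, noise strength (illustrative)
dt, n_steps, n_paths = 0.01, 5000, 400

x = np.full(n_paths, k / d)       # start each path at the deterministic fixed point
for _ in range(n_steps):
    dw = rng.normal(0, np.sqrt(dt), n_paths)
    x += (k - d * x) * dt + c * np.sqrt(np.maximum(x, 0)) * dw
    x = np.maximum(x, 0)          # keep the state nonnegative so sqrt stays real

assert abs(x.mean() - k / d) < 0.5                    # stationary mean ~ k/d = 10
assert abs(x.var() - c**2 * k / (2 * d**2)) < 0.5     # stationary var ~ 1.25
```

    The sqrt(x) diffusion term is what makes the simulated variance depend on expression intensity, the qualitative behavior the abstract describes as a monomial in the hybridization intensity.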

  12. Tipping point analysis of atmospheric oxygen concentration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livina, V. N.; Forbes, A. B.; Vaz Martins, T. M.

    2015-03-15

    We apply tipping point analysis to nine observational oxygen concentration records around the globe, analyse their dynamics and perform projections under possible future scenarios, leading to oxygen deficiency in the atmosphere. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the observed data using Bayesian and wavelet techniques.
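    A standard ingredient of tipping point analysis is a rolling lag-1 autocorrelation indicator, which rises as a system's memory grows on the approach to a transition. A minimal sketch on synthetic AR(1) data (not the oxygen records):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
phi = np.linspace(0.2, 0.9, n)      # slowly increasing memory -> approaching a tipping point
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

def lag1_ac(w):
    """Lag-1 autocorrelation of a window, the classic early-warning indicator."""
    return np.corrcoef(w[:-1], w[1:])[0, 1]

win = 500
early, late = lag1_ac(x[:win]), lag1_ac(x[-win:])
assert late > early                 # the indicator rises as the tipping point nears
```

    On real records the same indicator is computed in a sliding window after removing the deterministic component, which is the decomposition step the abstract refers to.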

  13. Point-source stochastic-method simulations of ground motions for the PEER NGA-East Project

    USGS Publications Warehouse

    Boore, David

    2015-01-01

    Ground motions for the PEER NGA-East project were simulated using a point-source stochastic method. The simulated motions are provided for distances between 0 and 1200 km, M from 4 to 8, and 25 ground-motion intensity measures: peak ground velocity (PGV), peak ground acceleration (PGA), and 5%-damped pseudoabsolute response spectral acceleration (PSA) for 23 periods ranging from 0.01 s to 10.0 s. Tables of motions are provided for each of six attenuation models. The attenuation-model-dependent stress parameters used in the stochastic-method simulations were derived from inversion of PSA data from eight earthquakes in eastern North America.
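    The core of the point-source stochastic method is shaping windowed white noise to a target Fourier amplitude spectrum. The sketch below uses a generic omega-squared source spectrum with an invented corner frequency, not the NGA-East attenuation models or inverted stress parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n = 0.01, 2048
f = np.fft.rfftfreq(n, dt)

# Illustrative omega-squared spectrum with corner frequency fc (hypothetical).
fc = 1.0
target = (2 * np.pi * f) ** 2 / (1 + (f / fc) ** 2)
target[0] = 0.0

# Window white noise in time, then impose the target amplitude spectrum
# while keeping the random phases -- the essence of the stochastic method.
noise = rng.normal(size=n) * np.exp(-np.arange(n) * dt / 5.0)
spec = np.fft.rfft(noise)
spec *= target / (np.abs(spec) + 1e-12)
acc = np.fft.irfft(spec, n)           # one synthetic acceleration time series

amp = np.abs(np.fft.rfft(acc))
mask = f > 0
assert np.allclose(amp[mask], target[mask], rtol=1e-6, atol=1e-9)
```

    Intensity measures such as PGA and PSA are then computed from many such realizations; in the actual method the target spectrum also carries path attenuation and site terms.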

  14. Cox process representation and inference for stochastic reaction-diffusion processes

    NASA Astrophysics Data System (ADS)

    Schnoerr, David; Grima, Ramon; Sanguinetti, Guido

    2016-05-01

    Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction-diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data. Here we use ideas from statistical physics and machine learning to provide a solution to the inverse problem of learning a stochastic reaction-diffusion process from data. Our solution relies on a non-trivial connection between stochastic reaction-diffusion processes and spatio-temporal Cox processes, a well-studied class of models from computational statistics. This connection leads to an efficient and flexible algorithm for parameter inference and model selection. Our approach shows excellent accuracy on numeric and real data examples from systems biology and epidemiology. Our work provides both insights into spatio-temporal stochastic systems, and a practical solution to a long-standing problem in computational modelling.
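    A spatio-temporal Cox process is a Poisson process whose intensity is itself random; given one realization of the intensity, events can be sampled exactly by Lewis-Shedler thinning. A minimal one-dimensional sketch with an invented driving intensity:

```python
import numpy as np

rng = np.random.default_rng(5)

def thin_poisson(intensity, t_max, lam_max, rng):
    """Lewis-Shedler thinning: sample events of a Poisson process with rate
    intensity(t) <= lam_max by thinning a homogeneous rate-lam_max process."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:
            events.append(t)

lam = lambda t: 5.0 + 4.0 * np.sin(t)       # one realization of the intensity
counts = [len(thin_poisson(lam, 100.0, 9.0, rng)) for _ in range(200)]

# mean event count equals the integral of lam over [0, 100]
expected = 5.0 * 100.0 + 4.0 * (1 - np.cos(100.0))
assert abs(np.mean(counts) - expected) < 20
```

    In the Cox setting the intensity realization itself comes from a random field; the paper's contribution is relating that field to the underlying reaction-diffusion dynamics.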

  15. Feynman-Kac formula for stochastic hybrid systems.

    PubMed

    Bressloff, Paul C

    2017-01-01

    We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.
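    The occupation-time functional in the final example can be checked by direct Monte Carlo on a two-state velocity jump process. The sketch below (illustrative rates, not the paper's closed-form result) verifies the symmetry prediction that the mean occupation fraction of x > 0 is 1/2 for a particle started at the origin:

```python
import numpy as np

rng = np.random.default_rng(6)
v, beta = 1.0, 1.0                       # speed and switching rate (hypothetical)
dt, T, n_paths = 0.01, 20.0, 2000
steps = int(T / dt)

x = np.zeros(n_paths)
s = rng.choice([-1.0, 1.0], n_paths)     # discrete velocity state of each path
occ = np.zeros(n_paths)                  # occupation time of the half-line x > 0
for _ in range(steps):
    x += v * s * dt
    flip = rng.uniform(size=n_paths) < beta * dt   # Markov switching events
    s[flip] *= -1
    occ += dt * (x > 0)

# symmetric initial condition -> mean occupation fraction is 1/2
assert abs(occ.mean() / T - 0.5) < 0.05
```

    The Feynman-Kac machinery in the paper gives the full moment generating function of this occupation time, of which the mean checked here is only the first moment.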

  16. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Williams, Colin P.

    1999-01-01

    Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(square root of N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.
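    For reference, the classical Monte Carlo baseline against which the quantum speedup is measured simply simulates each N-step path at O(N) cost and averages. For an N-step ±1 walk the second moment of the endpoint is exactly N, which the baseline recovers:

```python
import numpy as np

rng = np.random.default_rng(7)
N, n_samples = 100, 20_000

# each sample costs O(N) classically -- the cost the quantum algorithm reduces
steps = rng.choice([-1, 1], size=(n_samples, N))
endpoints = steps.sum(axis=1).astype(float)

m2 = np.mean(endpoints ** 2)       # estimated second moment of the endpoint
assert abs(m2 - N) < 5             # E[S_N^2] = N exactly for a +/-1 walk
```

    Higher moments are estimated the same way; the claimed quantum advantage is in the per-moment simulation cost, not in the statistical averaging.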

  17. Time-Frequency Approach for Stochastic Signal Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Ripul; Akula, Aparna; Kumar, Satish

    2011-10-20

    The detection of events in a stochastic signal has been a subject of great interest. One of the oldest signal processing techniques, the Fourier Transform of a signal contains information regarding frequency content, but it cannot resolve the exact onset of changes in the frequency; all temporal information is contained in the phase of the transform. On the other hand, the Spectrogram is better able to resolve the temporal evolution of frequency content, but has a trade-off in time resolution versus frequency resolution in accordance with the uncertainty principle. Therefore, time-frequency representations are considered for energetic characterisation of non-stationary signals. The Wigner Ville Distribution (WVD) is the most prominent quadratic time-frequency signal representation and is used for analysing frequency variations in signals. WVD allows for instantaneous frequency estimation at each data point, for a typical temporal resolution of fractions of a second. This paper, through simulations, describes the way time-frequency models are applied for the detection of events in a stochastic signal.

  18. Time-Frequency Approach for Stochastic Signal Detection

    NASA Astrophysics Data System (ADS)

    Ghosh, Ripul; Akula, Aparna; Kumar, Satish; Sardana, H. K.

    2011-10-01

    The detection of events in a stochastic signal has been a subject of great interest. One of the oldest signal processing techniques, the Fourier Transform of a signal contains information regarding frequency content, but it cannot resolve the exact onset of changes in the frequency; all temporal information is contained in the phase of the transform. On the other hand, the Spectrogram is better able to resolve the temporal evolution of frequency content, but has a trade-off in time resolution versus frequency resolution in accordance with the uncertainty principle. Therefore, time-frequency representations are considered for energetic characterisation of non-stationary signals. The Wigner Ville Distribution (WVD) is the most prominent quadratic time-frequency signal representation and is used for analysing frequency variations in signals. WVD allows for instantaneous frequency estimation at each data point, for a typical temporal resolution of fractions of a second. This paper, through simulations, describes the way time-frequency models are applied for the detection of events in a stochastic signal.
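    A minimal discrete Wigner-Ville distribution can be written as an FFT over the lag variable of the instantaneous autocorrelation kernel; applied to a linear chirp it recovers the rising instantaneous frequency. The test signal and parameters below are invented:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution: FFT over the lag variable of the
    instantaneous autocorrelation kernel x[t+tau] * conj(x[t-tau])."""
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        taumax = min(t, n - 1 - t)
        tau = np.arange(-taumax, taumax + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])
        W[t] = np.fft.fft(kernel).real    # kernel is conjugate-symmetric -> real WVD
    return W

fs = 256
t = np.arange(fs) / fs
x = np.exp(2j * np.pi * (20 * t + 40 * t ** 2))   # analytic chirp, f(t) = 20 + 80 t Hz

W = wigner_ville(x)
# frequency bin k maps to k * fs / (2 n) Hz because the lag enters as 2*tau
f_inst = np.argmax(W, axis=1) * fs / (2 * len(t))
assert abs(f_inst[40] - (20 + 80 * 40 / fs)) < 2.0    # early in the chirp
assert abs(f_inst[200] - (20 + 80 * 200 / fs)) < 2.0  # late in the chirp
```

    Picking the per-time argmax of the distribution, as done here, is exactly the instantaneous frequency estimation the abstracts describe; for multi-component signals the quadratic cross-terms make this step harder.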

  19. Nonclassical point of view of the Brownian motion generation via fractional deterministic model

    NASA Astrophysics Data System (ADS)

    Gilardi-Velázquez, H. E.; Campos-Cantón, E.

    In this paper, we present a dynamical system based on the Langevin equation without a stochastic term and using fractional derivatives that exhibits properties of Brownian motion, i.e. a deterministic model to generate Brownian motion is proposed. The stochastic process is replaced by considering an additional degree of freedom in the second-order Langevin equation. Thus, it is transformed into a system of three first-order linear differential equations; additionally, α-fractional derivatives are considered, which allow us to obtain better statistical properties. Switching surfaces are established as part of the fluctuating acceleration. The final system of three α-order linear differential equations does not contain a stochastic term, so the system generates motion in a deterministic way. Nevertheless, from the time series analysis, we found that the behavior of the system exhibits statistical properties of Brownian motion, such as a linear growth in time of the mean square displacement and a Gaussian distribution. Furthermore, we use detrended fluctuation analysis to prove the Brownian character of this motion.

  20. Large Deviations for Nonlocal Stochastic Neural Fields

    PubMed Central

    2014-01-01

    We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20. PMID:24742297

  1. Political model of social evolution

    PubMed Central

    Acemoglu, Daron; Egorov, Georgy; Sonin, Konstantin

    2011-01-01

    Almost all democratic societies evolved socially and politically out of authoritarian and nondemocratic regimes. These changes not only altered the allocation of economic resources in society but also the structure of political power. In this paper, we develop a framework for studying the dynamics of political and social change. The society consists of agents that care about current and future social arrangements and economic allocations; allocation of political power determines who has the capacity to implement changes in economic allocations and future allocations of power. The set of available social rules and allocations at any point in time is stochastic. We show that political and social change may happen without any stochastic shocks or as a result of a shock destabilizing an otherwise stable social arrangement. Crucially, the process of social change is contingent (and history-dependent): the timing and sequence of stochastic events determine the long-run equilibrium social arrangements. For example, the extent of democratization may depend on how early uncertainty about the set of feasible reforms in the future is resolved. PMID:22198760

  2. Political model of social evolution.

    PubMed

    Acemoglu, Daron; Egorov, Georgy; Sonin, Konstantin

    2011-12-27

    Almost all democratic societies evolved socially and politically out of authoritarian and nondemocratic regimes. These changes not only altered the allocation of economic resources in society but also the structure of political power. In this paper, we develop a framework for studying the dynamics of political and social change. The society consists of agents that care about current and future social arrangements and economic allocations; allocation of political power determines who has the capacity to implement changes in economic allocations and future allocations of power. The set of available social rules and allocations at any point in time is stochastic. We show that political and social change may happen without any stochastic shocks or as a result of a shock destabilizing an otherwise stable social arrangement. Crucially, the process of social change is contingent (and history-dependent): the timing and sequence of stochastic events determine the long-run equilibrium social arrangements. For example, the extent of democratization may depend on how early uncertainty about the set of feasible reforms in the future is resolved.

  3. Quantum learning of classical stochastic processes: The completely positive realization problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monràs, Alex; Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543; Winter, Andreas

    2016-01-15

    Among several tasks in Machine Learning, an especially important one is the problem of inferring the latent variables of a system and their causal relations with the observed behavior. A paradigmatic instance of this is the task of inferring the hidden Markov model underlying a given stochastic process. This is known as the positive realization problem (PRP) [L. Benvenuti and L. Farina, IEEE Trans. Autom. Control 49(5), 651–664 (2004)] and constitutes a central problem in machine learning. The PRP and its solutions have far-reaching consequences in many areas of systems and control theory, and the PRP is nowadays an important piece in the broad field of positive systems theory. We consider the scenario where the latent variables are quantum (i.e., quantum states of a finite-dimensional system) and the system dynamics is constrained only by physical transformations on the quantum system. The observable dynamics is then described by a quantum instrument, and the task is to determine which quantum instrument — if any — yields the process at hand by iterative application. We take as a starting point the theory of quasi-realizations, whence a description of the dynamics of the process is given in terms of linear maps on state vectors and probabilities are given by linear functionals on the state vectors. This description, despite its remarkable resemblance to the hidden Markov model, or the iterated quantum instrument, is however devoid of any stochastic or quantum mechanical interpretation, as said maps fail to satisfy any positivity conditions. The completely positive realization problem then consists in determining whether an equivalent quantum mechanical description of the same process exists. We generalize some key results of stochastic realization theory, and show that the problem has deep connections with operator systems theory, giving possible insight into the lifting problem in quotient operator systems.
    Our results have potential applications in quantum machine learning, device-independent characterization and reverse-engineering of stochastic processes and quantum processors, and more generally, of dynamical processes with quantum memory [M. Guţă, Phys. Rev. A 83(6), 062324 (2011); M. Guţă and N. Yamamoto, e-print http://arxiv.org/abs/1303.3771 (2013)].

  4. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Solikhin

    2016-06-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single product inventory control problem and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, by using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from the optimal supplier so that the inventory level will be located at some point as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period, the proposed model generated the optimal supplier, and the inventory level tracked the reference point well.
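    The backward recursion behind such a stochastic dynamic program can be sketched on a toy instance. The two suppliers, demand distribution, and cost figures below are invented for illustration and are not the paper's model:

```python
import numpy as np

# Hypothetical problem data: three demand outcomes and two suppliers.
demands, probs = np.array([0, 1, 2]), np.array([0.3, 0.4, 0.3])
prices = {0: 2.0, 1: 3.0}           # unit purchase price per supplier
max_inv, ref, T = 6, 3, 4           # inventory cap, reference level, horizon
hold = 0.5                          # penalty per unit deviation from the reference

V = np.zeros(max_inv + 1)           # terminal value: zero cost-to-go
for t in range(T):
    V_new = np.full(max_inv + 1, np.inf)
    for s in range(max_inv + 1):            # current inventory level
        for supplier, price in prices.items():
            for q in range(max_inv + 1 - s):    # feasible order quantity
                nxt = np.clip(s + q - demands, 0, max_inv)
                cost = price * q + hold * abs(s + q - ref) + probs @ V[nxt]
                V_new[s] = min(V_new[s], cost)  # best (supplier, quantity) pair
    V = V_new

assert np.isfinite(V).all()
assert V[ref] < V[0]    # starting at the reference level is strictly cheaper
```

    The real model additionally draws the purchasing cost at random each period; the recursion is unchanged apart from an extra expectation over that cost.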

  5. A Semiparametric Change-Point Regression Model for Longitudinal Observations.

    PubMed

    Xing, Haipeng; Ying, Zhiliang

    2012-12-01

    Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when the experimental environment undergoes abrupt changes or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in regression coefficients. In this connection, we propose a semiparametric change-point regression model, in which the error process (stochastic component) is nonparametric and the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of change-points are unknown and need to be estimated. We further develop an estimation procedure which combines the recent advance in semiparametric analysis based on counting process argument and multiple change-points inference, and discuss its large sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.

  6. A new algorithm combining geostatistics with the surrogate data approach to increase the accuracy of comparisons of point radiation measurements with cloud measurements

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Lindau, R.; Varnai, T.; Simmer, C.

    2009-04-01

    Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In the case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated on such a kriged field. Stochastic modelling aims at reproducing the structure of the data. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. However, while stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. Because radiative transfer through clouds is a highly nonlinear process it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately as well as the correlations in the cloud field because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. However, up to now we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. The algorithm is tested on cloud fields from large eddy simulations (LES). On these clouds a measurement is simulated. From the pseudo-measurement we estimated the distribution and power spectrum. Furthermore, the pseudo-measurement is kriged to a field the size of the final surrogate cloud. The distribution, spectrum and the kriged field are the inputs to the algorithm.
    This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually reduced to zero. We work with four types of pseudo-measurements: one zenith pointing measurement (which together with the wind produces a line measurement), five zenith pointing measurements, a slow and a fast azimuth scan (which together with the wind produce spirals). Because we work with LES clouds and the truth is known, we can validate the algorithm by performing 3D radiative transfer calculations on the original LES clouds and on the new surrogate clouds. For comparison also the radiative properties of the kriged fields and standard surrogate fields are computed. Preliminary results already show that these new surrogate clouds reproduce the structure of the original clouds very well and the minima and maxima are located where the pseudo-measurements see them. The main limitation seems to be the amount of data, which is especially limited in the case of just one zenith pointing measurement.
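    The standard IAAFT algorithm that the new method extends alternates two projections: one enforcing the target power spectrum, one enforcing the target value distribution by rank ordering. A minimal 1-D sketch, without the paper's added nudging step toward the kriged field:

```python
import numpy as np

def iaaft(x, n_iter=100, rng=None):
    """Iterative amplitude adjusted Fourier transform surrogate of series x."""
    if rng is None:
        rng = np.random.default_rng()
    target_amp = np.abs(np.fft.rfft(x))   # target power spectrum
    sorted_x = np.sort(x)                 # target value distribution
    s = rng.permutation(x)
    for _ in range(n_iter):
        # step 1: impose the target amplitude spectrum, keep current phases
        spec = np.fft.rfft(s)
        s = np.fft.irfft(target_amp * np.exp(1j * np.angle(spec)), len(x))
        # step 2: impose the target value distribution by rank ordering
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s

rng = np.random.default_rng(8)
x = np.cumsum(rng.normal(size=1024))      # a correlated test series
s = iaaft(x, rng=rng)

assert np.allclose(np.sort(s), np.sort(x))            # distribution matched exactly
amp_x, amp_s = np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s))
assert np.corrcoef(amp_x[1:], amp_s[1:])[0, 1] > 0.95  # spectrum closely matched
```

    Ending on the rank-ordering step makes the value distribution exact while the spectrum is matched approximately; the new algorithm inserts a third step that pulls the surrogate toward the kriged field with a decaying weight.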

  7. Stochastic Community Assembly: Does It Matter in Microbial Ecology?

    PubMed

    Zhou, Jizhong; Ning, Daliang

    2017-12-01

    Understanding the mechanisms controlling community diversity, functions, succession, and biogeography is a central, but poorly understood, topic in ecology, particularly in microbial ecology. Although stochastic processes are believed to play nonnegligible roles in shaping community structure, their importance relative to deterministic processes is hotly debated. The importance of ecological stochasticity in shaping microbial community structure is far less appreciated. Some of the main reasons for such heavy debates are the difficulty in defining stochasticity and the diverse methods used for delineating stochasticity. Here, we provide a critical review and synthesis of data from the most recent studies on stochastic community assembly in microbial ecology. We then describe both stochastic and deterministic components embedded in various ecological processes, including selection, dispersal, diversification, and drift. We also describe different approaches for inferring stochasticity from observational diversity patterns and highlight experimental approaches for delineating ecological stochasticity in microbial communities. In addition, we highlight research challenges, gaps, and future directions for microbial community assembly research. Copyright © 2017 American Society for Microbiology.

  8. Stochastic Surface Mesh Reconstruction

    NASA Astrophysics Data System (ADS)

    Ozendi, M.; Akca, D.; Topan, H.

    2018-05-01

    A generic and practical methodology is presented for 3D surface mesh reconstruction from the terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step deals with developing an anisotropic point error model, which is capable of computing the theoretical precisions of 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure, which takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the smallest errors are used in the surface triangulation. The remaining ones are automatically discarded.

  9. Effect of elongation in divertor tokamaks

    NASA Astrophysics Data System (ADS)

    Jones, Morgin; Ali, Halima; Punjabi, Alkesh

    2008-04-01

    The method of maps developed by Punjabi and Boozer [A. Punjabi, A. Verma, and A. Boozer, Phys. Rev. Lett. 69, 3322 (1992)] is used to calculate the effects of elongation on the stochastic layer and magnetic footprint in divertor tokamaks. The parameters in the map are chosen such that the poloidal magnetic flux χSEP inside the ideal separatrix, the amplitude δ of magnetic perturbation, and the height H of the ideal separatrix surface are held fixed. The safety factor q for the flux surfaces that are nonchaotic as a function of normalized distance d from the O-point to the X-point is also held approximately constant. Under these conditions, the width W of the ideal separatrix surface in the midplane through the O-point is varied. The relative width w of the stochastic layer near the X-point and the area A of the magnetic footprint are then calculated. We find that the normalized width w of the stochastic layer scales as W^(-7), and the area A of the magnetic footprint on the collector plate scales as W^(-10).

  10. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    PubMed

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.

  11. An application of information theory to stochastic classical gravitational fields

    NASA Astrophysics Data System (ADS)

    Angulo, J.; Angulo, J. C.; Angulo, J. M.

    2018-06-01

    The objective of this study is to incorporate concepts developed in Information Theory (entropy, complexity, etc.) in order to quantify the variation of the uncertainty associated with a stochastic physical system resident in a spatiotemporal region. As an example of application, a relativistic classical gravitational field has been considered, with a stochastic behavior resulting from the effect induced by one or several external perturbation sources. One of the key concepts of the study is the covariance kernel between two points within the chosen region. Using this concept and appropriate criteria, a methodology is proposed to evaluate the change of uncertainty at a given spatiotemporal point, based on available information and efficiently applying the diverse methods that Information Theory provides. For illustration, a stochastic version of the Einstein equation with an added Gaussian Langevin term is analyzed.

  12. Recurrence plots of discrete-time Gaussian stochastic processes

    NASA Astrophysics Data System (ADS)

    Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick

    2016-09-01

    We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) an RP-based estimation of the ε-entropy κ(ε) in the sense of the correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and the threshold radius used to construct the RP.
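
    The comparison between empirical and theoretical values described above is easy to reproduce in the simplest case. A sketch, assuming i.i.d. Gaussian data (the zero-correlation limit of the processes studied), where the embedding-dimension-1 recurrence rate tends to erf(ε/2) for unit-variance noise:

```python
import math
import random

def recurrence_rate(x, eps):
    """Empirical recurrence rate at embedding dimension 1: the fraction of
    pairs (i, j), i != j, with |x_i - x_j| < eps."""
    n, hits = len(x), 0
    for i in range(n):
        for j in range(i + 1, n):
            if abs(x[i] - x[j]) < eps:
                hits += 2          # count both (i, j) and (j, i)
    return hits / (n * (n - 1))

rng = random.Random(0)
x = [rng.gauss(0.0, 1.0) for _ in range(500)]
eps = 1.0
rec = recurrence_rate(x, eps)
theory = math.erf(eps / 2.0)       # x_i - x_j ~ N(0, 2) for i.i.d. N(0, 1) data
print(rec, theory)
```

    The empirical rate converges to the closed-form value as the series length grows, which is exactly the kind of check the paper performs for AR(1) and fractional Gaussian noise.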

  13. Stochastic architecture for Hopfield neural nets

    NASA Technical Reports Server (NTRS)

    Pavel, Sandy

    1992-01-01

    An expandable stochastic digital architecture for recurrent (Hopfield-like) neural networks is proposed. The main features and basic principles of stochastic processing are presented. The stochastic digital architecture is based on a chip with n fully interconnected neurons and a pipelined, bit-level processing structure. For large applications, a flexible way to interconnect many such chips is provided.

  14. Rogue waves in terms of multi-point statistics and nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Hadjihosseini, Ali; Lind, Pedro; Mori, Nobuhito; Hoffmann, Norbert P.; Peinke, Joachim

    2017-04-01

    Ocean waves, which lead to rogue waves, are investigated against the background of complex systems. In contrast to deterministic approaches based on the nonlinear Schrödinger equation or focusing effects, we analyze this system in terms of a noisy stochastic system. In particular, we present a statistical method that maps the complexity of multi-point data into the statistics of hierarchically ordered height increments for different time scales. We show that the stochastic cascade process with Markov properties is governed by a Fokker-Planck equation. Conditional probabilities, as well as the Fokker-Planck equation itself, can be estimated directly from the available observational data. This stochastic description enables us to show several new aspects of wave states. Surrogate data sets can in turn be generated, allowing us to work out different statistical features of the complex sea state in general and extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics. As a new outlook, ocean wave states are considered in terms of nonequilibrium thermodynamics, for which the entropy production of different wave heights is considered. We show evidence that rogue waves are characterized by negative entropy production. The statistics of the entropy production can be used to distinguish different wave states.
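
    The statement that the Fokker-Planck equation can be estimated directly from data corresponds to conditional-moment (Kramers-Moyal) estimates of drift and diffusion. A hedged sketch on synthetic Ornstein-Uhlenbeck data rather than wave records; the binning and count thresholds below are illustrative choices:

```python
import random

def kramers_moyal(x, dt, n_bins=20):
    """Estimate drift D1(x) and diffusion D2(x) of a Markov series from the
    first two conditional increment moments (Fokker-Planck reconstruction)."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins or 1.0
    sums = [[0.0, 0.0, 0] for _ in range(n_bins)]
    for a, b in zip(x, x[1:]):
        i = min(int((a - lo) / width), n_bins - 1)
        d = b - a
        sums[i][0] += d
        sums[i][1] += d * d
        sums[i][2] += 1
    out = []
    for i, (s1, s2, n) in enumerate(sums):
        if n >= 50:                           # skip poorly populated bins
            xc = lo + (i + 0.5) * width
            out.append((xc, s1 / (n * dt), s2 / (2 * n * dt)))
    return out

# Synthetic Ornstein-Uhlenbeck input dX = -X dt + dW,
# so the estimates should recover D1(x) ~ -x and D2(x) ~ 0.5
rng = random.Random(2)
dt, x = 0.01, [0.0]
for _ in range(200_000):
    x.append(x[-1] - x[-1] * dt + rng.gauss(0.0, dt ** 0.5))
est = kramers_moyal(x, dt)
print(est[len(est) // 2])
```
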

  15. Doubly stochastic Poisson processes in artificial neural learning.

    PubMed

    Card, H C

    1998-01-01

    This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.
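
    A doubly stochastic Poisson (Cox) model is one where the Poisson intensity is itself random. A minimal sketch, assuming an intensity redrawn independently for each unit-time window (the paper's circuit-level model is not reproduced here):

```python
import math
import random
import statistics

def poisson(lam, rng):
    """Knuth's inversion sampler; adequate for the modest rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def cox_counts(n_windows, rate_sampler, rng):
    """Doubly stochastic Poisson (Cox) process: the intensity is itself a
    random variable, redrawn here for each unit-time window."""
    return [poisson(rate_sampler(rng), rng) for _ in range(n_windows)]

rng = random.Random(42)
counts = cox_counts(10_000, lambda r: r.uniform(0.0, 10.0), rng)
m, v = statistics.mean(counts), statistics.variance(counts)
# Law of total variance: Var[N] = E[lam] + Var[lam] > E[N] = E[lam],
# so a Cox process is overdispersed relative to a plain Poisson process
print(m, v)
```

    Overdispersion of the counts is one simple signature by which such doubly stochastic signals can be distinguished from ordinary Poisson spike streams.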

  16. Controllability of fractional higher order stochastic integrodifferential systems with fractional Brownian motion.

    PubMed

    Sathiyaraj, T; Balasubramaniam, P

    2017-11-30

    This paper presents a new set of sufficient conditions for controllability of fractional higher-order stochastic integrodifferential systems with fractional Brownian motion (fBm) in finite-dimensional space, using fractional calculus, a fixed point technique, and a stochastic analysis approach. In particular, we discuss the complete controllability of nonlinear fractional stochastic integrodifferential systems under the assumption that the corresponding linear fractional system is controllable. Finally, an example is presented to illustrate the efficiency of the obtained theoretical results.

  17. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-03-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
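
    The univariate case mentioned above, whose invariant is a beta distribution, can be sketched with Euler-Maruyama. The coefficients below are a standard Wright-Fisher-type choice assumed for illustration (invariant Beta(2aθ, 2a(1-θ)), here the uniform Beta(1, 1)); boundary clipping stands in for the exact bounded-space construction:

```python
import random

def wf_ensemble(n_paths, n_steps, dt, a, theta, rng):
    """Euler-Maruyama ensemble for dX = a*(theta - X) dt + sqrt(X*(1-X)) dW.
    Samples remain in the bounded space [0, 1]; the invariant density is
    Beta(2*a*theta, 2*a*(1-theta))."""
    xs = [theta] * n_paths
    for _ in range(n_steps):
        for i in range(n_paths):
            x = xs[i]
            dw = rng.gauss(0.0, dt ** 0.5)
            x += a * (theta - x) * dt + max(x * (1.0 - x), 0.0) ** 0.5 * dw
            xs[i] = min(max(x, 0.0), 1.0)  # clip the Euler overshoot at the boundary
    return xs

rng = random.Random(7)
xs = wf_ensemble(2000, 400, 0.01, 1.0, 0.5, rng)
print(sum(xs) / len(xs))   # invariant here is Beta(1, 1), mean 0.5
```

    The multiplicative noise amplitude vanishing at 0 and 1 is what keeps each sample (and hence the pair X, 1-X with its unit-sum constraint) inside the bounded sample space.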

  18. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing networks from simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785

  19. Connecting massive galaxies to dark matter haloes in BOSS - I. Is galaxy colour a stochastic process in high-mass haloes?

    NASA Astrophysics Data System (ADS)

    Saito, Shun; Leauthaud, Alexie; Hearin, Andrew P.; Bundy, Kevin; Zentner, Andrew R.; Behroozi, Peter S.; Reid, Beth A.; Sinha, Manodeep; Coupon, Jean; Tinker, Jeremy L.; White, Martin; Schneider, Donald P.

    2016-08-01

    We use subhalo abundance matching (SHAM) to model the stellar mass function (SMF) and clustering of the Baryon Oscillation Spectroscopic Survey (BOSS) `CMASS' sample at z ˜ 0.5. We introduce a novel method which accounts for the stellar mass incompleteness of CMASS as a function of redshift, and produce CMASS mock catalogues which include selection effects, reproduce the overall SMF, the projected two-point correlation function wp, and the CMASS dn/dz, and are made publicly available. We study the effects of assembly bias above collapse mass in the context of `age matching' and show that these effects are markedly different compared to the ones explored by Hearin et al. at lower stellar masses. We construct two models, one in which galaxy colour is stochastic (`AbM' model) and one which contains assembly bias effects (`AgM' model). By confronting the redshift-dependent clustering of CMASS with the predictions from our model, we argue that galaxy colours are not a stochastic process in high-mass haloes. Our results suggest that the colours of galaxies in high-mass haloes are determined by other halo properties besides halo peak velocity and that assembly bias effects play an important role in determining the clustering properties of this sample.

  20. Application of IFT and SPSA to servo system control.

    PubMed

    Rădac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M; Preitl, Stefan

    2011-12-01

    This paper treats the application of two data-based model-free gradient-based stochastic optimization techniques, i.e., iterative feedback tuning (IFT) and simultaneous perturbation stochastic approximation (SPSA), to servo system control. The representative case of controlled processes modeled by second-order systems with an integral component is discussed. New IFT and SPSA algorithms are suggested to tune the parameters of the state feedback controllers with an integrator in the linear-quadratic-Gaussian (LQG) problem formulation. An implementation case study concerning the LQG-based design of an angular position controller for a direct current servo system laboratory equipment is included to highlight the pros and cons of IFT and SPSA from an application's point of view. The comparison of IFT and SPSA algorithms is focused on an insight into their implementation.
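
    The SPSA side of the comparison is compact enough to sketch: the gradient is estimated from only two loss evaluations per iteration, whatever the parameter dimension. A toy Python version with simplified gain schedules and an illustrative quadratic loss, not the LQG servo tuning of the paper:

```python
import random

def spsa_minimize(loss, theta, n_iter, a, c, rng):
    """Simultaneous perturbation stochastic approximation: estimate the
    gradient from two loss evaluations per iteration using one random
    simultaneous (Rademacher) perturbation of all parameters."""
    p = len(theta)
    for k in range(1, n_iter + 1):
        ak = a / k                 # gain sequences (simplified schedules)
        ck = c / k ** 0.25
        delta = [1.0 if rng.random() < 0.5 else -1.0 for _ in range(p)]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (loss(plus) - loss(minus)) / (2.0 * ck)
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

rng = random.Random(11)
loss = lambda th: (th[0] - 3.0) ** 2 + (th[1] + 1.0) ** 2
theta = spsa_minimize(loss, [0.0, 0.0], 2000, 0.5, 0.1, rng)
print(theta)   # theta should approach the minimizer [3, -1]
```

    The two-evaluation cost per iteration, independent of the number of tuned parameters, is the practical advantage of SPSA that the paper weighs against IFT.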

  1. Oscillations and waves in a spatially distributed system with a 1/f spectrum

    NASA Astrophysics Data System (ADS)

    Koverda, V. P.; Skokov, V. N.

    2018-02-01

    A spatially distributed system with a 1/f power spectrum is described by two nonlinear stochastic equations. Conditions for the formation of auto-oscillations have been found using numerical methods. The simultaneous formation of 1/f and 1/k spectra, together with the formation and motion of waves under the action of white noise, has been demonstrated. The large extreme fluctuations with 1/f and 1/k spectra correspond to maximum entropy, which points to the stability of such processes. It is shown that, against the background of wave formation and motion, an external periodic forcing gives rise to spatio-temporal stochastic resonance, in which the region of periodic pulsations expands under the action of white noise.

  2. Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions.

    PubMed

    Lei, Youming; Zheng, Fan

    2016-12-01

    Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.

  3. Characterization of time series via Rényi complexity-entropy curves

    NASA Astrophysics Data System (ADS)

    Jauregui, M.; Zunino, L.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.

    2018-05-01

    One of the most useful tools for distinguishing between chaotic and stochastic time series is the so-called complexity-entropy causality plane. This diagram involves two complexity measures: the Shannon entropy and the statistical complexity. Recently, this idea has been generalized by considering the Tsallis monoparametric generalization of the Shannon entropy, yielding complexity-entropy curves. These curves have proven to enhance the discrimination among different time series related to stochastic and chaotic processes of numerical and experimental nature. Here we further explore these complexity-entropy curves in the context of the Rényi entropy, which is another monoparametric generalization of the Shannon entropy. By combining the Rényi entropy with the proper generalization of the statistical complexity, we associate a parametric curve (the Rényi complexity-entropy curve) with a given time series. We explore this approach in a series of numerical and experimental applications, demonstrating the usefulness of this new technique for time series analysis. We show that the Rényi complexity-entropy curves enable the differentiation among time series of chaotic, stochastic, and periodic nature. In particular, time series of stochastic nature are associated with curves displaying positive curvature in a neighborhood of their initial points, whereas curves related to chaotic phenomena have a negative curvature; finally, periodic time series are represented by vertical straight lines.
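
    The qualitative separation described (stochastic versus periodic series) can be probed with ordinal-pattern distributions and the Rényi entropy. A sketch computing the normalized Rényi permutation entropy for a few orders α; this omits the statistical-complexity axis of the full curves:

```python
import math
import random
from itertools import permutations

def ordinal_distribution(x, d):
    """Probabilities of the ordinal (permutation) patterns of length d."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        w = x[i:i + d]
        pattern = tuple(sorted(range(d), key=lambda k: w[k]))  # argsort of the window
        counts[pattern] += 1
    total = sum(counts.values())
    return [c / total for c in counts.values() if c > 0]

def renyi_entropy(probs, alpha, d):
    """Rényi entropy of order alpha, normalized by log(d!)."""
    if abs(alpha - 1.0) < 1e-12:
        h = -sum(p * math.log(p) for p in probs)       # Shannon limit
    else:
        h = math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)
    return h / math.log(math.factorial(d))

rng = random.Random(5)
noise = [rng.random() for _ in range(5000)]            # stochastic series
period = [math.sin(0.7 * i) for i in range(5000)]      # periodic series
for alpha in (0.5, 1.0, 2.0):
    print(alpha,
          renyi_entropy(ordinal_distribution(noise, 3), alpha, 3),
          renyi_entropy(ordinal_distribution(period, 3), alpha, 3))
```

    For the noise the normalized entropy stays near 1 across α, while the periodic series is pinned well below it, which is the kind of α-dependent separation the complexity-entropy curves exploit.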

  4. Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems

    PubMed Central

    Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J.

    2017-01-01

    Motivation: Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences of individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. Results: In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach to describe extrinsic and external variability and the τ-leaping algorithm to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage while losing an acceptable amount of accuracy. Additionally, the application to parameter optimization problems in stochastic biochemical reaction networks is shown, which is rarely applied due to its huge computational burden. To give further insight, a MATLAB script is provided including the proposed method applied to a simple toy example of gene expression. Availability and implementation: MATLAB code is available at Bioinformatics online. Contact: flassig@mpi-magdeburg.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881987
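
    The τ-leaping ingredient can be illustrated on the simplest birth-death gene-expression model (constant production, first-order degradation). Python rather than the paper's MATLAB, and without the sigma-point layer that handles extrinsic and external variability:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's inversion sampler; adequate for the small leap means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap_birth_death(k_prod, gamma, tau, n_steps, rng):
    """tau-leaping for 0 -> P (rate k_prod) and P -> 0 (rate gamma per
    molecule): each leap fires a Poisson number of reactions per channel
    with the propensities frozen over the interval tau."""
    n = 0
    for _ in range(n_steps):
        births = poisson(k_prod * tau, rng)
        deaths = poisson(gamma * n * tau, rng)
        n = max(n + births - deaths, 0)   # guard against leap overshoot
    return n

rng = random.Random(1)
samples = [tau_leap_birth_death(10.0, 1.0, 0.05, 400, rng) for _ in range(500)]
print(sum(samples) / len(samples))   # stationary mean is k_prod / gamma = 10
```

    Freezing the propensities over τ is what makes the method fast relative to exact simulation, at the cost of a controllable leap error.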

  5. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ɛ -machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.

  6. Building Development Monitoring in Multitemporal Remotely Sensed Image Pairs with Stochastic Birth-Death Dynamics.

    PubMed

    Benedek, C; Descombes, X; Zerubia, J

    2012-01-01

    In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computation complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.

  7. Extremal Theory for Stochastic Processes.

    DTIC Science & Technology

    1985-11-01

    ...the intensity measure has the Laplace transform L(f) = exp(-λ(1 - e^{-f})), whereas a compound Poisson process has Laplace transform (2.3.1) L(f)... (see Example 2.2.4 as an illustration of this). The result is a clustering of exceedances, leading to a compounding of events in the limiting point

  8. Minimum uncertainty and squeezing in diffusion processes and stochastic quantization

    NASA Technical Reports Server (NTRS)

    Demartino, S.; Desiena, S.; Illuminati, Fabrizio; Vitiello, Giuseppe

    1994-01-01

    We show that uncertainty relations, as well as minimum uncertainty coherent and squeezed states, are structural properties for diffusion processes. Through Nelson stochastic quantization we derive the stochastic image of the quantum mechanical coherent and squeezed states.

  9. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  10. Quantum Stochastic Trajectories: The Fokker-Planck-Bohm Equation Driven by the Reduced Density Matrix.

    PubMed

    Avanzini, Francesco; Moro, Giorgio J

    2018-03-15

    The quantum molecular trajectory is the deterministic trajectory, arising from the Bohm theory, that describes the instantaneous positions of the nuclei of molecules by assuring the agreement with the predictions of quantum mechanics. Therefore, it provides the suitable framework for representing the geometry and the motions of molecules without neglecting their quantum nature. However, the quantum molecular trajectory is extremely demanding from the computational point of view, and this strongly limits its applications. To overcome such a drawback, we derive a stochastic representation of the quantum molecular trajectory, through projection operator techniques, for the degrees of freedom of an open quantum system. The resulting Fokker-Planck operator is parametrically dependent upon the reduced density matrix of the open system. Because of the pilot role played by the reduced density matrix, this stochastic approach is able to represent accurately the main features of the open system motions both at equilibrium and out of equilibrium with the environment. To verify this procedure, the predictions of the stochastic and deterministic representation are compared for a model system of six interacting harmonic oscillators, where one oscillator is taken as the open quantum system of interest. The undeniable advantage of the stochastic approach is that of providing a simplified and self-contained representation of the dynamics of the open system coordinates. Furthermore, it can be employed to study the out of equilibrium dynamics and the relaxation of quantum molecular motions during photoinduced processes, like photoinduced conformational changes and proton transfers.

  11. Bidirectional Classical Stochastic Processes with Measurements and Feedback

    NASA Technical Reports Server (NTRS)

    Hahne, G. E.

    2005-01-01

    A measurement on a quantum system is said to cause the "collapse" of the quantum state vector or density matrix. An analogous collapse occurs with measurements on a classical stochastic process. This paper addresses the question of describing the response of a classical stochastic process when there is feedback from the output of a measurement to the input, and is intended to give a model for quantum-mechanical processes that occur along a space-like reaction coordinate. The classical system can be thought of in physical terms as two counterflowing probability streams, which stochastically exchange probability currents in a way that the net probability current, and hence the overall probability, suitably interpreted, is conserved. The proposed formalism extends the mathematics of those stochastic processes describable with linear, single-step, unidirectional transition probabilities, known as Markov chains and stochastic matrices. It is shown that a certain rearrangement and combination of the input and output of two stochastic matrices of the same order yields another matrix of the same type. Each measurement causes the partial collapse of the probability current distribution in the midst of such a process, giving rise to calculable, but non-Markov, values for the ensuing modification of the system's output probability distribution. The paper concludes with an analysis of a classical probabilistic version of the so-called grandfather paradox.
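
    The Markov-chain baseline that the formalism extends is summarized below: a column-stochastic matrix propagates a probability distribution, and a measurement "collapses" it by likelihood reweighting. Toy numbers for illustration only; the paper's bidirectional construction combines two such matrices:

```python
def step(T, dist):
    """One Markov step: new_j = sum_i T[j][i] * dist[i], with T
    column-stochastic (each column sums to 1)."""
    n = len(dist)
    return [sum(T[j][i] * dist[i] for i in range(n)) for j in range(n)]

def collapse(dist, likelihood):
    """Classical analogue of measurement collapse: reweight the distribution
    by the outcome likelihood for each state, then renormalize."""
    post = [p * l for p, l in zip(dist, likelihood)]
    z = sum(post)
    return [p / z for p in post]

T = [[0.9, 0.2],
     [0.1, 0.8]]                     # column-stochastic transition matrix
prior = step(T, [0.5, 0.5])          # one propagation step, ~ [0.55, 0.45]
post = collapse(prior, [0.2, 0.9])   # measurement whose outcome favors state 1
print(prior, post)
```

    The renormalization in the collapse step is what keeps the overall probability conserved, mirroring the paper's requirement on the net probability current.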

  12. Stochastic quantization of conformally coupled scalar in AdS

    NASA Astrophysics Data System (ADS)

    Jatkar, Dileep P.; Oh, Jae-Hyuk

    2013-10-01

    We explore the relation between stochastic quantization and holographic Wilsonian renormalization group flow further by studying a conformally coupled scalar in AdS_{d+1}. We establish a one-to-one mapping between the radial flow of its double trace deformation and the stochastic 2-point correlation function. This map is shown to be identical, up to a suitable field redefinition of the bulk scalar, to the original proposal in arXiv:1209.2242.

  13. Recursive stochastic effects in valley hybrid inflation

    NASA Astrophysics Data System (ADS)

    Levasseur, Laurence Perreault; Vennin, Vincent; Brandenberger, Robert

    2013-10-01

    Hybrid inflation is a two-field model where inflation ends because of a tachyonic instability, the duration of which is determined by stochastic effects and has important observational implications. Making use of the recursive approach to the stochastic formalism presented in [L. P. Levasseur, preceding article, Phys. Rev. D 88, 083537 (2013)], these effects are consistently computed. Through an analysis of backreaction, this method is shown to converge in the valley but points toward an (expected) instability in the waterfall. It is further shown that the quasistationarity of the auxiliary field distribution breaks down in the case of a short-lived waterfall. We find that the typical dispersion of the waterfall field at the critical point is then diminished, thus increasing the duration of the waterfall phase and jeopardizing the possibility of a short transition. Finally, we find that stochastic effects worsen the blue tilt of the curvature perturbations by an O(1) factor when compared with the usual slow-roll contribution.

  14. Stochastic differential equation model for linear growth birth and death processes with immigration and emigration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granita, E-mail: granitafc@gmail.com; Bahar, A.

    This paper discusses the linear birth and death with immigration and emigration (BIDE) process and its stochastic differential equation (SDE) model. The forward Kolmogorov equation of the continuous-time Markov chain (CTMC), with a central-difference approximation, was used to find the Fokker-Planck equation corresponding to a diffusion process having the stochastic differential equation of the BIDE process. The exact solution, mean and variance functions of the BIDE process were found.
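
    A hedged sketch of the resulting diffusion model: the drift and diffusion coefficients below are the standard ones for a linear BIDE chain (birth rate λ and death rate μ per individual, immigration a, emigration e), assumed here to illustrate the SDE rather than copied from the paper:

```python
import random

def bide_sde_paths(n_paths, t_end, dt, lam, mu, a, e, x0, rng):
    """Euler-Maruyama ensemble for the diffusion approximation of the BIDE
    chain: drift (lam - mu)*X + (a - e), diffusion sqrt(lam*X + mu*X + a + e),
    i.e. the coefficients of the Fokker-Planck equation obtained from the
    forward Kolmogorov equation via a central-difference approximation."""
    n_steps = int(t_end / dt)
    xs = [float(x0)] * n_paths
    for _ in range(n_steps):
        for i in range(n_paths):
            x = xs[i]
            drift = (lam - mu) * x + (a - e)
            diff = (lam * x + mu * x + a + e) ** 0.5
            x += drift * dt + diff * rng.gauss(0.0, dt ** 0.5)
            xs[i] = max(x, 0.0)   # population size stays nonnegative
    return xs

rng = random.Random(3)
xs = bide_sde_paths(1000, 5.0, 0.01, 1.0, 1.0, 2.0, 1.0, 10, rng)
print(sum(xs) / len(xs))   # balanced lam == mu: mean ODE gives x0 + (a - e)*t_end = 15
```

    With λ = μ the mean obeys the linear ODE m(t) = x0 + (a - e) t, so the ensemble average provides a quick check against the exact mean function the paper derives.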

  15. Interrupted monitoring of a stochastic process

    NASA Technical Reports Server (NTRS)

    Palmer, E.

    1977-01-01

    Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies. The optimum strategy is also compared to the strategies used by subjects in a pilot experiment.

  16. An estimator for the relative entropy rate of path measures for stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opper, Manfred, E-mail: manfred.opper@tu-berlin.de

    2017-02-01

    We address the problem of estimating the relative entropy rate (RER) for two stochastic processes described by stochastic differential equations. For the case where the drift of one process is known analytically, but one has only observations from the second process, we use a variational bound on the RER to construct an estimator.

  17. Stochastic Modelling of Seafloor Morphology

    DTIC Science & Technology

    1990-06-01

    trenches, and linear island chains to the point that many of the interesting questions of marine geology now concern the processes which have shaped...are all located near the Easter Island Microplate (Figure 5.2), a region with a complex tectonic history [Hey et al., 1985], and may be anomalous...the Clipperton Transform Fault [Macdonald and Fox, 1988]. Just south of Clipperton, the ridge crest is shallow (~2550 m) and the crestal horst is broad

  18. Intercellular Variability in Protein Levels from Stochastic Expression and Noisy Cell Cycle Processes

    PubMed Central

    Soltani, Mohammad; Vargas-Garcia, Cesar A.; Antunes, Duarte; Singh, Abhyudai

    2016-01-01

    Inside individual cells, expression of genes is inherently stochastic and manifests as cell-to-cell variability or noise in protein copy numbers. Since protein half-lives can be comparable to the cell-cycle length, randomness in cell-division times generates additional intercellular variability in protein levels. Moreover, as many mRNA/protein species are expressed at low copy numbers, errors incurred in partitioning of molecules between two daughter cells are significant. We derive analytical formulas for the total noise in protein levels when the cell-cycle duration follows a general class of probability distributions. Using a novel hybrid approach, the total noise is decomposed into components arising from (i) stochastic expression; (ii) partitioning errors at the time of cell division; and (iii) random cell-division events. These formulas reveal that random cell-division times not only generate additional extrinsic noise but also critically affect the mean protein copy numbers and intrinsic noise components. Counterintuitively, in some parameter regimes, noise in protein levels can decrease as cell-division times become more stochastic. Computations are extended to consider genome duplication, where the transcription rate is increased at a random point in the cell cycle. We systematically investigate how the timing of genome duplication influences different protein noise components. Intriguingly, results show that the noise contribution from stochastic expression is minimized at an optimal genome-duplication time. Our theoretical results motivate new experimental methods for decomposing protein noise levels from synchronized and asynchronized single-cell expression data. Characterizing the contributions of individual noise mechanisms will lead to precise estimates of gene expression parameters and techniques for altering stochasticity to change the phenotype of individual cells. PMID:27536771
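
    The interplay of the three noise sources can be mimicked in a few lines: Poisson production over a random cycle length, then binomial partitioning at division. A toy Monte Carlo in Python under assumed parameters (constitutive expression, Gaussian cycle-length noise, symmetric partitioning); the paper's decomposition itself is analytical:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's inversion sampler (fine for the means used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def lineage_protein(n_cycles, k, mean_cycle, cv, rng):
    """Follow one daughter lineage of a stable protein: Poisson production
    over a noisy cell-cycle length, then binomial partitioning at division."""
    n = 0
    for _ in range(n_cycles):
        t = max(rng.gauss(mean_cycle, cv * mean_cycle), 0.0)   # random division time
        n += poisson(k * t, rng)                               # stochastic expression
        n = sum(1 for _ in range(n) if rng.random() < 0.5)     # partitioning error
    return n

rng = random.Random(8)
cells = [lineage_protein(20, 5.0, 10.0, 0.2, rng) for _ in range(400)]
# Post-division fixed point: E[n] = (E[n] + k*T)/2, so E[n] = k*T = 50
print(sum(cells) / len(cells))
```

    Varying the cycle-length coefficient of variation cv in this sketch changes both the spread and the mean of the copy numbers, the coupling the analytical formulas make precise.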

  19. Stochastic Nature in Cellular Processes

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Liu, Sheng-Jun; Wang, Qi; Yan, Shi-Wei; Geng, Yi-Zhao; Sakata, Fumihiko; Gao, Xing-Fa

    2011-11-01

    The importance of stochasticity in cellular processes is increasingly recognized in both theoretical and experimental studies. General features of stochasticity in gene regulation and expression are briefly reviewed in this article, including the main experimental phenomena and the classification, quantification and regulation of noise. The correlation and transmission of noise in cascade networks are analyzed further, and stochastic simulation methods that can capture the effects of intrinsic and extrinsic noise are described.

  20. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution time of 500 realizations is reduced to 3% of that of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.
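
    The batch-processing pattern described above can be sketched with Python's standard library alone (the study itself uses the Java Parallel Processing Framework with MODFLOW; the model function and parameters below are stand-ins). A thread pool is shown for brevity; CPU-bound simulators would need a process pool or a distributed framework for real speedup.

```python
import concurrent.futures
import random
import statistics

def run_realization(seed):
    """Stand-in for one stochastic model run (e.g. one realization of a
    random conductivity field handed to a groundwater simulator): here
    it just draws a synthetic 'travel time' from a lognormal law."""
    rng = random.Random(seed)
    return rng.lognormvariate(2.0, 0.5)

def run_batch(n_realizations=500, workers=10):
    # Realizations are independent, so they can be farmed out to a
    # worker pool and collected as they complete.
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_realization, range(n_realizations)))

results = run_batch()
mean_travel_time = statistics.fmean(results)
```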

  1. Multidimensional stochastic approximation using locally contractive functions

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1975-01-01

    A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
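
    A one-dimensional sketch of the Robbins-Monro iteration toward the fixed point of a contractive regression function (the function, noise level and step-size schedule below are hypothetical choices, not those of the report):

```python
import random

random.seed(0)

def noisy_g(x):
    """Noisy observation of a contractive regression function
    g(x) = 0.5*x + 1, whose fixed point is x* = 2 (hypothetical)."""
    return 0.5 * x + 1.0 + random.gauss(0.0, 0.1)

x = 0.0
for n in range(1, 20001):
    a_n = 1.0 / n                      # step sizes: sum a_n = inf, sum a_n^2 < inf
    x = x + a_n * (noisy_g(x) - x)     # move toward the observed fixed point
```

    The diminishing step sizes average out the observation noise while still allowing the iterate to travel arbitrarily far, which is what the mean-square and almost-sure convergence results require.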

  2. Tipping point analysis of ocean acoustic noise

    NASA Astrophysics Data System (ADS)

    Livina, Valerie N.; Brouwer, Albert; Harris, Peter; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2018-02-01

    We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.
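
    The potential-analysis step can be sketched as follows: estimate the effective potential U(x) = -log p(x) from a fluctuation record and count its wells, each well corresponding to a system state. Synthetic bimodal data stands in for the acoustic fluctuations; bin counts and smoothing are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 'fluctuation' record: a bistable system visiting two states
data = np.concatenate([rng.normal(-1.0, 0.3, 100_000),
                       rng.normal(+1.0, 0.3, 100_000)])

# Empirical potential U(x) = -log p(x), up to an additive constant
hist, edges = np.histogram(data, bins=32, range=(-2.0, 2.0), density=True)
hist = np.convolve(hist, [0.25, 0.5, 0.25], mode="same")   # light smoothing
U = -np.log(hist + 1e-12)

# Count local minima of U: potential wells = detectable system states
interior = U[1:-1]
wells = int(np.sum((interior < U[:-2]) & (interior < U[2:])))
```

    A change in the detected number of wells over a sliding window is the kind of signature used to flag a transition between system states.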

  3. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    NASA Astrophysics Data System (ADS)

    Gelß, Patrick; Matera, Sebastian; Schütte, Christof

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this equation is too high-dimensional to be solved with standard numerical techniques, and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the tensor train format for this purpose. The performance of the approach is demonstrated on a first-principles-based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.
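
    A tensor-train solver is beyond a snippet, but the object it compresses, the Markovian master equation dp/dt = pQ, can be solved directly for a toy birth-death system small enough for dense linear algebra (rates hypothetical; a real lattice model needs the low-rank format precisely because this dense approach explodes combinatorially):

```python
import math
import numpy as np

# Birth-death master equation on the truncated state space {0, ..., N}:
# 'birth' (e.g. adsorption) at constant rate b, 'death' (desorption) at
# rate d per particle.  The stationary law is Poisson(b / d).
N, b, d = 20, 2.0, 1.0

# Generator matrix: Q[i, j] is the rate of jumping from state i to j
Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = b
    if i > 0:
        Q[i, i - 1] = d * i
    Q[i, i] = -Q[i].sum()          # rows sum to zero

# Explicit Euler integration of dp/dt = p Q, starting from the empty state
p = np.zeros(N + 1)
p[0] = 1.0
dt, T = 1e-3, 20.0
for _ in range(int(T / dt)):
    p = p + dt * (p @ Q)

# Reference: the Poisson(b/d) stationary distribution
lam = b / d
poisson = np.array([math.exp(-lam) * lam**i / math.factorial(i)
                    for i in range(N + 1)])
```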

  4. Learning stochastic reward distributions in a speeded pointing task.

    PubMed

    Seydell, Anna; McCann, Brian C; Trommershäuser, Julia; Knill, David C

    2008-04-23

    Recent studies have shown that humans effectively take into account task variance caused by intrinsic motor noise when planning fast hand movements. However, previous evidence suggests that humans have greater difficulty accounting for arbitrary forms of stochasticity in their environment, both in economic decision making and sensorimotor tasks. We hypothesized that humans can learn to optimize movement strategies when environmental randomness can be experienced and thus implicitly learned over several trials, especially if it mimics the kinds of randomness for which subjects might have generative models. We tested the hypothesis using a task in which subjects had to rapidly point at a target region partly covered by three stochastic penalty regions introduced as "defenders." At movement completion, each defender jumped to a new position drawn randomly from fixed probability distributions. Subjects earned points when they hit the target, unblocked by a defender, and lost points otherwise. Results indicate that after approximately 600 trials, subjects approached optimal behavior. We further tested whether subjects simply learned a set of stimulus-contingent motor plans or the statistics of defenders' movements by training subjects with one penalty distribution and then testing them on a new penalty distribution. Subjects immediately changed their strategy to achieve the same average reward as subjects who had trained with the second penalty distribution. These results indicate that subjects learned the parameters of the defenders' jump distributions and used this knowledge to optimally plan their hand movements under conditions involving stochastic rewards and penalties.
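
    Results like these are usually modeled as maximum-expected-gain planning under Gaussian motor noise. A one-dimensional sketch with hypothetical regions and noise level (not the experiment's geometry) shows how the optimal aim point shifts away from a penalty region:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_gain(aim, sd=0.4):
    """1D toy: target region [0, 1] scores +1, an adjacent penalty
    region [-1, 0] scores -5; the endpoint is Normal(aim, sd)."""
    p_target = Phi((1.0 - aim) / sd) - Phi((0.0 - aim) / sd)
    p_penalty = Phi((0.0 - aim) / sd) - Phi((-1.0 - aim) / sd)
    return 1.0 * p_target - 5.0 * p_penalty

# Scan candidate aim points: the optimum sits right of the target centre,
# trading a little target probability for a much smaller penalty risk.
aims = [i / 100.0 for i in range(0, 101)]
best_aim = max(aims, key=expected_gain)
```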

  5. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula for the approximation errors of hyperbolic cross truncations of bivariate stochastic Fourier cosine series. Moreover, we propose a kind of Fourier cosine expansion with polynomial factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842
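
    The attraction of hyperbolic cross truncation can be seen from a simple count: keeping the bivariate cosine modes (j, k) with (j+1)(k+1) <= N, instead of the full N-by-N block, reduces the number of coefficients from order N^2 to order N log N. A minimal sketch (N chosen arbitrarily):

```python
import math

N = 64

# Full tensor-product truncation keeps every pair (j, k) with j, k < N
full_count = N * N

# The hyperbolic cross keeps only pairs with (j + 1) * (k + 1) <= N,
# i.e. modes that are high-frequency in at most one direction
hyper = [(j, k) for j in range(N) for k in range(N)
         if (j + 1) * (k + 1) <= N]
hyper_count = len(hyper)

# The cross contains on the order of N log N modes, not N^2
n_log_n = int(N * (math.log(N) + 1))
```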

  6. Stochastic Processes in Physics: Deterministic Origins and Control

    NASA Astrophysics Data System (ADS)

    Demers, Jeffery

    Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere is this notion more prevalent than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives. Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and the stochastic mesoscopic perspective are both important and useful; used together, they allow a deeper and more satisfying understanding of the physics occurring over either scale.

  7. Stochasticity in materials structure, properties, and processing—A review

    NASA Astrophysics Data System (ADS)

    Hull, Robert; Keblinski, Pawel; Lewis, Dan; Maniatty, Antoinette; Meunier, Vincent; Oberai, Assad A.; Picu, Catalin R.; Samuel, Johnson; Shephard, Mark S.; Tomozawa, Minoru; Vashishth, Deepak; Zhang, Shengbai

    2018-03-01

    We review the concept of stochasticity—i.e., unpredictable or uncontrolled fluctuations in structure, chemistry, or kinetic processes—in materials. We first define six broad classes of stochasticity: equilibrium (thermodynamic) fluctuations; structural/compositional fluctuations; kinetic fluctuations; frustration and degeneracy; imprecision in measurements; and stochasticity in modeling and simulation. In this review, we focus on the first four classes that are inherent to materials phenomena. We next develop a mathematical framework for describing materials stochasticity and then show how it can be broadly applied to these four materials-related stochastic classes. In subsequent sections, we describe structural and compositional fluctuations at small length scales that modify material properties and behavior at larger length scales; systems with engineered fluctuations, concentrating primarily on composite materials; systems in which stochasticity is developed through nucleation and kinetic phenomena; and configurations in which constraints in a given system prevent it from attaining its ground state and cause it to attain several, equally likely (degenerate) states. We next describe how stochasticity in these processes results in variations in physical properties and how these variations are then accentuated by—or amplify—stochasticity in processing and manufacturing procedures. In summary, the origins of materials stochasticity, the degree to which it can be predicted and/or controlled, and the possibility of using stochastic descriptions of materials structure, properties, and processing as a new degree of freedom in materials design are described.

  8. A geometric stochastic approach based on marked point processes for road mark detection from high resolution aerial images

    NASA Astrophysics Data System (ADS)

    Tournaire, O.; Paparoditis, N.

    Road detection has been a topic of great interest in the photogrammetric and remote sensing communities since the end of the 70s. Many approaches dealing with various sensor resolutions, the nature of the scene or the desired accuracy of the extracted objects have been presented. This topic remains challenging today as the need for accurate and up-to-date data is becoming more and more important. In this context, we study the road network from a particular point of view, focusing on road marks, and in particular dashed lines. Indeed, these are very useful clues, both as evidence of a road and for higher-level tasks. For instance, they can be used to enhance quality and to improve road databases. It is also possible to delineate the different circulation lanes, their width and functionality (speed limit, special lanes for buses or bicycles...). In this paper, we propose a new robust and accurate top-down approach for dashed line detection based on stochastic geometry. Our approach is automatic in the sense that no intervention from a human operator is necessary to initialise the algorithm or to track errors during the process. The core of our approach relies on defining geometric, radiometric and relational models for dashed line objects. The model also has to deal with the interactions between the different objects making up a line, meaning that it introduces external knowledge taken from specifications. Our strategy is based on a stochastic method, and in particular marked point processes. Our goal is to find the object configuration minimising an energy function made up of a data attachment term, measuring the consistency of the image with respect to the objects, and a regularising term, managing the relationships between neighbouring objects. To sample the energy function, we use Green's algorithm coupled with simulated annealing to find its minimum. Results from aerial images at various resolutions are presented, showing that our approach is relevant and accurate as it can handle the most frequent layouts of dashed lines. Some issues, such as the relative weighting of the two terms of the energy, are also discussed in the conclusion.
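
    A faithful implementation on image data is involved, but the energy-minimisation idea can be sketched in one dimension: points are proposed with birth, death and move kernels and accepted by Metropolis rules under an annealing schedule (a simplification of the reversible-jump acceptance ratios of Green's algorithm). The data term, prior weights and mark positions below are all hypothetical.

```python
import math
import random

random.seed(4)

TRUE_MARKS = [10.0, 30.0, 50.0, 70.0, 90.0]   # hypothetical dash centres
DOMAIN = 100.0

def energy(points):
    """Data term rewards explaining each mark (a stand-in for image
    consistency); a prior penalises extra points and close pairs."""
    e = 0.5 * len(points)                           # complexity prior
    for m in TRUE_MARKS:
        if any(abs(p - m) < 2.0 for p in points):
            e -= 1.0                                # mark explained
    for i, p in enumerate(points):
        for q in points[i + 1:]:
            if abs(p - q) < 4.0:
                e += 2.0                            # repulsion term
    return e

points, e = [], energy([])
best = e
for step in range(20000):
    temp = max(0.01, 1.0 - step / 20000)            # annealing schedule
    cand = points[:]
    move = random.random()
    if move < 0.4 or not cand:                      # birth kernel
        cand.append(random.uniform(0.0, DOMAIN))
    elif move < 0.7:                                # death kernel
        cand.pop(random.randrange(len(cand)))
    else:                                           # move kernel
        i = random.randrange(len(cand))
        cand[i] += random.gauss(0.0, 1.0)
    e_new = energy(cand)
    if e_new <= e or random.random() < math.exp((e - e_new) / temp):
        points, e = cand, e_new
        best = min(best, e)
```

    As the temperature drops, the sampler settles into a configuration with one point per mark and no spurious points, the discrete analogue of the optimal dashed-line configuration.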

  9. Stochastic modelling of microstructure formation in solidification processes

    NASA Astrophysics Data System (ADS)

    Nastac, Laurentiu; Stefanescu, Doru M.

    1997-07-01

    To relax many of the assumptions used in continuum approaches, a general stochastic model has been developed. The stochastic model can be used not only for an accurate description of the fraction of solid evolution, and therefore accurate cooling curves, but also for simulation of microstructure formation in castings. The advantage of using the stochastic approach is to give a time- and space-dependent description of solidification processes. Time- and space-dependent processes can also be described by partial differential equations. Unlike a differential formulation which, in most cases, has to be transformed into a difference equation and solved numerically, the stochastic approach is essentially a direct numerical algorithm. The stochastic model is comprehensive, since the competition between various phases is considered. Furthermore, grain impingement is directly included through the structure of the model. In the present research, all grain morphologies are simulated with this procedure. The relevance of the stochastic approach is that the simulated microstructures can be directly compared with microstructures obtained from experiments. The computer becomes a 'dynamic metallographic microscope'. A comparison between deterministic and stochastic approaches has been performed. An important objective of this research was to answer the following general questions: (1) 'Would fully deterministic approaches continue to be useful in solidification modelling?' and (2) 'Would stochastic algorithms be capable of entirely replacing purely deterministic models?'

  10. Modeling and Properties of Nonlinear Stochastic Dynamical System of Continuous Culture

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Feng, Enmin; Ye, Jianxiong; Xiu, Zhilong

    The stochastic counterpart to the deterministic description of continuous fermentation by ordinary differential equations is investigated for the process of glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae. We briefly discuss the continuous fermentation process driven by three-dimensional Brownian motion with Lipschitz coefficients, which is appropriate for the actual fermentation. Subsequently, we study the existence and uniqueness of solutions for the stochastic system, as well as the boundedness of the second-order moment and the Markov property of the solution. Finally, stochastic simulation is carried out using the Euler-Maruyama method.
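
    The Euler-Maruyama scheme itself is simple to sketch. The scalar SDE below, logistic growth with multiplicative noise, is a generic stand-in for the fermentation kinetics, with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# dX = r X (1 - X/K) dt + sigma X dW : logistic growth with
# multiplicative noise, standing in for the fermentation dynamics
r, K, sigma = 1.0, 1.0, 0.1
T, n_steps = 20.0, 20_000
dt = T / n_steps

x = np.empty(n_steps + 1)
x[0] = 0.1
for i in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))    # Brownian increment over dt
    x[i + 1] = x[i] + r * x[i] * (1.0 - x[i] / K) * dt + sigma * x[i] * dW

tail_mean = x[n_steps // 2:].mean()      # long-run level, near K
```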

  11. Site correction of stochastic simulation in southwestern Taiwan

    NASA Astrophysics Data System (ADS)

    Lun Huang, Cong; Wen, Kuo Liang; Huang, Jyun Yan

    2014-05-01

    Peak ground acceleration (PGA) of a disastrous earthquake is of concern in both civil engineering and seismology. Ground motion prediction equations are widely used by engineers for PGA estimation. However, the local site effect is another important factor in strong-motion prediction. For example, in 1985 Mexico City, 400 km from the epicenter, suffered massive damage due to seismic-wave amplification in the local alluvial layers (Anderson et al., 1986). In past studies, the stochastic method has performed well in simulating ground motion at rock sites (Beresnev and Atkinson, 1998a; Roumelioti and Beresnev, 2003). In this study, site correction was conducted using the empirical transfer function applied to the rock-site response from the stochastic point-source (Boore, 2005) and finite-fault (Boore, 2009) methods. The errors between the simulated and observed Fourier spectra and PGA are calculated. We further compared the estimated PGA to the result calculated from a ground motion prediction equation. The earthquake data used in this study were recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) from 1991 to 2012; the study area is located in southwestern Taiwan. The empirical transfer function was generated by calculating the spectral ratio between an alluvial site and a rock site (Borcherdt, 1970). Due to the lack of a reference rock-site station in this area, the rock-site ground motion was generated with a stochastic point-source model instead. Several target events were then chosen for stochastic point-source simulation of the halfspace response, and the empirical transfer function for each station was multiplied by the simulated halfspace response. Finally, we focused on two target events: the 1999 Chi-Chi earthquake (Mw=7.6) and the 2010 Jiashian earthquake (Mw=6.4). Because a large event may involve a complex rupture mechanism, the asperity and delay time of each sub-fault must be considered; both the stochastic point-source and finite-fault models were used to check the result of our correction.
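
    The spectral-ratio construction of the empirical transfer function can be sketched on synthetic records in which the "soil" site is an idealized, frequency-flat amplification of the "rock" site (real transfer functions are frequency dependent and averaged over smoothed spectra from many events):

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic 'recordings': the soil site amplifies the rock-site motion
rock = rng.standard_normal(1024)
soil = 2.0 * rock                       # idealized frequency-flat amplification

# Empirical transfer function: spectral ratio soil / rock
rock_spec = np.abs(np.fft.rfft(rock))
soil_spec = np.abs(np.fft.rfft(soil))
etf = soil_spec / rock_spec             # ~2.0 at every frequency here

# Site correction: apply the ETF to a simulated rock-site spectrum
simulated_rock_spec = np.abs(np.fft.rfft(rng.standard_normal(1024)))
corrected_spec = simulated_rock_spec * etf
```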

  12. A stochastic differential equations approach for the description of helium bubble size distributions in irradiated metals

    NASA Astrophysics Data System (ADS)

    Seif, Dariush; Ghoniem, Nasr M.

    2014-12-01

    A rate theory model based on the theory of nonlinear stochastic differential equations (SDEs) is developed to estimate the time-dependent size distribution of helium bubbles in metals under irradiation. Using approaches derived from Itô's calculus, rate equations for the first five moments of the size distribution in helium-vacancy space are derived, accounting for the stochastic nature of the atomic processes involved. In the first iteration of the model, the distribution is represented as a bivariate Gaussian distribution. The spread of the distribution about the mean is obtained from white-noise terms in the second-order moments, driven by fluctuations in the general absorption and emission of point defects by bubbles, and by fluctuations stemming from collision cascades. This statistical model for the reconstruction of the distribution from its moments is coupled to a previously developed reduced-set, mean-field, rate theory model. As an illustrative case study, the model is applied to a tungsten plasma-facing component under irradiation. Our findings highlight the important role of stochastic atomic fluctuations in the evolution of helium-vacancy cluster size distributions. It is found that when the average bubble size is small (at low dpa levels), the relative spread of the distribution is large and average bubble pressures may be very large. As bubbles begin to grow in size, average bubble pressures decrease, and stochastic fluctuations have a lessened effect. The distribution becomes tighter as it evolves in time, corresponding to a more uniform bubble population. The model is formulated in a general way, capable of including point defect drift due to internal temperature and/or stress gradients. These arise during pulsed irradiation, and also during steady irradiation as a result of externally applied or internally generated non-homogeneous stress fields. We discuss how the model can be extended to include full spatial resolution, and how a path-integral approach may be implemented if the distribution is known experimentally to stray significantly from a Gaussian description.
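
    The moment-equation idea can be illustrated on a solvable case: for geometric Brownian motion dX = aX dt + bX dW, Itô's formula yields closed ODEs for the first two moments, which a small Euler-Maruyama ensemble reproduces (this is a textbook stand-in, not the helium-bubble model; parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# dX = a X dt + b X dW.  Ito's formula gives closed moment equations:
#   d<X>/dt   = a <X>            =>  <X>(t)   = X0   * exp(a t)
#   d<X^2>/dt = (2a + b^2)<X^2>  =>  <X^2>(t) = X0^2 * exp((2a + b^2) t)
a, b, x0, T = -0.5, 0.3, 1.0, 2.0
m1_exact = x0 * np.exp(a * T)
m2_exact = x0**2 * np.exp((2 * a + b**2) * T)

# Euler-Maruyama ensemble for comparison
n_paths, n_steps = 20_000, 2_000
dt = T / n_steps
x = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x = x + a * x * dt + b * x * dW

m1_mc, m2_mc = x.mean(), (x**2).mean()
```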

  13. On the Interplay between the Evolvability and Network Robustness in an Evolutionary Biological Network: A Systems Biology Approach

    PubMed Central

    Chen, Bor-Sen; Lin, Ying-Po

    2011-01-01

    In the evolutionary process, the random transmission and mutation of genes provide biological diversity for natural selection. In order to preserve functional phenotypes between generations, gene networks need to evolve robustly under the influence of random perturbations. Therefore, the robustness of the phenotype, in the evolutionary process, exerts a selection force on gene networks to preserve network function. However, gene networks need to adjust, by variations in genetic content, to generate phenotypes for new challenges in the network's evolution, i.e., the evolvability. Hence, there should be some interplay between the evolvability and network robustness in evolutionary gene networks. In this study, the interplay between the evolvability and network robustness of a gene network and a biochemical network is discussed from a nonlinear stochastic system point of view. It was found that if the genetic robustness plus environmental robustness is less than the network robustness, the phenotype of the biological network is robust in evolution. The tradeoff between genetic robustness and environmental robustness in evolution is discussed from the stochastic stability robustness and sensitivity of the nonlinear stochastic biological network, which may be relevant to the statistical tradeoff between bias and variance, the so-called bias/variance dilemma. Further, the tradeoff could be considered as an antagonistic pleiotropic action of a gene network and discussed from the systems biology perspective. PMID:22084563

  14. AGN Accretion Physics in the Time Domain: Survey Cadences, Stochastic Analysis, and Physical Interpretations

    NASA Astrophysics Data System (ADS)

    Moreno, Jackeline; Vogeley, Michael S.; Richards, Gordon; O'Brien, John T.; Kasliwal, Vishal

    2018-01-01

    We present rigorous testing of survey cadences (K2, SDSS, CRTS and Pan-STARRS) for quasar variability science using a magnetohydrodynamic synthetic lightcurve and the canonical lightcurve from Kepler, Zw 229.15. We review the state of the art with regard to physical interpretations of stochastic models (CARMA) applied to AGN variability. Quasar variability offers a time-domain approach to probing accretion physics at the SMBH scale. Evidence shows that the strongest amplitude changes in the brightness of AGN occur on long timescales, ranging from months to hundreds of days. These global behaviors can be constrained by survey data despite low sampling resolution. CARMA processes provide a flexible family of models used to interpolate between data points, predict future observations and describe behaviors in a lightcurve. This is accomplished by decomposing a signal into rise and decay timescales, frequencies for cyclic behavior and shock amplitudes. Characteristic timescales may point to length scales over which a physical process operates, such as turbulent eddies, warping or hotspots due to local thermal instabilities. We present the distribution in CARMA parameter space of the SDSS Stripe 82 quasars that pass our cadence tests, and we explain how the Damped Harmonic Oscillator model, CARMA(2,1), reduces to the Damped Random Walk, CARMA(1,0), given the data in a specific region of the parameter space.
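
    The Damped Random Walk, CARMA(1,0), is the Ornstein-Uhlenbeck process and can be simulated exactly with its known one-step update; the long-run variance sigma^2 * tau / 2 provides a check. The timescale and amplitude below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Damped random walk (CARMA(1,0) / Ornstein-Uhlenbeck):
#   dX = -(X / tau) dt + sigma dW
tau, sigma = 100.0, 0.2          # hypothetical timescale (days) and amplitude
dt, n = 1.0, 200_000

# Exact one-step update over dt (no discretization error)
a = np.exp(-dt / tau)
s = sigma * np.sqrt(tau / 2.0 * (1.0 - a**2))

x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + s * rng.standard_normal()

var_exact = sigma**2 * tau / 2.0     # stationary variance of the DRW
var_mc = x.var()
```

    Fitting a and s to an observed lightcurve recovers the decay timescale and amplitude; the overdamped limit of the CARMA(2,1) oscillator collapses onto this single-timescale behavior.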

  15. Effects of stochastic time-delayed feedback on a dynamical system modeling a chemical oscillator.

    PubMed

    González Ochoa, Héctor O; Perales, Gualberto Solís; Epstein, Irving R; Femat, Ricardo

    2018-05-01

    We examine how stochastic time-delayed negative feedback affects the dynamical behavior of a model oscillatory reaction. We apply constant and stochastic time-delayed negative feedbacks to a point Field-Körös-Noyes photosensitive oscillator and compare their effects. Negative feedback is applied in the form of simulated inhibitory electromagnetic radiation with an intensity proportional to the concentration of oxidized light-sensitive catalyst in the oscillator. We first characterize the system under nondelayed inhibitory feedback; then we explore and compare the effects of constant (deterministic) versus stochastic time-delayed feedback. We find that the oscillatory amplitude, frequency, and waveform are essentially preserved when low-dispersion stochastic delayed feedback is used, whereas small but measurable changes appear when a large dispersion is applied.
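
    A minimal sketch of stochastic time-delayed negative feedback (a generic linear system, not the Field-Körös-Noyes model): the delay is redrawn every integration step from a uniform distribution, and with moderate dispersion the damped response survives essentially intact. The gain and delay values are hypothetical.

```python
import random

random.seed(2)

def simulate(k=1.0, mean_delay=1.0, jitter=0.5, T=40.0, dt=0.01):
    """Euler integration of dx/dt = -k * x(t - tau_t), where the
    feedback delay tau_t is redrawn each step from
    Uniform(mean_delay - jitter, mean_delay + jitter)."""
    n = int(T / dt)
    hist_len = int((mean_delay + jitter) / dt) + 1
    x = [1.0] * hist_len                 # constant pre-history x(t <= 0) = 1
    for _ in range(n):
        tau = random.uniform(mean_delay - jitter, mean_delay + jitter)
        lag = int(tau / dt)
        delayed = x[-1 - lag]            # state tau time units ago
        x.append(x[-1] - k * delayed * dt)
    return x

traj = simulate()
```

    With k times the mean delay below the instability threshold, the trajectory decays through damped oscillations despite the randomly jittered delay.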

  17. Systems Reliability Framework for Surface Water Sustainability and Risk Management

    NASA Astrophysics Data System (ADS)

    Myers, J. R.; Yeghiazarian, L.

    2016-12-01

    With microbial contamination posing a serious threat to the availability of clean water across the world, it is necessary to develop a framework that evaluates the safety and sustainability of water systems with respect to non-point source fecal microbial contamination. The concept of water safety is closely related to the concept of failure in reliability theory. In water quality problems, the event of failure can be defined as the concentration of microbial contamination exceeding a certain standard for usability of water. It is pertinent in watershed management to know the likelihood of such an event of failure occurring at a particular point in space and time. Microbial fate and transport are driven by environmental processes taking place in complex, multi-component, interdependent environmental systems that are dynamic and spatially heterogeneous, which means these processes and therefore their influences upon microbial transport must be considered stochastic and variable through space and time. A physics-based stochastic model of microbial dynamics is presented that propagates uncertainty using a unique sampling method based on artificial neural networks to produce a correlation between watershed characteristics and spatial-temporal probabilistic patterns of microbial contamination. These results are used to address the question of water safety through several sustainability metrics: reliability, vulnerability, resilience and a composite sustainability index. System reliability is described uniquely through the temporal evolution of risk along watershed points or pathways. Probabilistic resilience describes how long the system is above a certain probability of failure, and the vulnerability metric describes how the temporal evolution of risk changes throughout a hierarchy of failure levels. Additionally, our approach allows for the identification of contributions in microbial contamination and uncertainty from specific pathways and sources. We expect that this framework will significantly improve the efficiency and precision of sustainable watershed management strategies by providing a better understanding of how watershed characteristics and environmental parameters affect surface water quality and sustainability.

  18. Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems.

    PubMed

    Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J

    2017-07-15

    Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences between individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach to describe extrinsic and external variability and the τ-leaping algorithm to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage at an acceptable loss of accuracy. Additionally, the application to parameter optimization problems in stochastic biochemical reaction networks, which is rarely attempted due to its huge computational burden, is demonstrated. To give further insight, a MATLAB script is provided including the proposed method applied to a simple toy example of gene expression. MATLAB code is available at Bioinformatics online. Contact: flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
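
    As a concrete illustration of the combination described above, the sketch below pairs a τ-leaping simulator of a birth-death gene expression model with a three-point (sigma point) rule for extrinsic variability in the production rate. This is a minimal caricature under assumed rates and an assumed unscented weighting, not the authors' benchmarked implementation.

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson variate (Knuth's method; adequate for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap(x0, k_prod, k_deg, tau, n_steps, rng):
    """Tau-leaping for a birth-death model: production at rate k_prod,
    first-order degradation at rate k_deg * x."""
    x = x0
    for _ in range(n_steps):
        births = poisson(k_prod * tau, rng)
        deaths = poisson(k_deg * x * tau, rng)
        x = max(x + births - deaths, 0)
    return x

def sigma_points_1d(mu, sigma, kappa=2.0):
    """Three deterministic sample points and weights for a 1-D Gaussian
    parameter, in the style of the unscented transform."""
    s = math.sqrt(1.0 + kappa)
    pts = [mu, mu + s * sigma, mu - s * sigma]
    wts = [kappa / (1.0 + kappa), 0.5 / (1.0 + kappa), 0.5 / (1.0 + kappa)]
    return pts, wts

rng = random.Random(1)
# Extrinsic variability: the production rate is itself Gaussian(10, 1)
pts, wts = sigma_points_1d(10.0, 1.0)
means = []
for k_prod in pts:
    endpoints = [tau_leap(0, k_prod, 1.0, 0.05, 400, rng) for _ in range(200)]
    means.append(sum(endpoints) / len(endpoints))
overall_mean = sum(w * m for w, m in zip(wts, means))
# Stationary mean of the birth-death model is k_prod / k_deg, so ~10 here
```

    Only three deterministic parameter values are simulated, instead of a full Monte Carlo sweep over the extrinsic distribution; this is the source of the computational advantage the abstract reports.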

  19. Real-time forecasting of an epidemic using a discrete time stochastic model: a case study of pandemic influenza (H1N1-2009).

    PubMed

    Nishiura, Hiroshi

    2011-02-16

    Real-time forecasting of epidemics, especially via a likelihood-based approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak, with all the observed data points falling within the uncertainty bounds. Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simple model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of the disease surveillance.
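
    The forecasting recipe summarized above (iterate a branching process forward and read uncertainty bounds off the spread of simulated offspring chains) can be sketched as follows. The Poisson offspring distribution, the reproduction number R, and all numeric values are illustrative assumptions, not the paper's calibrated model.

```python
import math
import random

def poisson_knuth(lam, rng):
    """Sample a Poisson variate (Knuth's method; adequate for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def forecast(incidence_now, R, weeks, n_sims, rng):
    """Forecast weekly incidence with a Poisson branching process; the
    uncertainty bounds come from the spread of the simulated chains."""
    paths = []
    for _ in range(n_sims):
        cases, path = incidence_now, []
        for _ in range(weeks):
            # each of the current cases produces Poisson(R) offspring
            cases = sum(poisson_knuth(R, rng) for _ in range(cases))
            path.append(cases)
        paths.append(path)
    bounds = []
    for w in range(weeks):
        vals = sorted(p[w] for p in paths)
        n = len(vals)
        bounds.append((vals[n // 2], vals[int(0.025 * n)], vals[int(0.975 * n)]))
    return bounds  # per week: (median, lower 2.5%, upper 97.5%)

rng = random.Random(0)
bounds = forecast(incidence_now=10, R=1.5, weeks=3, n_sims=400, rng=rng)
```

    With R above one, the forecast median grows week on week and the interval widens, mimicking the pre-peak behavior the abstract describes.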

  20. Introduction to Stochastic Simulations for Chemical and Physical Processes: Principles and Applications

    ERIC Educational Resources Information Center

    Weiss, Charles J.

    2017-01-01

    An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach obscuring the methodology being…

  1. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    PubMed

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component into their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, performed often without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back propagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.

  2. Stochastic Formalism for Thermally Driven Distribution Frontier: A Nonempirical Approach to the Potential Escape Problem

    NASA Astrophysics Data System (ADS)

    Akashi, Ryosuke; Nagornov, Yuri S.

    2018-06-01

    We develop a non-empirical scheme to search for the minimum-energy escape paths from the minima of the potential surface to unknown saddle points nearby. A stochastic algorithm is constructed to move the walkers up the surface through the potential valleys. This method employs only the local gradient and diagonal part of the Hessian matrix of the potential. An application to a two-dimensional model potential is presented to demonstrate the successful finding of the paths to the saddle points. The present scheme could serve as a starting point toward first-principles simulation of rare events across the potential basins free from empirical collective variables.
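
    The ingredients named in the abstract (local gradient plus the diagonal of the Hessian) are enough to write down a deterministic caricature of climbing out of a potential minimum toward a saddle: pick the softest coordinate from the diagonal Hessian and invert the force along it, in the spirit of min-mode following. The model potential, step size, and the deterministic update below are assumptions for illustration; the paper's actual scheme is stochastic.

```python
def grad(p):
    """Gradient of the model potential V(x, y) = (x^2 - 1)^2 + 5*y^2,
    which has minima at (+/-1, 0) and a saddle at (0, 0)."""
    x, y = p
    return (4 * x * (x * x - 1), 10 * y)

def hess_diag(p):
    """Diagonal of the Hessian of V."""
    x, y = p
    return (12 * x * x - 4, 10.0)

def climb_to_saddle(p, step=0.05, n_steps=500):
    """Relax along the stiff coordinate but invert the force along the
    softest coordinate (smallest diagonal Hessian entry), so the walker
    climbs out of the potential valley toward the saddle."""
    for _ in range(n_steps):
        g = grad(p)
        h = hess_diag(p)
        soft = 0 if h[0] < h[1] else 1
        f = [-g[0], -g[1]]       # downhill force...
        f[soft] = g[soft]        # ...flipped along the soft mode: uphill
        p = (p[0] + step * f[0], p[1] + step * f[1])
    return p

saddle = climb_to_saddle((0.9, 0.2))   # start near the minimum at (1, 0)
```

    Starting near the minimum at (1, 0), the walker slides down in y while climbing up in x, ending at the saddle (0, 0).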

  3. Towards a theory of stochastic vorticity-augmentation. [tornado model

    NASA Technical Reports Server (NTRS)

    Liu, V. C.

    1977-01-01

    A new hypothesis to account for the formation of tornadoes is presented. An elementary one-dimensional theory is formulated for vorticity transfer between an ambient sheared wind and a transverse penetrating jet. The theory points out the relevant quantities to be determined in describing the present stochastic mode of vorticity augmentation.

  4. BACKWARD ESTIMATION OF STOCHASTIC PROCESSES WITH FAILURE EVENTS AS TIME ORIGINS1

    PubMed Central

    Gary Chan, Kwun Chuen; Wang, Mei-Cheng

    2011-01-01

    Stochastic processes often exhibit sudden systematic changes in pattern a short time before certain failure events. Examples include increases in medical costs before death and decreases in CD4 counts before AIDS diagnosis. To study such terminal behavior of stochastic processes, a natural and direct way is to align the processes using failure events as time origins. This paper studies backward stochastic processes counting time backward from failure events, and proposes one-sample nonparametric estimation of the mean of backward processes when follow-up is subject to left truncation and right censoring. We discuss the benefits of including prevalent cohort data to enlarge the identifiable region, as well as large-sample properties of the proposed estimator and related extensions. A SEER-Medicare linked data set is used to illustrate the proposed methodologies. PMID:21359167

  5. Itô and Stratonovich integrals on compound renewal processes: the normal/Poisson case

    NASA Astrophysics Data System (ADS)

    Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.

    2010-06-01

    Continuous-time random walks, or compound renewal processes, are pure-jump stochastic processes with several applications in insurance, finance, economics and physics. Based on heuristic considerations, a definition is given for stochastic integrals driven by continuous-time random walks, which includes the Itô and Stratonovich cases. It is then shown how the definition can be used to compute these two stochastic integrals by means of Monte Carlo simulations. Our example is based on the normal compound Poisson process, which in the diffusive limit converges to the Wiener process.
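
    The Monte Carlo computation described above can be illustrated on a toy path: for a normal compound Poisson process started at zero, the pathwise Itô integral of X dX uses the pre-jump value, the Stratonovich (midpoint) version uses the average of pre- and post-jump values, and the two differ by half the sum of squared jumps. A minimal sketch (the rate and horizon are arbitrary choices):

```python
import random

def compound_poisson_path(rate, T, rng):
    """Jump times from a Poisson process of the given rate on [0, T];
    jump sizes are standard normal (a normal compound Poisson process)."""
    t, times, sizes = 0.0, [], []
    while True:
        t += rng.expovariate(rate)
        if t > T:
            break
        times.append(t)
        sizes.append(rng.gauss(0.0, 1.0))
    return times, sizes

def integrate_x_dx(sizes):
    """Pathwise integrals of X dX: Ito evaluates X at the pre-jump value,
    Stratonovich at the midpoint of pre- and post-jump values."""
    x = ito = strat = 0.0
    for dx in sizes:
        ito += x * dx
        strat += (x + 0.5 * dx) * dx
        x += dx
    return ito, strat, x

rng = random.Random(7)
_, sizes = compound_poisson_path(rate=5.0, T=10.0, rng=rng)
ito, strat, x_T = integrate_x_dx(sizes)
# Pathwise identities: strat == x_T**2 / 2 and ito == strat - sum(dx**2)/2
```

    The midpoint rule telescopes exactly to x_T²/2 on every path, which makes the Itô-Stratonovich difference directly visible in the simulation.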

  6. Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process

    NASA Astrophysics Data System (ADS)

    Turner, Douglas C.; Ladde, Gangaram S.

    2018-03-01

    Analytical solutions, discretization schemes and simulation results are presented for the time-delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models show how the behavior of the recently developed stochastic model of Hazra et al. is altered.

  7. Stochastic foundations of undulatory transport phenomena: generalized Poisson-Kac processes—part III extensions and applications to kinetic theory and transport

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro

    2017-08-01

    This third part extends the theory of Generalized Poisson-Kac (GPK) processes to nonlinear stochastic models and to a continuum of states. Nonlinearity is treated in two ways: (i) as a dependence of the parameters (intensity of the stochastic velocity, transition rates) of the stochastic perturbation on the state variable, similarly to the case of nonlinear Langevin equations, and (ii) as the dependence of the stochastic microdynamic equations of motion on the statistical description of the process itself (nonlinear Fokker-Planck-Kac models). Several numerical and physical examples illustrate the theory. Gathering nonlinearity and a continuum of states, GPK theory provides a stochastic derivation of the nonlinear Boltzmann equation, furnishing a positive answer to Kac's program in kinetic theory. The transition from stochastic microdynamics to transport theory within the framework of the GPK paradigm is also addressed.

  8. Spectral estimation for characterization of acoustic aberration.

    PubMed

    Varslot, Trond; Angelsen, Bjørn; Waag, Robert C

    2004-07-01

    Spectral estimation based on acoustic backscatter from a motionless stochastic medium is described for characterization of aberration in ultrasonic imaging. The underlying assumptions for the estimation are: the correlation length of the medium is short compared to the length of the transmitted acoustic pulse, an isoplanatic region of sufficient size exists around the focal point, and the backscatter can be modeled as an ergodic stochastic process. The motivation for this work is ultrasonic imaging with aberration correction. Measurements were performed using a two-dimensional array system with 80 x 80 transducer elements and an element pitch of 0.6 mm. The f-number for the measurements was 1.2 and the center frequency was 3.0 MHz with a 53% bandwidth. The relative phase of aberration was extracted from estimated cross spectra using a robust least-mean-square-error method based on an orthogonal expansion of the phase differences of neighboring waveforms as a function of frequency. Estimates of cross-spectrum phase from measurements of random scattering through a tissue-mimicking aberrator have confidence bands approximately ±5° wide. Both phase and magnitude are in good agreement with a reference characterization obtained from a point scatterer.

  9. Quantum principle of sensing gravitational waves: From the zero-point fluctuations to the cosmological stochastic background of spacetime

    NASA Astrophysics Data System (ADS)

    Quiñones, Diego A.; Oniga, Teodora; Varcoe, Benjamin T. H.; Wang, Charles H.-T.

    2017-08-01

    We carry out a theoretical investigation on the collective dynamics of an ensemble of correlated atoms, subject to both vacuum fluctuations of spacetime and stochastic gravitational waves. A general approach is taken with the derivation of a quantum master equation capable of describing arbitrary confined nonrelativistic matter systems in an open quantum gravitational environment. It enables us to relate the spectral function for gravitational waves and the distribution function for quantum gravitational fluctuations and to indeed introduce a new spectral function for the zero-point fluctuations of spacetime. The formulation is applied to two-level identical bosonic atoms in an off-resonant high-Q cavity that effectively inhibits undesirable electromagnetic delays, leading to a gravitational transition mechanism through certain quadrupole moment operators. The overall relaxation rate before reaching equilibrium is found to generally scale collectively with the number N of atoms. However, we are also able to identify certain states whose decay and excitation rates under stochastic gravitational waves and vacuum spacetime fluctuations are amplified more significantly, by a factor of N². Using such favorable states as a means of measuring both conventional stochastic gravitational waves and novel zero-point spacetime fluctuations, we determine the theoretical lower bounds for the respective spectral functions. Finally, we discuss the implications of our findings on future observations of gravitational waves of a wider spectral window than currently accessible. Especially, the possible sensing of the zero-point fluctuations of spacetime could provide an opportunity to generate initial evidence and further guidance of quantum gravity.

  10. Stochastic Representations of Seismic Anisotropy: Verification of Effective Media Models and Application to the Continental Crust

    NASA Astrophysics Data System (ADS)

    Song, X.; Jordan, T. H.

    2017-12-01

    The seismic anisotropy of the continental crust is dominated by two mechanisms: the local (intrinsic) anisotropy of crustal rocks caused by the lattice-preferred orientation of their constituent minerals, and the geometric (extrinsic) anisotropy caused by the alignment and layering of elastic heterogeneities by sedimentation and deformation. To assess the relative importance of these mechanisms, we have applied Jordan's (GJI, 2015) self-consistent, second-order theory to compute the effective elastic parameters of stochastic media with hexagonal local anisotropy and small-scale 3D heterogeneities that have transversely isotropic (TI) statistics. The theory pertains to stochastic TI media in which the eighth-order covariance tensor of the elastic moduli can be separated into a one-point variance tensor that describes the local anisotropy in terms of an anisotropy orientation ratio (ξ from 0 to ∞), and a two-point correlation function that describes the geometric anisotropy in terms of a heterogeneity aspect ratio (η from 0 to ∞). If there is no local anisotropy, then, in the limiting case of a horizontal stochastic laminate (η→∞), the effective-medium equations reduce to the second-order equations derived by Backus (1962) for a stochastically layered medium. This generalization of the Backus equations to 3D stochastic media, as well as the introduction of local, stochastically rotated anisotropy, provides a powerful theory for interpreting the anisotropic signatures of sedimentation and deformation in continental environments; in particular, the parameterizations that we propose are suitable for tomographic inversions. We have verified this theory through a series of high-resolution numerical experiments using both isotropic and anisotropic wave-propagation codes.

  11. A high speed implementation of the random decrement algorithm

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.

    1982-01-01

    The algorithm is useful for measuring net system damping levels in stochastic processes and for developing equivalent linearized system response models. It works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high-speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution of each data point to the random decrement signature is calculated only once and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold-crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold-crossing frequency of 5000 Hz can be processed and a stably averaged signature presented in real time.
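
    The summing-and-averaging loop described above is easy to state in code. Below is a hedged software sketch (not the hardware implementation): fixed-length subrecords are extracted at upward threshold crossings of a synthetic noise-driven oscillator and averaged into a signature. The AR(2) test signal and all parameters are illustrative, not from the report.

```python
import math
import random

def random_decrement(signal, threshold, seg_len):
    """Average all length-seg_len subrecords that begin where the signal
    crosses the threshold upward; the average is the randomdec signature."""
    sig = [0.0] * seg_len
    count = 0
    for i in range(1, len(signal) - seg_len):
        if signal[i - 1] < threshold <= signal[i]:
            for j in range(seg_len):
                sig[j] += signal[i + j]
            count += 1
    if count:
        sig = [s / count for s in sig]
    return sig, count

# Synthetic test signal: a lightly damped oscillator driven by white noise
rng = random.Random(3)
r, w = 0.98, 0.25
a1, a2 = 2 * r * math.cos(w), -r * r
x = [0.0, 0.0]
for _ in range(20000):
    x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0, 0.3))
rms = (sum(v * v for v in x) / len(x)) ** 0.5
sig, count = random_decrement(x, rms, 200)
```

    The random forcing averages out over the triggered subrecords, so the signature approximates the free decay of the system from the threshold level, which is why the net damping can be read off it.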

  12. Level-crossing statistics of the horizontal wind speed in the planetary surface boundary layer

    NASA Astrophysics Data System (ADS)

    Edwards, Paul J.; Hurst, Robert B.

    2001-09-01

    The probability density of the times for which the horizontal wind remains above or below a given threshold speed is of some interest in the fields of renewable energy generation and pollutant dispersal. However, there appear to be no analytic or conceptual models which account for the observed power-law form of the distribution of these episode lengths over more than three decades, from a few tens of seconds to a day or more. We reanalyze high resolution wind data and demonstrate the fractal character of the point process generated by the wind speed level crossings. We simulate the fluctuating wind speed by a Markov process which approximates the characteristics of the real (non-Markovian) wind and successfully generates a power-law distribution of episode lengths. However, fundamental questions concerning the physical basis for this behavior and the connection between the properties of a continuous-time stochastic process and the fractal statistics of the point process generated by its level crossings remain unanswered.
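
    The episode lengths studied above (durations for which the wind stays above or below a threshold) can be extracted from any sampled series with a short scan; the AR(1) surrogate below stands in for real wind data and is purely illustrative.

```python
import random

def episode_lengths(series, threshold):
    """Lengths of maximal runs during which the series stays above or below
    the threshold; the crossing instants form a point process in time."""
    above, below = [], []
    state = series[0] > threshold
    run = 1
    for v in series[1:]:
        s = v > threshold
        if s == state:
            run += 1
        else:
            (above if state else below).append(run)
            state, run = s, 1
    return above, below

# AR(1) surrogate for a persistent wind-speed fluctuation record
rng = random.Random(11)
x, series = 0.0, []
for _ in range(10000):
    x = 0.95 * x + rng.gauss(0, 1)
    series.append(x)
above, below = episode_lengths(series, 0.0)
```

    Histogramming `above` and `below` on log-log axes is then the direct way to test for the power-law form discussed in the abstract.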

  13. Comparison of contact conditions obtained by direct simulation with statistical analysis for normally distributed isotropic surfaces

    NASA Astrophysics Data System (ADS)

    Uchidate, M.

    2018-09-01

    In this study, with the aim of establishing systematic knowledge of the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (A_r/A_a) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions in terms of the power index and correlation length were used for investigation. The non-causal 2D auto-regressive model, which can generate datasets with specified parameters, was employed in this research. Three summit extraction methods, Nayak's theory, 8-point analysis and watershed segmentation, were examined. With regard to the stochastic model, Bhushan's model and the BGT (Bush-Gibson-Thomas) model were applied. The values of A_r/A_a from the stochastic models tended to be smaller than those from BEM. The discrepancy between Bhushan's model with the 8-point analysis and BEM was slightly smaller than with Nayak's theory. The results with watershed segmentation were similar to those with the 8-point analysis. The impact of Wolf pruning on the discrepancy between the stochastic analysis and BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with BEM was obtained when Nayak's bandwidth parameter was large.

  14. Turbulent fluctuations and the excitation of Z Cam outbursts

    NASA Astrophysics Data System (ADS)

    Ross, Johnathan; Latter, Henrik N.

    2017-09-01

    Z Cam variables are a subclass of dwarf nova that lie near a global bifurcation between outbursting ('limit cycle') and non-outbursting ('standstill') states. It is believed that variations in the secondary star's mass-injection rate instigate transitions between the two regimes. In this paper, we explore an alternative trigger for these transitions: stochastic fluctuations in the disc's turbulent viscosity. We employ simple one-zone and global viscous models which, though inappropriate for detailed matching to observed light curves, clearly indicate that turbulent disc fluctuations induce outbursts when the system is sufficiently close to the global bifurcation point. While the models easily produce the observed 'outburst/dip' pairs exhibited by Z Cam and Nova-like variables, they struggle to generate long trains of outbursts. We conclude that mass transfer variability is the dominant physical process determining the overall Z Cam standstill/outburst pattern, but that viscous stochasticity provides an additional ingredient explaining some of the secondary features observed.

  15. Single particle momentum and angular distributions in hadron-hadron collisions at ultrahigh energies

    NASA Technical Reports Server (NTRS)

    Chou, T. T.; Chen, N. Y.

    1985-01-01

    The forward-backward charged multiplicity distribution P(n_F, n_B) of events in the 540 GeV antiproton-proton collider has been extensively studied by the UA5 Collaboration. It was pointed out that the distribution with respect to n = n_F + n_B satisfies approximate KNO scaling and that with respect to Z = n_F - n_B it is binomial. The geometrical model of hadron-hadron collision interprets the large multiplicity fluctuation as due to the widely different nature of collisions at different impact parameters b. For a single impact parameter b, the collision in the geometrical model should exhibit stochastic behavior. This separation of the stochastic and nonstochastic (KNO) aspects of multiparticle production processes gives conceptually a lucid and attractive picture of such collisions, leading to the concept of the partition temperature T_p and the single-particle momentum spectrum to be discussed in detail.

  16. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelß, Patrick, E-mail: p.gelss@fu-berlin.de; Matera, Sebastian, E-mail: matera@math.fu-berlin.de; Schütte, Christof, E-mail: schuette@mi.fu-berlin.de

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first-principles based, reduced model for the CO oxidation on the RuO₂(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.
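
    To make the "direct solution of the master equation" concrete at toy scale, the sketch below builds the full generator for a 3-site adsorption/desorption lattice (2^3 = 8 configurations) and relaxes dp/dt = Q^T p to its stationary state by explicit Euler steps. The rates, the lattice, and the absence of reaction and diffusion events are simplifying assumptions; the paper's tensor-train format is precisely what replaces this brute-force state enumeration at realistic system sizes.

```python
import itertools

ka, kd = 2.0, 1.0   # adsorption / desorption rates (toy values)
N = 3               # 3 lattice sites -> 2**3 = 8 configurations

states = list(itertools.product([0, 1], repeat=N))
index = {s: i for i, s in enumerate(states)}
M = len(states)

# Build the generator: Q[i][j] is the transition rate from state i to j
Q = [[0.0] * M for _ in range(M)]
for s in states:
    i = index[s]
    for site in range(N):
        t = list(s)
        t[site] = 1 - t[site]           # flip occupation of one site
        j = index[tuple(t)]
        rate = ka if s[site] == 0 else kd
        Q[i][j] += rate
        Q[i][i] -= rate

# Relax dp/dt = Q^T p to stationarity with explicit Euler steps
p = [1.0] + [0.0] * (M - 1)             # start from the empty lattice
dt = 0.01
for _ in range(5000):
    p = [p[i] + dt * sum(Q[j][i] * p[j] for j in range(M)) for i in range(M)]

coverage = ka / (ka + kd)               # exact single-site occupation here
```

    For this separable toy model the stationary distribution factorizes over sites, which gives an analytic check; a kinetic Monte Carlo run would instead estimate the same probabilities from trajectory sampling.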

  17. Random Interchange of Magnetic Connectivity

    NASA Astrophysics Data System (ADS)

    Matthaeus, W. H.; Ruffolo, D. J.; Servidio, S.; Wan, M.; Rappazzo, A. F.

    2015-12-01

    Magnetic connectivity, the connection between two points along a magnetic field line, has a stochastic character associated with field lines randomly walking in space due to magnetic fluctuations, but connectivity can also change in time due to dynamical activity [1]. For fluctuations transverse to a strong mean field, this connectivity change can be caused by stochastic interchange due to component reconnection. The process may be understood approximately by formulating a diffusion-like Fokker-Planck coefficient [2] that is asymptotically related to standard field line random walk. Quantitative estimates are provided for transverse magnetic field models and anisotropic models such as reduced magnetohydrodynamics. In heliospheric applications, these estimates may be useful for understanding mixing between open and closed field line regions near coronal hole boundaries, and large latitude excursions of connectivity associated with turbulence. [1] A. F. Rappazzo, W. H. Matthaeus, D. Ruffolo, S. Servidio & M. Velli, ApJL, 758, L14 (2012) [2] D. Ruffolo & W. Matthaeus, ApJ, 806, 233 (2015)

  18. Stochastic transport in the presence of spatial disorder: Fluctuation-induced corrections to homogenization

    NASA Astrophysics Data System (ADS)

    Russell, Matthew J.; Jensen, Oliver E.; Galla, Tobias

    2016-10-01

    Motivated by uncertainty quantification in natural transport systems, we investigate an individual-based transport process involving particles undergoing a random walk along a line of point sinks whose strengths are themselves independent random variables. We assume particles are removed from the system via first-order kinetics. We analyze the system using a hierarchy of approaches when the sinks are sparsely distributed, including a stochastic homogenization approximation that yields explicit predictions for the extrinsic disorder in the stationary state due to sink strength fluctuations. The extrinsic noise induces long-range spatial correlations in the particle concentration, unlike fluctuations due to the intrinsic noise alone. Additionally, the mean concentration profile, averaged over both intrinsic and extrinsic noise, is elevated compared with the corresponding profile from a uniform sink distribution, showing that the classical homogenization approximation can be a biased estimator of the true mean.

  19. Analytically tractable model for community ecology with many species

    NASA Astrophysics Data System (ADS)

    Dickens, Benjamin; Fisher, Charles K.; Mehta, Pankaj

    2016-08-01

    A fundamental problem in community ecology is understanding how ecological processes such as selection, drift, and immigration give rise to observed patterns in species composition and diversity. Here, we analyze a recently introduced, analytically tractable, presence-absence (PA) model for community assembly, and we use it to ask how ecological traits such as the strength of competition, the amount of diversity, and demographic and environmental stochasticity affect species composition in a community. In the PA model, species are treated as stochastic binary variables that can either be present or absent in a community: species can immigrate into the community from a regional species pool and can go extinct due to competition and stochasticity. Building upon previous work, we show that, despite its simplicity, the PA model reproduces the qualitative features of more complicated models of community assembly. In agreement with recent studies of large, competitive Lotka-Volterra systems, the PA model exhibits distinct ecological behaviors organized around a special ("critical") point corresponding to Hubbell's neutral theory of biodiversity. These results suggest that the concepts of ecological "phases" and phase diagrams can provide a powerful framework for thinking about community ecology, and that the PA model captures the essential ecological dynamics of community assembly.
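
    The presence-absence dynamics described above (binary species state, immigration from a regional pool, extinction enhanced by competition) can be caricatured with a discrete-time simulation. The functional form of the extinction rate, all rate values, and the mean-field-style competition term are assumptions made for this sketch, not the PA model's exact specification.

```python
import random

def simulate_pa(n_species, lam, mu, c, dt, n_steps, rng):
    """Presence-absence community: absent species immigrate at rate lam;
    present species go extinct at rate mu + c * (current richness), so
    c > 0 adds density-dependent competition. Returns the time-averaged
    species richness over the second half of the run."""
    present = [False] * n_species
    total, samples = 0, 0
    for step in range(n_steps):
        k = sum(present)
        for i in range(n_species):
            if present[i]:
                if rng.random() < (mu + c * k) * dt:
                    present[i] = False
            elif rng.random() < lam * dt:
                present[i] = True
        if step >= n_steps // 2:
            total += sum(present)
            samples += 1
    return total / samples

rng = random.Random(5)
rich_neutral = simulate_pa(20, 1.0, 1.0, 0.0, 0.05, 20000, rng)  # no competition
rich_compet = simulate_pa(20, 1.0, 1.0, 0.2, 0.05, 20000, rng)   # with competition
```

    Without competition each species is an independent two-state chain with occupancy lam/(lam+mu), so 20 species give a mean richness near 10; switching competition on depresses the equilibrium richness, the qualitative selection effect the abstract discusses.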

  20. Modeling Intraindividual Dynamics Using Stochastic Differential Equations: Age Differences in Affect Regulation.

    PubMed

    Wood, Julie; Oravecz, Zita; Vogel, Nina; Benson, Lizbeth; Chow, Sy-Miin; Cole, Pamela; Conroy, David E; Pincus, Aaron L; Ram, Nilam

    2017-12-15

    Life-span theories of aging suggest improvements and decrements in individuals' ability to regulate affect. Dynamic process models, with intensive longitudinal data, provide new opportunities to articulate specific theories about individual differences in intraindividual dynamics. This paper illustrates a method for operationalizing affect dynamics using a multilevel stochastic differential equation (SDE) model, and examines how those dynamics differ with age and trait-level tendencies to deploy emotion regulation strategies (reappraisal and suppression). Univariate multilevel SDE models, estimated in a Bayesian framework, were fit to 21 days of ecological momentary assessments of affect valence and arousal (average 6.93/day, SD = 1.89) obtained from 150 adults (age 18-89 years)-specifically capturing temporal dynamics of individuals' core affect in terms of attractor point, reactivity to biopsychosocial (BPS) inputs, and attractor strength. Older age was associated with higher arousal attractor point and less BPS-related reactivity. Greater use of reappraisal was associated with lower valence attractor point. Intraindividual variability in regulation strategy use was associated with greater BPS-related reactivity and attractor strength, but in different ways for valence and arousal. The results highlight the utility of SDE models for studying affect dynamics and informing theoretical predictions about how intraindividual dynamics change over the life course. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
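
    The "attractor point / attractor strength / reactivity" language above maps naturally onto an Ornstein-Uhlenbeck SDE, the standard single-variable special case of such models (whether this matches the paper's exact multilevel specification is an assumption). An Euler-Maruyama sketch with illustrative parameter values:

```python
import random

def simulate_ou(mu, beta, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama for dx = beta*(mu - x) dt + sigma dW.
    mu: attractor point, beta: attractor strength, sigma: reactivity to inputs."""
    x, path = x0, []
    sqdt = dt ** 0.5
    for _ in range(n_steps):
        x += beta * (mu - x) * dt + sigma * sqdt * rng.gauss(0, 1)
        path.append(x)
    return path

rng = random.Random(42)
path = simulate_ou(mu=3.0, beta=1.0, sigma=0.8, x0=0.0,
                   dt=0.01, n_steps=200000, rng=rng)
tail = path[20000:]                       # discard the transient
mean = sum(tail) / len(tail)
var = sum((v - mean) ** 2 for v in tail) / len(tail)
# Stationary theory: mean -> mu and variance -> sigma**2 / (2 * beta)
```

    The stationary mean recovers the attractor point and the stationary variance sigma²/(2·beta) shows how reactivity and attractor strength trade off, which is what the multilevel SDE fits disentangle per person.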

  1. Adiabatic reduction of a model of stochastic gene expression with jump Markov process.

    PubMed

    Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C

    2014-04-01

    This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomenon called bursting in molecular biology) and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein) we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomena can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem not to have been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.

  2. Stochastic switching in biology: from genotype to phenotype

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.

    2017-03-01

    There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the number of molecules (genes, mRNA, proteins) involved in gene expression is often of order 1-1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker-Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel-Kramers-Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory.
The major aim of this review is to provide a self-contained survey of these mathematical methods, mainly within the context of biological switching processes at both the genotypic and phenotypic levels. However, applications to other examples of biological switching are also discussed, including stochastic ion channels, diffusion in randomly switching environments, bacterial chemotaxis, and stochastic neural networks.

  3. Modeling financial markets by the multiplicative sequence of trades

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-12-01

    We introduce a stochastic multiplicative point process modeling the trading activity of financial markets. Such a model system exhibits a power-law spectral density S(f) ∝ 1/f^β, scaling as a power of frequency, for various values of β between 0.5 and 2. Furthermore, we analyze the relation between the power-law autocorrelations and the origin of the power-law probability distribution of the trading activity. The model reproduces the spectral properties of trading activity and explains the mechanism of power-law distribution in real markets.
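
    As a rough illustration of how a multiplicative stochastic recurrence for inter-event times yields a power-law-like spectrum, the sketch below simulates a bounded multiplicative random walk of inter-event times and estimates the power spectral density of the resulting event rate. The noise amplitude sigma and the bounds are illustrative, not the parameterization used by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.05                        # relative step of the inter-event time
tau_min, tau_max = 1e-3, 1.0        # reflecting bounds on inter-event times

n_events = 50_000
tau = np.empty(n_events)
tau[0] = 0.1
for k in range(1, n_events):
    # multiplicative random walk of the inter-event time
    t = tau[k - 1] * (1.0 + sigma * rng.standard_normal())
    # reflect back into [tau_min, tau_max]
    if t < tau_min:
        t = 2 * tau_min - t
    elif t > tau_max:
        t = 2 * tau_max - t
    tau[k] = t

events = np.cumsum(tau)             # event occurrence times
# count events in unit-time bins to obtain an activity signal
signal, _ = np.histogram(events, bins=np.arange(0.0, events[-1], 1.0))
signal = signal - signal.mean()
psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0)
```

    Plotting psd against freqs on log-log axes shows the approximately power-law decay that motivates this class of models.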

  4. SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry

    NASA Astrophysics Data System (ADS)

    Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.

    2007-03-01

    A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach also is advantageous for generating initial structures for global optimizations via genetic algorithm and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.

  5. Time-dependent probability density functions and information geometry in stochastic logistic and Gompertz models

    NASA Astrophysics Data System (ADS)

    Tenkès, Lucille-Marie; Hollerbach, Rainer; Kim, Eun-jin

    2017-12-01

    A probabilistic description is essential for understanding growth processes in non-stationary states. In this paper, we compute time-dependent probability density functions (PDFs) in order to investigate stochastic logistic and Gompertz models, which are two of the most popular growth models. We consider different types of short-correlated multiplicative and additive noise sources and compare the time-dependent PDFs in the two models, elucidating the effects of the additive and multiplicative noises on the form of PDFs. We demonstrate an interesting transition from a unimodal to a bimodal PDF as the multiplicative noise increases for a fixed value of the additive noise. A much weaker (leaky) attractor in the Gompertz model leads to a significant (singular) growth of the population of a very small size. We point out the limitation of using stationary PDFs, mean value and variance in understanding statistical properties of the growth in non-stationary states, highlighting the importance of time-dependent PDFs. We further compare these two models from the perspective of information change that occurs during the growth process. Specifically, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory quantifies the total number of different states that the system undergoes in time, and is called the information length. We show that the time-evolution of the two models become more similar when measured in units of the information length and point out the merit of using the information length in unifying and understanding the dynamic evolution of different growth processes.
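
    A time-dependent PDF of the kind studied above can be estimated by evolving an ensemble of trajectories. The sketch below applies the Euler-Maruyama scheme to a stochastic logistic equation with both multiplicative (d_m) and additive (d_a) noise and records histogram-based PDFs at several times; all numerical values are illustrative, not the ones used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d_m, d_a = 0.3, 0.05                    # multiplicative / additive noise amplitudes
dt, n_steps, n_paths = 1e-3, 5000, 20_000

x = np.full(n_paths, 0.1)               # all trajectories start at x = 0.1
snapshots = {}
for step in range(1, n_steps + 1):
    dw1 = rng.standard_normal(n_paths) * np.sqrt(dt)
    dw2 = rng.standard_normal(n_paths) * np.sqrt(dt)
    # dx = x(1 - x) dt + d_m x dW1 + d_a dW2  (Euler-Maruyama step)
    x = x + x * (1.0 - x) * dt + d_m * x * dw1 + d_a * dw2
    x = np.clip(x, 0.0, None)           # keep the population non-negative
    if step in (500, 2500, 5000):
        # time-dependent PDF estimated as a normalized histogram
        pdf, edges = np.histogram(x, bins=60, range=(0.0, 2.0), density=True)
        snapshots[step] = pdf
```

    Comparing the snapshots shows the ensemble relaxing from the initial condition toward the attractor, which is exactly the non-stationary information a stationary PDF discards.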

  6. Biological signatures of dynamic river networks from a coupled landscape evolution and neutral community model

    NASA Astrophysics Data System (ADS)

    Stokes, M.; Perron, J. T.

    2017-12-01

    Freshwater systems host exceptionally species-rich communities whose spatial structure is dictated by the topology of the river networks they inhabit. Over geologic time, river networks are dynamic; drainage basins shrink and grow, and river capture establishes new connections between previously separated regions. It has been hypothesized that these changes in river network structure influence the evolution of life by exchanging and isolating species, perhaps boosting biodiversity in the process. However, no general model exists to predict the evolutionary consequences of landscape change. We couple a neutral community model of freshwater organisms to a landscape evolution model in which the river network undergoes drainage divide migration and repeated river capture. Neutral community models are macro-ecological models that include stochastic speciation and dispersal to produce realistic patterns of biodiversity. We explore the consequences of three modes of speciation - point mutation, time-protracted, and vicariant (geographic) speciation - by tracking patterns of diversity in time and comparing the final result to an equilibrium solution of the neutral model on the final landscape. Under point mutation, a simple model of stochastic and instantaneous speciation, the results are identical to the equilibrium solution and indicate the dominance of the species-area relationship in forming patterns of diversity. The number of species in a basin is proportional to its area, and regional species richness reaches its maximum when drainage area is evenly distributed among sub-basins. Time-protracted speciation is also modeled as a stochastic process, but in order to produce more realistic rates of diversification, speciation is not assumed to be instantaneous. Rather, each new species must persist for a certain amount of time before it is considered to be established. 
When vicariance (geographic speciation) is included, there is a transient signature of increased regional diversity after river capture. The results indicate that the mode of speciation and the rate of speciation relative to the rate of divide migration determine the evolutionary signature of river capture.

  7. [Gene method for inconsistent hydrological frequency calculation. I: Inheritance, variability and evolution principles of hydrological genes].

    PubMed

    Xie, Ping; Wu, Zi Yi; Zhao, Jiang Yan; Sang, Yan Fang; Chen, Jie

    2018-04-01

    A stochastic hydrological process is influenced by both stochastic and deterministic factors. A hydrological time series contains not only pure random components reflecting its inheritance characteristics, but also deterministic components reflecting variability characteristics, such as jump, trend, period, and stochastic dependence. As a result, the stochastic hydrological process presents complicated evolution phenomena and rules. To better understand these complicated phenomena and rules, this study described the inheritance and variability characteristics of an inconsistent hydrological series from two aspects: stochastic process simulation and time series analysis. In addition, several frequency analysis approaches for inconsistent time series were compared to reveal the main problems in inconsistency study. Then, we proposed a new concept of hydrological genes, originated from biological genes, to describe the inconsistent hydrological processes. The hydrological genes were constructed using moment methods, such as general moments, weight function moments, probability weighted moments and L-moments. Meanwhile, the five components, including jump, trend, periodic, dependence and pure random components, of a stochastic hydrological process were defined as five hydrological bases. With this method, the inheritance and variability of inconsistent hydrological time series were synthetically considered and the inheritance, variability and evolution principles were fully described. Our study would contribute to revealing the inheritance, variability and evolution principles in probability distribution of hydrological elements.
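
    The probability weighted moments and L-moments mentioned above can be computed directly from a sorted sample. The helper below follows the standard unbiased construction (it is not code from the paper); for an evenly spaced sample on [0, 1] the L-location is 0.5, the L-scale approaches 1/6, and the third L-moment is near zero.

```python
import numpy as np

def l_moments(sample):
    """First three sample L-moments via probability weighted moments b_r."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    l1 = b0                       # L-location (mean)
    l2 = 2 * b1 - b0              # L-scale
    l3 = 6 * b2 - 6 * b1 + b0    # third L-moment (symmetry measure)
    return l1, l2, l3

# uniform-like sample: a regular grid on [0, 1]
l1, l2, l3 = l_moments(np.linspace(0.0, 1.0, 1001))
```

    Replacing the grid with an observed flow series gives the moment-based "gene" summaries the study builds on.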

  8. Efficiency of Finnish General Upper Secondary Schools: An Application of Stochastic Frontier Analysis with Panel Data

    ERIC Educational Resources Information Center

    Kirjavainen, Tanja

    2012-01-01

    Different stochastic frontier models for panel data are used to estimate education production functions and the efficiency of Finnish general upper secondary schools. Grades in the matriculation examination are used as an output and explained with the comprehensive school grade point average, parental socio-economic background, school resources,…

  9. Improved ensemble-mean forecasting of ENSO events by a zero-mean stochastic error model of an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zheng, Fei; Zhu, Jiang

    2017-04-01

    How to design a reliable ensemble prediction strategy that accounts for the major uncertainties of a forecasting system is a crucial issue for performing an ensemble forecast. In this study, a new stochastic perturbation technique is developed to improve the prediction skills of El Niño-Southern Oscillation (ENSO) using an intermediate coupled model. We first estimate and analyze the model uncertainties from the ensemble Kalman filter analysis results through assimilating the observed sea surface temperatures. Then, based on the pre-analyzed properties of model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by the physical processes missing from the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step by the developed stochastic model-error model during the 12-month forecasting process, and add the zero-mean perturbations into the physical fields to mimic the presence of missing processes and high-frequency stochastic noises. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differentiated by whether they consider the stochastic perturbations. The comparison results show that the stochastic perturbations have a significant effect on improving the ensemble-mean prediction skills during the entire 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble-mean from a series of zero-mean perturbations, which reduces the forecasting biases and then corrects the forecast through this nonlinear heating mechanism.
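
    The nonlinear rectification of zero-mean perturbations invoked above can be seen in a toy calculation (unrelated to the coupled model itself): a quadratic term maps a zero-mean ensemble of perturbations into a positive shift of the ensemble mean, since E[(x + eps)^2] = x^2 + Var(eps) when E[eps] = 0.

```python
import numpy as np

rng = np.random.default_rng(3)
x0 = 1.0
eps = rng.normal(0.0, 0.5, size=100_000)   # ensemble of perturbations
eps -= eps.mean()                          # enforce an exactly zero mean

linear_mean = (x0 + eps).mean()            # linear term: mean is unchanged
nonlinear_mean = ((x0 + eps) ** 2).mean()  # quadratic term: mean shifts up by Var(eps)
```

    This is the mechanism by which zero-mean stochastic forcing can systematically correct an ensemble-mean forecast through nonlinear model terms.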

  10. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313

  11. A stochastic hybrid systems based framework for modeling dependent failure processes.

    PubMed

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.
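
    The two reliability estimates named above can be illustrated with a toy performance margin (all numbers are made up for illustration): the FOSM approximation maps the mean and standard deviation of a margin into a reliability index evaluated under the standard normal CDF, while the Markov inequality gives a crude distribution-free lower bound.

```python
import math

def fosm_reliability(mu_g, sigma_g):
    """FOSM estimate: reliability index beta = mu/sigma, then Phi(beta)."""
    beta = mu_g / sigma_g
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

def markov_lower_bound(mean_damage, threshold):
    """For non-negative damage X: P(X < t) >= 1 - E[X]/t (Markov inequality)."""
    return max(0.0, 1.0 - mean_damage / threshold)

r = fosm_reliability(3.0, 1.0)       # beta = 3
lb = markov_lower_bound(2.0, 10.0)   # survival probability is at least 0.8
```

    In the paper's framework the means and variances fed into these formulas come from the derived conditional-moment equations rather than from made-up numbers.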

  12. Habitat connectivity and in-stream vegetation control temporal variability of benthic invertebrate communities.

    PubMed

    Huttunen, K-L; Mykrä, H; Oksanen, J; Astorga, A; Paavola, R; Muotka, T

    2017-05-03

    One of the key challenges to understanding patterns of β diversity is to disentangle deterministic patterns from stochastic ones. Stochastic processes may mask the influence of deterministic factors on community dynamics, hindering identification of the mechanisms causing variation in community composition. We studied temporal β diversity (among-year dissimilarity) of macroinvertebrate communities in near-pristine boreal streams across 14 years. To assess whether the observed β diversity deviates from that expected by chance, and to identify processes (deterministic vs. stochastic) through which different explanatory factors affect community variability, we used a null model approach. We observed that at the majority of sites temporal β diversity was low indicating high community stability. When stochastic variation was unaccounted for, connectivity was the only variable explaining temporal β diversity, with weakly connected sites exhibiting higher community variability through time. After accounting for stochastic effects, connectivity lost importance, suggesting that it was related to temporal β diversity via random colonization processes. Instead, β diversity was best explained by in-stream vegetation, community variability decreasing with increasing bryophyte cover. These results highlight the potential of stochastic factors to dampen the influence of deterministic processes, affecting our ability to understand and predict changes in biological communities through time.

  13. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
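
    The two point-distribution schemes can be contrasted on a simple two-variable quadrature. The sketch below uses a regular midpoint lattice as a stand-in for the systematic pattern (the true Conroy patterns are closed symmetric point sets, not a plain lattice) and compares both estimates against the exact integral.

```python
import numpy as np

def f(x, y):
    # integrand over the unit square; exact integral is 4 / pi**2
    return np.sin(np.pi * x) * np.sin(np.pi * y)

rng = np.random.default_rng(4)
n = 10_000

# Monte Carlo: sample points scattered randomly over the unit square
xr, yr = rng.random(n), rng.random(n)
mc_estimate = f(xr, yr).mean()

# Systematic: sample points on a regular 100 x 100 midpoint lattice
g = (np.arange(100) + 0.5) / 100
xs, ys = np.meshgrid(g, g)
sys_estimate = f(xs, ys).mean()

exact = 4.0 / np.pi ** 2
```

    For this smooth integrand the systematic pattern is far more accurate at equal point count, which mirrors the comparison drawn in the paper.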

  14. A stochastic-geometric model of soil variation in Pleistocene patterned ground

    NASA Astrophysics Data System (ADS)

    Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc

    2013-04-01

    In this paper we examine the spatial variability of soil in parent material with complex spatial structure which arises from complex non-linear geomorphic processes. We show that this variability can be better-modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model. 
We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.

  15. Stochastic model to forecast ground-level ozone concentration at urban and rural areas.

    PubMed

    Dueñas, C; Fernández, M C; Cañete, S; Carretero, J; Liger, E

    2005-12-01

    Stochastic models that estimate the ground-level ozone concentrations in air at urban and rural sampling points in South-eastern Spain have been developed. Studies of temporal series of data, spectral analyses of temporal series and ARIMA models have been used. The ARIMA model (1,0,0) x (1,0,1)24 satisfactorily predicts hourly ozone concentrations in the urban area. The ARIMA (2,1,1) x (0,1,1)24 has been developed for the rural area. At both sampling points, predictions of hourly ozone concentrations agree reasonably well with measured values. However, the prediction of hourly ozone concentrations at the rural point appears to be better than that at the urban point. The performance of the ARIMA models suggests that this kind of modelling can be suitable for ozone concentration forecasting.
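
    A stripped-down stand-in for the seasonal ARIMA idea (in practice one would fit the models above with a dedicated implementation such as statsmodels' SARIMAX): regress an hourly series on its lag-1 and seasonal lag-24 values by ordinary least squares. The synthetic "ozone-like" series and its coefficients are invented so the fit can be checked against known values.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
y = np.zeros(n)
for t in range(24, n):
    # synthetic hourly series with lag-1 and daily (lag-24) dependence
    y[t] = 1.0 + 0.6 * y[t - 1] + 0.3 * y[t - 24] + 0.1 * rng.standard_normal()

# design matrix: intercept, lag-1 regressor, lag-24 (seasonal) regressor
X = np.column_stack([np.ones(n - 24), y[23:-1], y[:-24]])
coef, *_ = np.linalg.lstsq(X, y[24:], rcond=None)
c_hat, a_hat, b_hat = coef
```

    The recovered coefficients sit close to the generating values, illustrating how the hourly and daily structure that ARIMA models exploit can be estimated from data.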

  16. Gene regulation and noise reduction by coupling of stochastic processes

    NASA Astrophysics Data System (ADS)

    Ramos, Alexandre F.; Hornos, José Eduardo M.; Reinitz, John

    2015-02-01

    Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: There exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction.
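
    The coupling described above can be sketched with a Gillespie-style simulation of a binary gene whose OFF-switching propensity grows with protein number (negative self-regulation). All rate constants below are made up; the sketch shows the mechanism without claiming that these particular rates sit in the sub-Poissonian regime.

```python
import numpy as np

rng = np.random.default_rng(6)
k_on, h = 50.0, 1.0          # OFF->ON rate; ON->OFF rate per protein copy
k_prod, k_deg = 20.0, 1.0    # protein synthesis (ON state) and degradation

t, t_end = 0.0, 500.0
gene_on, n_prot = 1, 0
samples = []
while t < t_end:
    a = [k_on * (1 - gene_on),   # gene switches ON
         h * n_prot * gene_on,   # gene switched OFF, faster at high protein
         k_prod * gene_on,       # protein produced while gene is ON
         k_deg * n_prot]         # protein degraded
    a0 = sum(a)
    t += rng.exponential(1.0 / a0)
    r = rng.random() * a0
    if r < a[0]:
        gene_on = 1
    elif r < a[0] + a[1]:
        gene_on = 0
    elif r < a[0] + a[1] + a[2]:
        n_prot += 1
    else:
        n_prot -= 1
    samples.append(n_prot)

counts = np.array(samples[len(samples) // 5:])   # drop a burn-in stretch
fano = counts.var() / counts.mean()              # Fano factor: 1 for Poisson
```

    A Fano factor below 1 in the fast-switching limit would signal the sub-Poissonian regime the paper characterizes analytically.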

  17. Gene regulation and noise reduction by coupling of stochastic processes

    PubMed Central

    Hornos, José Eduardo M.; Reinitz, John

    2015-01-01

    Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: there exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction. PMID:25768447

  18. Gene regulation and noise reduction by coupling of stochastic processes.

    PubMed

    Ramos, Alexandre F; Hornos, José Eduardo M; Reinitz, John

    2015-02-01

    Here we characterize the low-noise regime of a stochastic model for a negative self-regulating binary gene. The model has two stochastic variables, the protein number and the state of the gene. Each state of the gene behaves as a protein source governed by a Poisson process. The coupling between the two gene states depends on protein number. This fact has a very important implication: There exist protein production regimes characterized by sub-Poissonian noise because of negative covariance between the two stochastic variables of the model. Hence the protein numbers obey a probability distribution that has a peak that is sharper than those of the two coupled Poisson processes that are combined to produce it. Biochemically, the noise reduction in protein number occurs when the switching of the genetic state is more rapid than protein synthesis or degradation. We consider the chemical reaction rates necessary for Poisson and sub-Poisson processes in prokaryotes and eucaryotes. Our results suggest that the coupling of multiple stochastic processes in a negative covariance regime might be a widespread mechanism for noise reduction.

  19. Noise-induced extinction for a ratio-dependent predator-prey model with strong Allee effect in prey

    NASA Astrophysics Data System (ADS)

    Mandal, Partha Sarathi

    2018-04-01

    In this paper, we study a stochastically forced ratio-dependent predator-prey model with strong Allee effect in the prey population. In the deterministic case, we show that the model exhibits a stable interior equilibrium point or limit cycle corresponding to the co-existence of both species. We investigate a probabilistic mechanism of noise-induced extinction in a zone of the stable interior equilibrium point. Computational methods based on the stochastic sensitivity function technique are applied to analyze the dispersion of random states near the stable interior equilibrium point. This method allows us to construct a confidence domain and to estimate the threshold value of the noise intensity for a transition from coexistence to extinction.

  20. Revisiting the diffusion approximation to estimate evolutionary rates of gene family diversification.

    PubMed

    Gjini, Erida; Haydon, Daniel T; David Barry, J; Cobbold, Christina A

    2014-01-21

    Genetic diversity in multigene families is shaped by multiple processes, including gene conversion and point mutation. Because multigene families are involved in crucial traits of organisms, quantifying the rates of their genetic diversification is important. With increasing availability of genomic data, there is a growing need for quantitative approaches that integrate the molecular evolution of gene families with their higher-scale function. In this study, we integrate a stochastic simulation framework with population genetics theory, namely the diffusion approximation, to investigate the dynamics of genetic diversification in a gene family. Duplicated genes can diverge and encode new functions as a result of point mutation, and become more similar through gene conversion. To model the evolution of pairwise identity in a multigene family, we first consider all conversion and mutation events in a discrete manner, keeping track of their details and times of occurrence; second, we consider only the infinitesimal effect of these processes on pairwise identity, accounting for random sampling of genes and positions. The purely stochastic approach is closer to biological reality and is based on many explicit parameters, such as conversion tract length and family size, but is more challenging analytically. The population genetics approach is an approximation accounting implicitly for point mutation and gene conversion, only in terms of per-site average probabilities. Comparison of these two approaches across a range of parameter combinations reveals that they are not entirely equivalent, but that for certain relevant regimes they do match. As an application of this modelling framework, we consider the distribution of nucleotide identity among VSG genes of African trypanosomes, representing the most prominent example of a multigene family mediating parasite antigenic variation and within-host immune evasion.

  1. Site correction of a high-frequency strong-ground-motion simulation based on an empirical transfer function

    NASA Astrophysics Data System (ADS)

    Huang, Jyun-Yan; Wen, Kuo-Liang; Lin, Che-Min; Kuo, Chun-Hsiang; Chen, Chun-Te; Chang, Shuen-Chiang

    2017-05-01

    In this study, an empirical transfer function (ETF), which is the spectrum difference in Fourier amplitude spectra between observed strong ground motion and synthetic motion obtained by a stochastic point-source simulation technique, is constructed for the Taipei Basin, Taiwan. The basis stochastic point-source simulations can be treated as reference rock site conditions in order to consider site effects. The parameters of the stochastic point-source approach related to source and path effects are collected from previous well-verified studies. A database of shallow, small-magnitude earthquakes is selected to construct the ETFs so that the point-source approach for synthetic motions might be more widely applicable. The high-frequency synthetic motion obtained from the ETF procedure is site-corrected in the strong site-response area of the Taipei Basin. The site-response characteristics of the ETF show similar responses as in previous studies, which indicates that the base synthetic model is suitable for the reference rock conditions in the Taipei Basin. The dominant frequency contour corresponds to the shape of the bottom of the geological basement (the top of the Tertiary period), which is the Sungshan formation. Two clear high-amplification areas are identified in the deepest region of the Sungshan formation, as shown by an amplification contour of 0.5 Hz. Meanwhile, a high-amplification area was shifted to the basin's edge, as shown by an amplification contour of 2.0 Hz. Three target earthquakes with different kinds of source conditions, including shallow small-magnitude events, shallow and relatively large-magnitude events, and deep small-magnitude events relative to the ETF database, are tested to verify site correction. The results indicate that ETF-based site correction is effective for shallow earthquakes, even those with higher magnitudes, but is not suitable for deep earthquakes. 
Finally, one of the most significant shallow large-magnitude earthquakes, the 1999 Chi-Chi earthquake in Taiwan, is used as a verification case in this study. A finite-fault stochastic simulation technique is applied, owing to the complexity of the fault rupture process of the Chi-Chi earthquake, and the ETF-based site-correction function is multiplied into the simulation to obtain a precise simulation of high-frequency (up to 10 Hz) strong motions. The high-frequency prediction shows good agreement in both the time and frequency domains, and the prediction level matches that of the site-corrected ground motion prediction equation.
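At its core, the ETF procedure amounts to comparing observed and synthetic Fourier amplitude spectra and then correcting the synthetic spectrum. A minimal sketch of that idea (not the authors' code; the spectral-ratio convention, function names, and the absence of smoothing are assumptions):

```python
import numpy as np

def empirical_transfer_function(observed, synthetic, dt):
    """Ratio of observed to synthetic Fourier amplitude spectra
    on a common frequency axis (a minimal ETF surrogate)."""
    n = min(len(observed), len(synthetic))
    freqs = np.fft.rfftfreq(n, d=dt)
    obs_amp = np.abs(np.fft.rfft(observed[:n]))
    syn_amp = np.abs(np.fft.rfft(synthetic[:n]))
    etf = obs_amp / np.maximum(syn_amp, 1e-12)  # guard against zero bins
    return freqs, etf

def apply_site_correction(synthetic, etf):
    """Scale the synthetic spectrum by the ETF and return to the time domain."""
    spec = np.fft.rfft(synthetic)
    return np.fft.irfft(spec * etf, n=len(synthetic))
```

In practice the ratio would be smoothed and averaged over many small reference events before being applied to a target simulation.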

  2. High-Dimensional Bayesian Geostatistics

    PubMed Central

    Banerjee, Sudipto

    2017-01-01

With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity has ~n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920
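The sparsity behind the NNGP comes from a Vecchia-type factorization: each location's conditional density depends on at most m earlier-ordered neighbors, so the joint density becomes a product of cheap low-dimensional Gaussian conditionals. A rough one-dimensional sketch (the exponential covariance, ordering rule, and all names are illustrative assumptions, not the article's implementation):

```python
import numpy as np

def exp_cov(x, y, sigma2=1.0, phi=1.0):
    """Exponential covariance between 1-D location vectors."""
    return sigma2 * np.exp(-phi * np.abs(x[:, None] - y[None, :]))

def vecchia_loglik(z, locs, m, sigma2=1.0, phi=1.0):
    """NNGP/Vecchia-style log-likelihood: each point conditions on at most
    m nearest *previously ordered* locations, giving O(n m^3) cost."""
    order = np.argsort(locs)
    z, locs = z[order], locs[order]
    ll = 0.0
    for i in range(len(z)):
        prev = np.arange(i)                      # predecessors in the ordering
        if len(prev) > m:                        # keep only m nearest of them
            d = np.abs(locs[prev] - locs[i])
            prev = prev[np.argsort(d)[:m]]
        if len(prev) == 0:
            mu, var = 0.0, sigma2
        else:
            C_pp = exp_cov(locs[prev], locs[prev], sigma2, phi)
            c_pi = exp_cov(locs[prev], locs[i:i + 1], sigma2, phi)[:, 0]
            w = np.linalg.solve(C_pp, c_pi)      # kriging weights
            mu = w @ z[prev]
            var = sigma2 - w @ c_pi
        ll += -0.5 * (np.log(2 * np.pi * var) + (z[i] - mu) ** 2 / var)
    return ll
```

With m = n − 1 the factorization is exact; shrinking m trades a little accuracy for near-linear cost in n.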

  3. Fickian dispersion is anomalous

    DOE PAGES

    Cushman, John H.; O’Malley, Dan

    2015-06-22

The thesis put forward here is that the occurrence of Fickian dispersion in geophysical settings is a rare event and consequently should be labeled as anomalous. What people classically call anomalous is really the norm. In a Lagrangian setting, a process with mean square displacement proportional to time is generally labeled as Fickian dispersion. With a number of counterexamples we show why this definition is fraught with difficulty. In a related discussion, we show that an infinite second moment does not necessarily imply that the process is super-dispersive. By employing a rigorous mathematical definition of Fickian dispersion we illustrate why it is so hard to find a Fickian process. We go on to employ a number of renormalization group approaches to classify non-Fickian dispersive behavior. Scaling laws for the probability density function for a dispersive process, the distribution for the first passage times, the mean first passage time, and the finite-size Lyapunov exponent are presented for fixed points of both deterministic and stochastic renormalization group operators. The fixed points of the renormalization group operators are p-self-similar processes. A generalized renormalization group operator is introduced whose fixed points form a set of generalized self-similar processes. Finally, power-law clocks are introduced to examine multi-scaling behavior. Several examples of these ideas are presented and discussed.

  4. High-Dimensional Bayesian Geostatistics.

    PubMed

    Banerjee, Sudipto

    2017-06-01

With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity has ~n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings.

  5. An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Carter, M. C.; Madison, M. W.

    1973-01-01

The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary methods involved in this analysis are computer simulation and statistical estimation. Computer simulation is used to generate stationary Gaussian stochastic processes with selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given the autocorrelation function together with estimates of the mean and variance of the process, a frequency distribution for the number of overshoots can be estimated.
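A discrete-time analogue of the simulation step above can be sketched as follows (the AR(1) process as the stationary Gaussian series and the upcrossing definition of an overshoot are illustrative choices, not taken from the report):

```python
import numpy as np

def simulate_stationary_gaussian(n, rho, rng):
    """Stationary Gaussian AR(1) series with autocorrelation rho**k,
    zero mean and unit variance (stationary initial draw)."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    innov_sd = np.sqrt(1.0 - rho ** 2)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + innov_sd * rng.standard_normal()
    return x

def count_upcrossings(x, level):
    """Number of overshoots: transitions from below `level` to at/above it."""
    below = x[:-1] < level
    above = x[1:] >= level
    return int(np.sum(below & above))
```

Repeating the count over many independent realizations yields the empirical frequency distribution of overshoots that the abstract analyzes.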

  6. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo

    PubMed Central

    Golightly, Andrew; Wilkinson, Darren J.

    2011-01-01

    Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
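The engine inside particle MCMC is a particle filter that returns an unbiased estimate of the likelihood, which the Metropolis-Hastings step then accepts or rejects. A minimal bootstrap filter can be sanity-checked on a linear-Gaussian toy model, where the Kalman filter gives the exact answer (this sketch is not the paper's code; the one-dimensional model and all names are assumptions):

```python
import numpy as np

def kalman_loglik(y, a, q, r):
    """Exact log-likelihood for x_t = a*x_{t-1} + N(0,q), y_t = x_t + N(0,r),
    with the first latent state drawn from its stationary distribution."""
    mu, P = 0.0, q / (1.0 - a * a)
    ll = 0.0
    for yt in y:
        S = P + r                                   # predictive variance of y_t
        ll += -0.5 * (np.log(2 * np.pi * S) + (yt - mu) ** 2 / S)
        K = P / S                                   # Kalman gain
        mu, P = mu + K * (yt - mu), (1.0 - K) * P   # update
        mu, P = a * mu, a * a * P + q               # predict
    return ll

def particle_loglik(y, a, q, r, n_part, rng):
    """Bootstrap particle filter estimate of the same log-likelihood."""
    x = rng.normal(0.0, np.sqrt(q / (1 - a * a)), n_part)  # stationary init
    ll = 0.0
    for yt in y:
        w = np.exp(-0.5 * (yt - x) ** 2 / r) / np.sqrt(2 * np.pi * r)
        ll += np.log(np.mean(w))                    # unbiased increment
        x = rng.choice(x, size=n_part, p=w / np.sum(w))    # resample
        x = a * x + rng.normal(0.0, np.sqrt(q), n_part)    # propagate
    return ll
```

In a pMMH sampler, `particle_loglik` would replace the intractable likelihood of the stochastic kinetic model inside the acceptance ratio.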

  7. Stochastic Calculus and Differential Equations for Physics and Finance

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2013-02-01

1. Random variables and probability distributions; 2. Martingales, Markov, and nonstationarity; 3. Stochastic calculus; 4. Ito processes and Fokker-Planck equations; 5. Self-similar Ito processes; 6. Fractional Brownian motion; 7. Kolmogorov's PDEs and Chapman-Kolmogorov; 8. Non-Markov Ito processes; 9. Black-Scholes, martingales, and Feynman-Kac; 10. Stochastic calculus with martingales; 11. Statistical physics and finance, a brief history of both; 12. Introduction to new financial economics; 13. Statistical ensembles and time series analysis; 14. Econometrics; 15. Semimartingales; References; Index.

  8. Application of a stochastic inverse to the geophysical inverse problem

    NASA Technical Reports Server (NTRS)

    Jordan, T. H.; Minster, J. B.

    1972-01-01

The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
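The Franklin-type stochastic inverse referenced here has a familiar closed form, m̂ = C_m Gᵀ (G C_m Gᵀ + C_n)⁻¹ d, where C_m is the prior model covariance and C_n the data-noise covariance. A small sketch (matrix names are generic, not taken from the paper):

```python
import numpy as np

def stochastic_inverse(G, d, Cm, Cn):
    """Franklin-type stochastic inverse estimate:
        m_hat = Cm G^T (G Cm G^T + Cn)^(-1) d
    Cm: prior model covariance; Cn: data-noise covariance."""
    S = G @ Cm @ G.T + Cn          # data-space covariance
    return Cm @ G.T @ np.linalg.solve(S, d)
```

In the noise-free limit with an invertible G this reduces to G⁻¹ d; scaling C_n up and down traces out the Backus-Gilbert style tradeoff between resolution and noise sensitivity mentioned in the abstract.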

  9. Effects of stochastic interest rates in decision making under risk: A Markov decision process model for forest management

    Treesearch

    Mo Zhou; Joseph Buongiorno

    2011-01-01

Most economic studies of forest decision making under risk assume a fixed interest rate. This paper investigated some implications of the stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...
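Since the abstract is truncated, the extension it describes can only be illustrated with a deliberately hypothetical toy: augment the stand state with an interest-rate regime that follows its own Markov chain and sets the discount factor, then solve by value iteration. All states, numbers, and names below are invented for illustration, not the paper's model:

```python
import numpy as np

# Hypothetical toy: 2 stand states (0=young, 1=mature) crossed with
# 2 interest regimes (0=low, 1=high). Harvesting pays only when mature
# and resets the stand to young; the regime fixes the discount factor.
P_rate = np.array([[0.9, 0.1],           # interest-regime transitions
                   [0.2, 0.8]])
discount = np.array([0.97, 0.90])        # per-regime discount factor
P_grow = np.array([[0.7, 0.3],           # stand transition if we wait
                   [0.0, 1.0]])
harvest_reward = np.array([0.0, 10.0])

def value_iteration(tol=1e-10):
    """Bellman iteration over the joint (stand, regime) state."""
    V = np.zeros((2, 2))                 # V[stand, regime]
    while True:
        V_new = np.empty_like(V)
        for s in range(2):
            for r in range(2):
                # wait: stand evolves, regime evolves, no reward
                wait = discount[r] * sum(
                    P_grow[s, s2] * (P_rate[r] @ V[s2]) for s2 in range(2))
                # harvest: collect reward, stand resets to young
                harvest = harvest_reward[s] + discount[r] * (P_rate[r] @ V[0])
                V_new[s, r] = max(wait, harvest)
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new
```

The stochastic rate enters only through the regime-dependent discount factor, which is one simple way "variable interest rates" can be folded into an MDP.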

  10. Explanation of the Reaction of Monoclonal Antibodies with Candida Albicans Cell Surface in Terms of Compound Poisson Process

    NASA Astrophysics Data System (ADS)

    Dudek, Mirosław R.; Mleczko, Józef

Surprisingly, still very little is known about the mathematical modeling of peaks in the binding-affinity distribution function. In general, it is believed that the peaks represent antibodies directed towards single epitopes. In this paper, we refer to fluorescence flow cytometry experiments and show that even monoclonal antibodies can display multi-modal histograms of affinity distribution. This result takes place when obstacles appear in the paratope-epitope reaction such that the process of reaching the specific epitope ceases to be a Poisson point process. A typical example is a large area of the cell surface that is unreachable by antibodies, leading to heterogeneity of the cell-surface repletion. In this case the affinity of cells to bind the antibodies should be described by a process more complex than the pure Poisson point process. We suggest using a doubly stochastic Poisson process, where the points are replaced by a binomial point process, resulting in the Neyman distribution. The distribution can have a strongly multimodal character, with the number of modes depending on the concentration of antibodies and epitopes. All this means that there is a possibility to go beyond the simplified theory of one response towards one epitope. As a consequence, our description provides perspectives for describing antigen-antibody reactions, both qualitatively and quantitatively, even in the case when some peaks result from more than one binding mechanism.
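The doubly stochastic construction can be sketched directly (parameter names and values are illustrative): a Poisson number of "clusters" each contributes a Poisson count, giving the Neyman Type A distribution with mean λμ and variance λμ(1 + μ), i.e. strong overdispersion relative to a plain Poisson process.

```python
import numpy as np

def sample_neyman_type_a(lam, mu, size, rng):
    """Neyman Type A samples: N ~ Poisson(lam) clusters, each contributing
    a Poisson(mu) count. Mean = lam*mu, variance = lam*mu*(1 + mu)."""
    n_clusters = rng.poisson(lam, size)
    return np.array([rng.poisson(mu, n).sum() for n in n_clusters])
```

Histograms of such samples show the clumped, potentially multi-peaked shape that the abstract invokes for the binding counts.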

  11. Stochastic E2F activation and reconciliation of phenomenological cell-cycle models.

    PubMed

    Lee, Tae J; Yao, Guang; Bennett, Dorothy C; Nevins, Joseph R; You, Lingchong

    2010-09-21

    The transition of the mammalian cell from quiescence to proliferation is a highly variable process. Over the last four decades, two lines of apparently contradictory, phenomenological models have been proposed to account for such temporal variability. These include various forms of the transition probability (TP) model and the growth control (GC) model, which lack mechanistic details. The GC model was further proposed as an alternative explanation for the concept of the restriction point, which we recently demonstrated as being controlled by a bistable Rb-E2F switch. Here, through a combination of modeling and experiments, we show that these different lines of models in essence reflect different aspects of stochastic dynamics in cell cycle entry. In particular, we show that the variable activation of E2F can be described by stochastic activation of the bistable Rb-E2F switch, which in turn may account for the temporal variability in cell cycle entry. Moreover, we show that temporal dynamics of E2F activation can be recast into the frameworks of both the TP model and the GC model via parameter mapping. This mapping suggests that the two lines of phenomenological models can be reconciled through the stochastic dynamics of the Rb-E2F switch. It also suggests a potential utility of the TP or GC models in defining concise, quantitative phenotypes of cell physiology. This may have implications in classifying cell types or states.

  12. Noise-induced bistability in the quasi-neutral coexistence of viral RNAs under different replication modes.

    PubMed

    Sardanyés, Josep; Arderiu, Andreu; Elena, Santiago F; Alarcón, Tomás

    2018-05-01

    Evolutionary and dynamical investigations into real viral populations indicate that RNA replication can range between the two extremes represented by so-called 'stamping machine replication' (SMR) and 'geometric replication' (GR). The impact of asymmetries in replication for single-stranded (+) sense RNA viruses has been mainly studied with deterministic models. However, viral replication should be better described by including stochasticity, as the cell infection process is typically initiated with a very small number of RNA macromolecules, and thus largely influenced by intrinsic noise. Under appropriate conditions, deterministic theoretical descriptions of viral RNA replication predict a quasi-neutral coexistence scenario, with a line of fixed points involving different strands' equilibrium ratios depending on the initial conditions. Recent research into the quasi-neutral coexistence in two competing populations reveals that stochastic fluctuations fundamentally alter the mean-field scenario, and one of the two species outcompetes the other. In this article, we study this phenomenon for viral RNA replication modes by means of stochastic simulations and a diffusion approximation. Our results reveal that noise has a strong impact on the amplification of viral RNAs, also causing the emergence of noise-induced bistability. We provide analytical criteria for the dominance of (+) sense strands depending on the initial populations on the line of equilibria, which are in agreement with direct stochastic simulation results. The biological implications of this noise-driven mechanism are discussed within the framework of the evolutionary dynamics of RNA viruses with different modes of replication. © 2018 The Author(s).

  13. Real-time forecasting of an epidemic using a discrete time stochastic model: a case study of pandemic influenza (H1N1-2009)

    PubMed Central

    2011-01-01

Background Real-time forecasting of epidemics, especially forecasting based on a likelihood approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. Methods A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. Results The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak, with all the observed data points falling within the uncertainty bounds. Conclusions Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simple model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of disease surveillance. PMID:21324153
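The core of such a forecast can be sketched as a Poisson branching recursion with simulation-based uncertainty bounds (a simplification of the paper's conditional offspring chains; the reproduction number R and all names are assumptions):

```python
import numpy as np

def forecast_epidemic(current_cases, R, weeks, n_sims, rng):
    """Branching-process forecast: next week's cases ~ Poisson(R * cases).
    Returns pointwise 2.5%, 50%, and 97.5% bounds over simulated chains."""
    paths = np.empty((n_sims, weeks), dtype=int)
    cases = np.full(n_sims, current_cases)
    for t in range(weeks):
        cases = rng.poisson(R * cases)   # Poisson offspring per week
        paths[:, t] = cases
    lower, median, upper = np.percentile(paths, [2.5, 50, 97.5], axis=0)
    return lower, median, upper
```

With R estimated from early data, the simulated quantiles play the role of the uncertainty bounds that the observed incidence is checked against.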

  14. Fast stochastic algorithm for simulating evolutionary population dynamics

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
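For reference, the baseline that such fast algorithms accelerate is the direct-method Gillespie SSA; a minimal birth/death/mutation version is below (the two-type structure and rates are chosen for illustration, not taken from the talk):

```python
import numpy as np

def gillespie_bdm(n0, m0, birth, death, mut, t_max, rng):
    """Direct SSA for wild-type n and mutant m:
    n birth (rate birth*n), n death (death*n), mutation n->m (mut*n),
    plus birth/death for m at the same per-capita rates."""
    t, n, m = 0.0, n0, m0
    times, pops = [0.0], [(n0, m0)]
    while t < t_max and n + m > 0:
        rates = np.array([birth * n, death * n, mut * n, birth * m, death * m])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)          # time to next event
        event = rng.choice(5, p=rates / total)     # which event fires
        if event == 0: n += 1
        elif event == 1: n -= 1
        elif event == 2: n, m = n - 1, m + 1
        elif event == 3: m += 1
        else: m -= 1
        times.append(t)
        pops.append((n, m))
    return np.array(times), np.array(pops)
```

The cost per event of this direct method is what grows painful when populations are large and mutation rates tiny, which is exactly the regime the abstract's algorithm targets.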

  15. Finite-Size Scaling Analysis of Binary Stochastic Processes and Universality Classes of Information Cascade Phase Transition

    NASA Astrophysics Data System (ADS)

    Mori, Shintaro; Hisakado, Masato

    2015-05-01

We propose a finite-size scaling analysis method for binary stochastic processes X(t) ∈ {0,1} based on the second-moment correlation length ξ for the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model representing the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among the most recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal scaling functions for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r → ∞, where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law, and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition, with the critical exponents β = 1 and ν∥ = 2.

  16. A stochastic maximum principle for backward control systems with random default time

    NASA Astrophysics Data System (ADS)

    Shen, Yang; Kuen Siu, Tak

    2013-05-01

    This paper establishes a necessary and sufficient stochastic maximum principle for backward systems, where the state processes are governed by jump-diffusion backward stochastic differential equations with random default time. An application of the sufficient stochastic maximum principle to an optimal investment and capital injection problem in the presence of default risk is discussed.

  17. Stochastic associative memory

    NASA Astrophysics Data System (ADS)

    Baumann, Erwin W.; Williams, David L.

    1993-08-01

Artificial neural networks capable of learning and recalling stochastic associations between non-deterministic quantities have received relatively little attention to date. One potential application of such stochastic associative networks is the generation of sensory 'expectations' based on arbitrary subsets of sensor inputs to support anticipatory and investigative behavior in sensor-based robots. Another application of this type of associative memory is the prediction of how a scene will look in one spectral band, including noise, based upon its appearance in several other wavebands. This paper describes a semi-supervised neural network architecture composed of self-organizing maps associated through stochastic inter-layer connections. This 'Stochastic Associative Memory' (SAM) can learn and recall non-deterministic associations between multi-dimensional probability density functions. The stochastic nature of the network also enables it to represent noise distributions that are inherent in any true sensing process. The SAM architecture, training process, and initial application to sensor image prediction are described. Relationships to Fuzzy Associative Memory (FAM) are discussed.

  18. Nonholonomic relativistic diffusion and exact solutions for stochastic Einstein spaces

    NASA Astrophysics Data System (ADS)

    Vacaru, S. I.

    2012-03-01

    We develop an approach to the theory of nonholonomic relativistic stochastic processes in curved spaces. The Itô and Stratonovich calculus are formulated for spaces with conventional horizontal (holonomic) and vertical (nonholonomic) splitting defined by nonlinear connection structures. Geometric models of the relativistic diffusion theory are elaborated for nonholonomic (pseudo) Riemannian manifolds and phase velocity spaces. Applying the anholonomic deformation method, the field equations in Einstein's gravity and various modifications are formally integrated in general forms, with generic off-diagonal metrics depending on some classes of generating and integration functions. Choosing random generating functions we can construct various classes of stochastic Einstein manifolds. We show how stochastic gravitational interactions with mixed holonomic/nonholonomic and random variables can be modelled in explicit form and study their main geometric and stochastic properties. Finally, the conditions when non-random classical gravitational processes transform into stochastic ones and inversely are analyzed.

  19. Fluctuation theorem: A critical review

    NASA Astrophysics Data System (ADS)

    Malek Mansour, M.; Baras, F.

    2017-10-01

Fluctuation theorem for entropy production is revisited in the framework of stochastic processes. The applicability of the fluctuation theorem to physico-chemical systems, and the resulting stochastic thermodynamics, is analyzed. Some unexpected limitations are highlighted in the context of jump Markov processes. We show that these limitations handicap the ability of the resulting stochastic thermodynamics to correctly describe the state of non-equilibrium systems in terms of the thermodynamic properties of the individual processes therein. Finally, we consider the case of diffusion processes and prove that the fluctuation theorem for entropy production becomes irrelevant at the stationary state in the case of one-variable systems.
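The jump-process version of the theorem can be checked exactly on a toy chain: for a Markov chain started in its stationary distribution, the integral fluctuation theorem ⟨e^{-ΔS_tot}⟩ = 1 holds identically, since reversal is a bijection on trajectories. Brute-force enumeration confirms this (the two-state chain below is an arbitrary example, not from the paper):

```python
import numpy as np
from itertools import product

# Arbitrary two-state Markov chain; pi is its stationary distribution.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = np.array([4 / 7, 3 / 7])          # solves pi @ P == pi

def path_prob(path):
    """Probability of a trajectory started from the stationary distribution."""
    pr = pi[path[0]]
    for a, b in zip(path[:-1], path[1:]):
        pr *= P[a, b]
    return pr

def entropy_production(path):
    """Total entropy production dS_tot = ln P(path) - ln P(reversed path)."""
    return np.log(path_prob(path)) - np.log(path_prob(path[::-1]))

def ift_average(L):
    """Exact <exp(-dS_tot)> over all 2**L trajectories of length L."""
    return sum(path_prob(p) * np.exp(-entropy_production(p))
               for p in product([0, 1], repeat=L))
```

The identity ⟨e^{-ΔS_tot}⟩ = 1 is the robust part; the limitations discussed in the abstract concern what the per-process decomposition of ΔS_tot can and cannot say about non-equilibrium states.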

  20. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.

  1. Modelling and simulating decision processes of linked lives: An approach based on concurrent processes and stochastic race.

    PubMed

    Warnke, Tom; Reinhardt, Oliver; Klabunde, Anna; Willekens, Frans; Uhrmacher, Adelinde M

    2017-10-01

    Individuals' decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for Linked Lives (ML3) to describe the diverse decision processes of linked lives succinctly in continuous time. The context of individuals is modelled by networks the individual is part of, such as family ties and other social networks. Central concepts, such as behaviour conditional on agent attributes, age-dependent behaviour, and stochastic waiting times, are tightly integrated in the language. Thereby, alternative decisions are modelled by concurrent processes that compete by stochastic race. Using a migration model, we demonstrate how this allows for compact description of complex decisions, here based on the Theory of Planned Behaviour. We describe the challenges for the simulation algorithm posed by stochastic race between multiple concurrent complex decisions.
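The "stochastic race" semantics has a compact probabilistic core in the Markovian case: each competing alternative draws an exponential waiting time and the earliest one fires, so the winner is chosen with probability proportional to its rate and the winning time is exponential with the summed rate. A sketch (names are assumed, and real ML3 decision processes need not be exponential):

```python
import numpy as np

def stochastic_race(rates, rng):
    """Race between exponential clocks: alternative i draws
    t_i ~ Exp(rates[i]); the smallest time wins. Classic result:
    P(i wins) = rates[i]/sum(rates); winning time ~ Exp(sum(rates))."""
    times = rng.exponential(1.0 / np.asarray(rates))
    winner = int(np.argmin(times))
    return winner, float(times[winner])
```

Concurrent complex decisions in the language then amount to repeatedly re-running such races as agents' attribute-dependent rates change.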

  2. Stochastic Evolution Dynamic of the Rock-Scissors-Paper Game Based on a Quasi Birth and Death Process

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu

    2016-06-01

Stochasticity plays an important role in the evolutionary dynamics of cyclic dominance within a finite population. To investigate the stochastic evolution of the behaviour of boundedly rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state-dependent Quasi Birth and Death (QBD) process. We assume that boundedly rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the game's stochastic evolutionary dynamics. The numerical results are exhibited as pseudo-colour ternary heat maps. Comparisons of these diagrams show that the convergence of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable, respectively, according to the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.

  3. Stochastic Evolution Dynamic of the Rock-Scissors-Paper Game Based on a Quasi Birth and Death Process.

    PubMed

    Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu

    2016-06-27

Stochasticity plays an important role in the evolutionary dynamics of cyclic dominance within a finite population. To investigate the stochastic evolution of the behaviour of boundedly rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state-dependent Quasi Birth and Death (QBD) process. We assume that boundedly rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the game's stochastic evolutionary dynamics. The numerical results are exhibited as pseudo-colour ternary heat maps. Comparisons of these diagrams show that the convergence of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable, respectively, according to the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.

  4. PROPAGATOR: a synchronous stochastic wildfire propagation model with distributed computation engine

    NASA Astrophysics Data System (ADS)

D'Andrea, M.; Fiorucci, P.; Biondi, G.; Negro, D.

    2012-04-01

PROPAGATOR is a stochastic model of forest fire spread, useful as a rapid method for fire risk assessment. The model is based on a 2D stochastic cellular automaton. The domain of simulation is discretized using a square regular grid with cell size of 20x20 meters. The model uses high-resolution information such as elevation and type of vegetation on the ground. Input parameters are wind direction, speed and the ignition point of the fire. The simulation of fire propagation is done via a stochastic mechanism of propagation between a burning cell and a non-burning cell belonging to its neighbourhood, i.e. the 8 adjacent cells in the rectangular grid. The fire spreads from one cell to its neighbours with a certain base probability, defined using the vegetation types of two adjacent cells, and modified by taking into account the slope between them, wind direction and speed. The simulation is synchronous, and takes into account the time needed by the burning fire to cross each cell. Vegetation cover, slope, wind speed and direction affect the fire-propagation speed from cell to cell. The model simulates several mutually independent realizations of the same stochastic fire propagation process. Each of them provides a map of the area burned at each simulation time step. PROPAGATOR simulates self-extinction of the fire, and the propagation process continues until at least one cell of the domain is burning in each realization. The output of the model is a series of maps representing the probability that each cell of the domain is affected by the fire at each time step: these probabilities are obtained by evaluating the relative frequency of ignition of each cell with respect to the complete set of simulations. PROPAGATOR is available as a module in the OWIS (Opera Web Interfaces) system. The model simulation runs on a dedicated server and is remotely controlled from the client program, NAZCA.
Ignition points of the simulation can be selected directly in a high-resolution, three-dimensional graphical representation of the Italian territory within NAZCA. The other simulation parameters, namely wind speed and direction, number of simulations, computing grid size and temporal resolution, can be selected from within the program interface. The output of the simulation is shown in real time during the simulation and is also available off-line and on the DEWETRA system, a Web GIS-based system for environmental risk assessment, developed according to OGC-INSPIRE standards. The model execution is very fast, providing a full forecast for the scenario in a few minutes, and can be useful for real-time active fire management and suppression.
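A minimal sketch of the class of model this record describes: a stochastic cellular automaton in which fire spreads from a burning cell to each of its 8 neighbours with some probability, with per-cell burn probabilities estimated as relative frequencies over independent realizations. The grid size, uniform spread probability, and one-step burn-out rule are illustrative assumptions, not PROPAGATOR's calibrated vegetation-, slope- and wind-dependent values.

```python
import random

def simulate_fire(grid_size=50, p_spread=0.35, n_steps=30, seed=1):
    """One realization of a stochastic fire-spread cellular automaton.

    States: 0 = unburned, 1 = burning, 2 = burned out. Fire spreads from a
    burning cell to each of its 8 neighbours with probability p_spread
    (a single constant here; the model described above modulates it).
    """
    rng = random.Random(seed)
    grid = [[0] * grid_size for _ in range(grid_size)]
    c = grid_size // 2
    grid[c][c] = 1  # ignition point at the centre
    for _ in range(n_steps):
        ignitions = []
        for i in range(grid_size):
            for j in range(grid_size):
                if grid[i][j] != 1:
                    continue
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        if di == dj == 0:
                            continue
                        ni, nj = i + di, j + dj
                        if (0 <= ni < grid_size and 0 <= nj < grid_size
                                and grid[ni][nj] == 0
                                and rng.random() < p_spread):
                            ignitions.append((ni, nj))
                grid[i][j] = 2  # cell burns out after one step
        for ni, nj in ignitions:
            grid[ni][nj] = 1
        if not any(1 in row for row in grid):
            break  # self-extinction: no burning cell left
    return grid

def burn_probability(n_runs=50, **kwargs):
    """Relative frequency of burning per cell over independent realizations."""
    size = kwargs.get("grid_size", 50)
    counts = [[0] * size for _ in range(size)]
    for run in range(n_runs):
        g = simulate_fire(seed=run, **kwargs)
        for i in range(size):
            for j in range(size):
                if g[i][j] != 0:
                    counts[i][j] += 1
    return [[c / n_runs for c in row] for row in counts]
```

The map returned by `burn_probability` corresponds to the per-cell burn-probability output described in the abstract, with one map per chosen step budget rather than per time step.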

  5. Analyzing long-term correlated stochastic processes by means of recurrence networks: Potentials and pitfalls

    NASA Astrophysics Data System (ADS)

    Zou, Yong; Donner, Reik V.; Kurths, Jürgen

    2015-02-01

    Long-range correlated processes are ubiquitous, ranging from climate variables to financial time series. One paradigmatic example for such processes is fractional Brownian motion (fBm). In this work, we highlight the potentials and conceptual as well as practical limitations when applying the recently proposed recurrence network (RN) approach to fBm and related stochastic processes. In particular, we demonstrate that the results of a previous application of RN analysis to fBm [Liu et al. Phys. Rev. E 89, 032814 (2014), 10.1103/PhysRevE.89.032814] are mainly due to an inappropriate treatment disregarding the intrinsic nonstationarity of such processes. Complementarily, we analyze some RN properties of the closely related stationary fractional Gaussian noise (fGn) processes and find that the resulting network properties are well-defined and behave as one would expect from basic conceptual considerations. Our results demonstrate that RN analysis can indeed provide meaningful results for stationary stochastic processes, given a proper selection of its intrinsic methodological parameters, whereas it is prone to fail to uniquely retrieve RN properties for nonstationary stochastic processes like fBm.
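The recurrence network construction itself is easy to sketch: threshold the pairwise state-space distances of a time series to obtain an adjacency matrix whose nodes are time points. The snippet below is a generic epsilon-recurrence network for a scalar series without embedding, not the specific parameter choices of the paper.

```python
import numpy as np

def recurrence_network(x, eps):
    """Adjacency matrix of the eps-recurrence network of a scalar series:
    nodes are time points, linked when their states lie within eps."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise state-space distances
    adj = (dist < eps).astype(int)
    np.fill_diagonal(adj, 0)                 # convention: no self-loops
    return adj

def edge_density(adj):
    """Fraction of realized links among all possible node pairs."""
    n = adj.shape[0]
    return adj.sum() / (n * (n - 1))
```

For a stationary series such as white noise or fGn, network properties like the edge density are well-defined functions of eps; for a nonstationary fBm-like series they depend strongly on the particular realization, which is the pitfall the paper analyzes.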

  6. Evolution with Stochastic Fitness and Stochastic Migration

    PubMed Central

    Rice, Sean H.; Papadopoulos, Anthony

    2009-01-01

    Background Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biassed - overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. 
The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory. PMID:19816580

  7. Detection and localization of change points in temporal networks with the aid of stochastic block models

    NASA Astrophysics Data System (ADS)

    De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan

    2016-11-01

A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We apply five different change point detection techniques to prototypical temporal networks, both empirical and synthetic. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and recall of change point detection, we find that the method based on a degree-corrected SBM has better recall properties than the other dedicated methods, especially for sparse networks and smaller sliding time window widths.
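The flavor of model-based change point detection can be illustrated with a far simpler parametric family than GHRGs or SBMs: treat each snapshot's edge count as binomial with an unknown density and place the change point where splitting the sequence into two constant-density segments maximizes the likelihood. This is a hypothetical toy detector, not the method of Peel and Clauset nor any of the five techniques compared in the paper.

```python
import math

def bernoulli_loglik(edge_counts, trials):
    """Maximized Bernoulli log-likelihood of a segment of edge counts,
    each count modeled as Binomial(trials, p) with a shared p."""
    s, n = sum(edge_counts), trials * len(edge_counts)
    if s == 0 or s == n:
        return 0.0
    p = s / n
    return s * math.log(p) + (n - s) * math.log(1 - p)

def best_change_point(edge_counts, possible_edges):
    """Index tau maximizing the two-segment likelihood of the sequence
    of per-snapshot edge counts."""
    best_tau, best_ll = None, -math.inf
    for tau in range(1, len(edge_counts)):
        ll = (bernoulli_loglik(edge_counts[:tau], possible_edges)
              + bernoulli_loglik(edge_counts[tau:], possible_edges))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau
```

On a synthetic sequence whose density jumps from sparse to dense, the likelihood split recovers the jump; real temporal networks require richer families (GHRG, SBM) precisely because structure can change without the overall density changing.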

  8. H∞ filtering for stochastic systems driven by Poisson processes

    NASA Astrophysics Data System (ADS)

    Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya

    2015-01-01

This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising martingale theory, in particular the predictable projection operator and the dual predictable projection operator, this paper transforms the expectation of a stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. Based on this, an H∞ filter is designed such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.

  9. Simulation of stochastic wind action on transmission power lines

    NASA Astrophysics Data System (ADS)

    Wielgos, Piotr; Lipecki, Tomasz; Flaga, Andrzej

    2018-01-01

The paper presents an FEM analysis of the wind action on overhead transmission power lines. The wind action is based on a stochastic simulation of the wind field at several points of the structure and on wind tunnel tests of the aerodynamic coefficients of a single conductor consisting of three wires. In the FEM calculations, a section of the transmission power line composed of three spans is considered. Non-linear analysis with the dead weight of the structure is performed first to obtain the deformed shape of the conductors. Next, time-dependent wind forces are applied to the respective points of the conductors and a non-linear dynamic analysis is carried out.

  10. Stochastic effects in hybrid inflation

    NASA Astrophysics Data System (ADS)

    Martin, Jérôme; Vennin, Vincent

    2012-02-01

    Hybrid inflation is a two-field model where inflation ends due to an instability. In the neighborhood of the instability point, the potential is very flat and the quantum fluctuations dominate over the classical motion of the inflaton and waterfall fields. In this article, we study this regime in the framework of stochastic inflation. We numerically solve the two coupled Langevin equations controlling the evolution of the fields and compute the probability distributions of the total number of e-folds and of the inflation exit point. Then, we discuss the physical consequences of our results, in particular, the question of how the quantum diffusion can affect the observable predictions of hybrid inflation.
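Numerically solving coupled Langevin equations of the kind described above usually means an Euler-Maruyama scheme. The sketch below is a generic two-field integrator with an illustrative relaxing drift, not the actual hybrid-inflation potential or the stochastic-inflation noise amplitude.

```python
import math
import random

def euler_maruyama_2field(drift, sigma, x0, dt, n_steps, seed=0):
    """Integrate the coupled Langevin system dX_i = a_i(X) dt + sigma dW_i
    for a two-component field X by the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(n_steps):
        a = drift(x)
        x = [x[i] + a[i] * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
             for i in range(2)]
    return x

def relax(x):
    """Illustrative drift only: both fields relax toward the origin."""
    return [-x[0], -x[1]]
```

Repeating the integration over many noise realizations and recording when and where a chosen exit condition is met yields the kind of e-fold and exit-point distributions computed in the paper.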

  11. A multiple-point geostatistical approach to quantifying uncertainty for flow and transport simulation in geologically complex environments

    NASA Astrophysics Data System (ADS)

    Cronkite-Ratcliff, C.; Phelps, G. A.; Boucher, A.

    2011-12-01

    In many geologic settings, the pathways of groundwater flow are controlled by geologic heterogeneities which have complex geometries. Models of these geologic heterogeneities, and consequently, their effects on the simulated pathways of groundwater flow, are characterized by uncertainty. Multiple-point geostatistics, which uses a training image to represent complex geometric descriptions of geologic heterogeneity, provides a stochastic approach to the analysis of geologic uncertainty. Incorporating multiple-point geostatistics into numerical models provides a way to extend this analysis to the effects of geologic uncertainty on the results of flow simulations. We present two case studies to demonstrate the application of multiple-point geostatistics to numerical flow simulation in complex geologic settings with both static and dynamic conditioning data. Both cases involve the development of a training image from a complex geometric description of the geologic environment. Geologic heterogeneity is modeled stochastically by generating multiple equally-probable realizations, all consistent with the training image. Numerical flow simulation for each stochastic realization provides the basis for analyzing the effects of geologic uncertainty on simulated hydraulic response. The first case study is a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. The SNESIM algorithm is used to stochastically model geologic heterogeneity conditioned to the mapped surface geology as well as vertical drill-hole data. Numerical simulation of groundwater flow and contaminant transport through geologic models produces a distribution of hydraulic responses and contaminant concentration results. From this distribution of results, the probability of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary. 
The second case study considers a characteristic lava-flow aquifer system in Pahute Mesa, Nevada. A 3D training image is developed by using object-based simulation of parametric shapes to represent the key morphologic features of rhyolite lava flows embedded within ash-flow tuffs. In addition to vertical drill-hole data, transient pressure head data from aquifer tests can be used to constrain the stochastic model outcomes. The use of both static and dynamic conditioning data allows the identification of potential geologic structures that control hydraulic response. These case studies demonstrate the flexibility of the multiple-point geostatistics approach for considering multiple types of data and for developing sophisticated models of geologic heterogeneities that can be incorporated into numerical flow simulations.

  12. A stochastic chemostat model with an inhibitor and noise independent of population sizes

    NASA Astrophysics Data System (ADS)

    Sun, Shulin; Zhang, Xiaolu

    2018-02-01

In this paper, a stochastic chemostat model with an inhibitor is considered, where the inhibitor is input from an external source and two organisms in the chemostat compete for a nutrient. Firstly, we show that the system has a unique global positive solution. Secondly, by constructing suitable Lyapunov functions, we show that the time average of the second moment of the solutions of the stochastic model is bounded for relatively small noise. That is, the asymptotic behavior of the stochastic system around the equilibrium points of the deterministic system is studied. However, sufficiently large noise can drive the microorganisms to extinction with probability one, even though the solutions of the original deterministic model may be persistent. Finally, the analytical results are illustrated by computer simulations.

  13. Evaluation of Uncertainty in Runoff Analysis Incorporating Theory of Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshimi, Kazuhiro; Wang, Chao-Wen; Yamada, Tadashi

    2015-04-01

The aim of this paper is to provide a theoretical framework for uncertainty estimation in rainfall-runoff analysis based on the theory of stochastic processes. SDEs (stochastic differential equations) based on this theory have been widely used in mathematical finance to predict stock price movements, and some researchers in civil engineering have also applied SDEs (e.g., Kurino et al., 1999; Higashino and Kanda, 2001). However, there have been no studies evaluating uncertainty in runoff phenomena through comparisons between an SDE and its Fokker-Planck equation. The Fokker-Planck equation is a partial differential equation that describes the temporal evolution of a PDF (probability density function), and an SDE and its corresponding Fokker-Planck equation are mathematically equivalent descriptions of the same process. In this paper, therefore, the uncertainty of discharge arising from the uncertainty of rainfall is explained theoretically and mathematically by introducing the theory of stochastic processes. The lumped rainfall-runoff model is written as an SDE in difference form, with the temporal variation of rainfall expressed as its average plus a deviation approximated by a Gaussian distribution; this representation is based on rainfall observed by rain-gauge stations and a radar rain-gauge system. As a result, this paper shows that it is possible to evaluate the uncertainty of discharge by using the relationship between the SDE and the Fokker-Planck equation. Moreover, the results show that the uncertainty of discharge increases as rainfall intensity rises and as the nonlinearity of the flow resistance grows stronger. These results are clarified by the PDFs of discharge that satisfy the Fokker-Planck equation.
This means that a reasonable discharge estimate can be obtained based on the theory of stochastic processes and applied to probabilistic flood-risk management.
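The SDE/Fokker-Planck equivalence the abstract leans on can be checked numerically on the simplest example: for an Ornstein-Uhlenbeck process dX = -θ(X - μ) dt + σ dW, the stationary solution of the Fokker-Planck equation is Gaussian with variance σ²/(2θ), which an Euler-Maruyama simulation of the SDE should reproduce. This is a hypothetical illustration, not the paper's rainfall-runoff model.

```python
import math
import random

def ou_samples(theta=1.0, mu=0.0, sigma=0.5, dt=0.01, n_steps=20000, seed=42):
    """Euler-Maruyama simulation of dX = -theta (X - mu) dt + sigma dW,
    returning post-transient samples of the state."""
    rng = random.Random(seed)
    x, samples = mu, []
    for k in range(n_steps):
        x += -theta * (x - mu) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if k > n_steps // 2:          # discard the transient half
            samples.append(x)
    return samples

# Stationary Fokker-Planck solution for this drift/diffusion is Gaussian
# with mean mu and variance sigma**2 / (2 * theta); here 0.5**2 / 2 = 0.125.
```

Comparing the empirical variance of the samples against σ²/(2θ) is the discrete analogue of checking a simulated discharge ensemble against the PDF that satisfies the Fokker-Planck equation.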

  14. First passage Brownian functional properties of snowmelt dynamics

    NASA Astrophysics Data System (ADS)

    Dubey, Ashutosh; Bandyopadhyay, Malay

    2018-04-01

In this paper, we model snowmelt dynamics in terms of a Brownian motion (BM) with purely time-dependent drift and diffusion and examine its first passage properties by suggesting and examining several Brownian functionals which characterize the lifetime and reactivity of such stochastic processes. We introduce several probability distribution functions (PDFs) associated with such time-dependent BMs. For instance, for a BM with initial starting point x0, we derive analytical expressions for: (i) the PDF P(tf|x0) of the first passage time tf, which specifies the lifetime of such a stochastic process; (ii) the PDF P(A|x0) of the area A swept out until the first passage time, which provides valuable information about the total fresh water available during melting; (iii) the PDF P(M) associated with the maximum size M of the BM process before the first passage time; and (iv) the joint PDF P(M; tm) of the maximum size M and its occurrence time tm before the first passage time. P(M) and P(M; tm) are useful in determining the time of maximum fresh water availability and in calculating the total maximum amount of available fresh water. These PDFs are examined for power-law time-dependent drift and diffusion, and the results match quite well with the available data on snowmelt dynamics.
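The functionals named above are straightforward to estimate by Monte Carlo. The sketch below uses constant drift and diffusion instead of the paper's power-law time-dependent coefficients, because the constant-drift case has the known check E[tf] = x0/|mu| (Wald's identity); all parameter values are illustrative.

```python
import math
import random

def first_passage_stats(x0=1.0, mu=-1.0, sigma=0.5, dt=1e-3,
                        max_steps=50_000, n_runs=500, seed=0):
    """Monte Carlo estimates of the first passage time t_f and the area
    A = integral of x dt up to t_f for dX = mu dt + sigma dW started at
    x0 > 0 and absorbed at 0."""
    rng = random.Random(seed)
    times, areas = [], []
    for _ in range(n_runs):
        x, t, area = x0, 0.0, 0.0
        for _ in range(max_steps):
            area += x * dt                 # accumulate area while x > 0
            x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
            if x <= 0.0:                   # absorbed: record functionals
                times.append(t)
                areas.append(area)
                break
    return times, areas
```

Histogramming `times` and `areas` gives empirical versions of P(tf|x0) and P(A|x0); tracking the running maximum inside the loop would likewise yield P(M) and P(M; tm).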

  15. Anomalous sea surface structures as an object of statistical topography

    NASA Astrophysics Data System (ADS)

    Klyatskin, V. I.; Koshel, K. V.

    2015-06-01

By exploiting ideas of statistical topography, we analyze the stochastic boundary problem of the emergence of anomalously high structures on the sea surface. The kinematic boundary condition on the sea surface is assumed to be a closed stochastic quasilinear equation. Applying the stochastic Liouville equation, and treating the given hydrodynamic velocity field as stochastic within the diffusion approximation, we derive an equation for the spatially single-point, simultaneous joint probability density of the surface elevation field and its gradient. An important feature of the model is that it accounts for stochastic bottom irregularities as one perturbation among others, rather than the only one. Hence, we adopt the assumption of an infinitely deep ocean to obtain statistical features of the surface elevation field and the squared elevation gradient field. According to the calculations, clustering in the absolute surface elevation gradient field occurs with unit probability. This results in the emergence of rare events, such as anomalously high structures and deep gaps on the sea surface, in almost every realization of the stochastic velocity field.

  16. Separation of Dynamics in the Free Energy Landscape

    NASA Astrophysics Data System (ADS)

    Ekimoto, Toru; Odagaki, Takashi; Yoshimori, Akira

    2008-02-01

    The dynamics of a representative point in a model free energy landscape (FEL) is analyzed by the Langevin equation with the FEL as the driving potential. From the detailed analysis of the generalized susceptibility, fast, slow and Johari-Goldstein (JG) processes are shown to be well described by the FEL. Namely, the fast process is determined by the stochastic motion confined in a basin of the FEL and the relaxation time is related to the curvature of the FEL at the bottom of the basin. The jump motion among basins gives rise to the slow relaxation whose relaxation time is determined by the distribution of the barriers in the FEL and the JG process is produced by weak modulation of the FEL.

  17. Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover

    NASA Technical Reports Server (NTRS)

    Dangelo, K. R.

    1974-01-01

    A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data in order to improve the accuracy of the model's gradient. Least square approximation is used in order to stochastically determine the parameters which describe the modeled surface. A complete error analysis of the modeling procedure is included which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process which includes the acquisition of data points, the two-step modeling process and the error analysis. Finally, to illustrate the procedure, a numerical example is included.

  18. Demonstration of fundamental statistics by studying timing of electronics signals in a physics-based laboratory

    NASA Astrophysics Data System (ADS)

    Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.

    2017-07-01

    We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
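The first relationship described above is easy to demonstrate in software as well as with nuclear counting hardware: drawing i.i.d. exponential inter-arrival times produces a constant-rate Markov point process, and the event counts in fixed windows then follow a Poisson distribution, whose hallmark is that the variance equals the mean. Parameter values below are illustrative.

```python
import random

def poisson_counts_from_exponential(rate=5.0, t_window=1.0,
                                    n_windows=2000, seed=7):
    """Generate event times with i.i.d. exponential inter-arrival times
    (a constant-rate Markov point process) and count events per window.
    The counts should be Poisson-distributed with mean rate * t_window."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_windows):
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(rate)     # exponential waiting time
            if t > t_window:
                break
            n += 1
        counts.append(n)
    return counts
```

The overdispersed (negative binomial) case mentioned in the abstract would show up here as a sample variance noticeably exceeding the sample mean.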

  19. Stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobczyk, K.

    1990-01-01

This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed, in particular an insight is given into both the mathematical structure, and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.

  20. Relativistic analysis of stochastic kinematics

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-10-01

The relativistic analysis of stochastic kinematics is developed in order to determine the transformation of the effective diffusivity tensor in inertial frames. Poisson-Kac stochastic processes are initially considered. For one-dimensional spatial models, the effective diffusion coefficient measured in a frame Σ moving with velocity w with respect to the rest frame of the stochastic process is inversely proportional to the third power of the Lorentz factor γ(w) = (1 - w²/c²)^(-1/2). Subsequently, higher-dimensional processes are analyzed and it is shown that the diffusivity tensor in a moving frame becomes nonisotropic: the diffusivities parallel and orthogonal to the velocity of the moving frame scale differently with respect to γ(w). The analysis of discrete space-time diffusion processes permits one to obtain a general transformation theory of the tensor diffusivity, confirmed by several different simulation experiments. Several implications of the theory are also addressed and discussed.

  1. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    PubMed

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. 
In regulatory circuits that require precise coordination, ODE modeling is thus still expected to provide relevant indications on the underlying dynamics.

  2. Anomalous scaling of stochastic processes and the Moses effect

    NASA Astrophysics Data System (ADS)

    Chen, Lijian; Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2017-04-01

The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
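A sketch of how an overall scaling (Hurst-type) exponent H is commonly measured from data: fit the growth of the displacement width with lag, std(x[t+lag] - x[t]) ~ lag^H, by least squares in log-log coordinates. This width-based estimator is a generic illustration; the paper defines separate Joseph, Noah, and Moses exponents via dedicated estimators not reproduced here.

```python
import math
import random

def hurst_exponent(x, lags=(1, 2, 4, 8, 16, 32, 64)):
    """Estimate H from std(x[t+lag] - x[t]) ~ lag^H via a least-squares
    slope of log(std) against log(lag)."""
    points = []
    for lag in lags:
        diffs = [x[i + lag] - x[i] for i in range(len(x) - lag)]
        m = sum(diffs) / len(diffs)
        var = sum((d - m) ** 2 for d in diffs) / len(diffs)
        points.append((math.log(lag), 0.5 * math.log(var)))
    n = len(points)
    sx = sum(a for a, _ in points); sy = sum(b for _, b in points)
    sxx = sum(a * a for a, _ in points); sxy = sum(a * b for a, b in points)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

def brownian_path(n, seed=3):
    """Ordinary Brownian motion: cumulative sum of i.i.d. Gaussian steps."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n):
        x.append(x[-1] + rng.gauss(0.0, 1.0))
    return x
```

For ordinary Brownian motion the estimate should sit near the textbook value H = 1/2; the decomposition in the paper attributes departures from 1/2 to the Joseph, Noah, and Moses mechanisms.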

  3. Anomalous scaling of stochastic processes and the Moses effect.

    PubMed

    Chen, Lijian; Bassler, Kevin E; McCauley, Joseph L; Gunaratne, Gemunu H

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.

  4. Diffusive processes in a stochastic magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.; Vlad, M.; Vanden Eijnden, E.

    1995-05-01

The statistical representation of a fluctuating (stochastic) magnetic field configuration is studied in detail. The Eulerian correlation functions of the magnetic field are determined, taking into account all geometrical constraints: these objects form a nondiagonal matrix. The Lagrangian correlations, within the reasonable Corrsin approximation, are reduced to a single scalar function, determined by an integral equation. The mean square perpendicular deviation of a geometrical point moving along a perturbed field line is determined by a nonlinear second-order differential equation. The separation of neighboring field lines in a stochastic magnetic field is studied. We find exponentiation lengths of both signs describing, in particular, a decay (on the average) of any initial anisotropy. The vanishing sum of these exponentiation lengths ensures the existence of an invariant which was overlooked in previous works. Next, the separation of a particle's trajectory from the magnetic field line to which it was initially attached is studied by a similar method. Here too an initial phase of exponential separation appears. Assuming the existence of a final diffusive phase, anomalous diffusion coefficients are found for both weakly and strongly collisional limits. The latter is identical to the well known Rechester-Rosenbluth coefficient, which is obtained here by a more quantitative (though not entirely deductive) treatment than in earlier works.

  5. The Separatrix Algorithm for Synthesis and Analysis of Stochastic Simulations with Applications in Disease Modeling

    PubMed Central

    Klein, Daniel J.; Baym, Michael; Eckhoff, Philip

    2014-01-01

    Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision making process in the context of a goal-oriented objective (e.g., eradicate polio by ), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which “success” is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g. 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria. PMID:25078087
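The core statistical step, estimating a success-probability surface from Bernoulli trials, can be sketched with plain Nadaraya-Watson kernel regression over a one-dimensional parameter. The actual Separatrix Algorithm adds the density correction and the adaptive information-gathering experiment design, both omitted here; the function and its parameters are illustrative.

```python
import math

def success_probability(samples, x_query, bandwidth=0.1):
    """Kernel-smoothed estimate of P(success | x) from Bernoulli trials
    (x_i, y_i) with y_i in {0, 1}: Nadaraya-Watson regression with a
    Gaussian kernel of the given bandwidth."""
    num = den = 0.0
    for x, y in samples:
        w = math.exp(-0.5 * ((x - x_query) / bandwidth) ** 2)
        num += w * y
        den += w
    return num / den if den > 0.0 else 0.5   # no data near x_query
```

Scanning `x_query` over the parameter range and reading off where the smoothed estimate crosses the target probability (e.g. 95%) locates the separatrix in this toy one-dimensional setting.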

  6. The stochastic system approach for estimating dynamic treatments effect.

    PubMed

    Commenges, Daniel; Gégout-Petit, Anne

    2015-10-01

The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of HAART on CD4 counts. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference, however, needs assumptions that we detail in this paper, and we call this approach to causality the "stochastic system" approach. First we treat this problem in discrete time, then in continuous time. This approach allows biological knowledge to be incorporated naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system from the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and more reliable results.

  7. An analytically tractable model for community ecology with many species

    NASA Astrophysics Data System (ADS)

    Dickens, Benjamin; Fisher, Charles; Mehta, Pankaj; Pankaj Mehta Biophysics Theory Group Team

    A fundamental problem in community ecology is to understand how ecological processes such as selection, drift, and immigration yield observed patterns in species composition and diversity. Here, we present an analytically tractable, presence-absence (PA) model for community assembly and use it to ask how ecological traits such as the strength of competition, diversity in competition, and stochasticity affect species composition in a community. In our PA model, we treat species as stochastic binary variables that can either be present or absent in a community: species can immigrate into the community from a regional species pool and can go extinct due to competition and stochasticity. Despite its simplicity, the PA model reproduces the qualitative features of more complicated models of community assembly. In agreement with recent work on large, competitive Lotka-Volterra systems, the PA model exhibits distinct ecological behaviors organized around a special (``critical'') point corresponding to Hubbell's neutral theory of biodiversity. Our results suggest that the concepts of ``phases'' and phase diagrams can provide a powerful framework for thinking about community ecology and that the PA model captures the essential ecological dynamics of community assembly. P.M. was supported by a Simons Investigator award in the Mathematical Modeling of Living Systems and a Sloan Research Fellowship.

  8. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem.

    PubMed

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A; Hazen, Terry C; Tiedje, James M; Arkin, Adam P

    2014-03-04

    Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, there are limited successional studies available to support different cases in the conceptual framework, but further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession.

  9. Large Deviations for Stationary Probabilities of a Family of Continuous Time Markov Chains via Aubry-Mather Theory

    NASA Astrophysics Data System (ADS)

    Lopes, Artur O.; Neumann, Adriana

    2015-05-01

    In the present paper, we consider a family of continuous time symmetric random walks indexed by , . For each the matching random walk takes values in the finite set of states ; notice that is a subset of , where is the unitary circle. The infinitesimal generator of such a chain is denoted by . The stationary probability for such a process converges to the uniform distribution on the circle, when . Here we want to study other natural measures, obtained via a limit on , that are concentrated on some points of . We will disturb this process by a potential and study for each the perturbed stationary measures of this new process when . We disturb the system considering a fixed potential and we will denote by the restriction of to . Then, we define a non-stochastic semigroup generated by the matrix , where is the infinitesimal generator of . From the continuous time Perron's Theorem one can normalize such a semigroup, and then we get another stochastic semigroup which generates a continuous time Markov Chain taking values on . This new chain is called the continuous time Gibbs state associated to the potential , see (Lopes et al. in J Stat Phys 152:894-933, 2013). The stationary probability vector for such a Markov Chain is denoted by . We assume that the maximum of is attained in a unique point of , and from this it will follow that . Thus, here, our main goal is to analyze the large deviation principle for the family , when . The deviation function , which is defined on , will be obtained from a procedure based on fixed points of the Lax-Oleinik operator and Aubry-Mather theory. In order to obtain the associated Lax-Oleinik operator we use Varadhan's Lemma for the process . For a careful analysis of the problem we present full details of the proof of the Large Deviation Principle, in the Skorohod space, for such a family of Markov Chains, when . Finally, we compute the entropy of the invariant probabilities on the Skorohod space associated to the Markov Chains we analyze.

  10. Stochastic stability of sigma-point Unscented Predictive Filter.

    PubMed

    Cao, Lu; Tang, Yu; Chen, Xiaoqian; Zhao, Yong

    2015-07-01

    In this paper, the Unscented Predictive Filter (UPF) is derived based on the unscented transformation for nonlinear estimation, which breaks the confines of conventional sigma-point filters, which have merely employed the Kalman filter as the subject of investigation. In order to facilitate the new method, the algorithm flow of the UPF is given first. Then, theoretical analyses demonstrate that the estimation accuracy of the model error and system for the UPF is higher than that of the conventional PF. Moreover, the authors analyze the stochastic boundedness and the error behavior of the UPF for general nonlinear systems in a stochastic framework. In particular, the theoretical results show that the estimation error remains bounded and the covariance remains stable if the system's initial estimation error, the disturbing noise terms, and the model error are small enough, which is the core of the UPF theory. All of the results have been demonstrated by numerical simulations for a nonlinear example system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
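
    For reference, the unscented transformation underlying all sigma-point methods can be sketched in the scalar case. This is the generic textbook transform, not the UPF itself; the test function and tuning parameters are illustrative:

```python
import math

def unscented_transform_1d(m, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scalar unscented transform: propagate mean m and variance P through f
    using 2n+1 = 3 deterministically chosen sigma points."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * P)
    sigmas = [m, m + spread, m - spread]
    wm0 = lam / (n + lam)                     # central weight (mean)
    wc0 = wm0 + (1 - alpha ** 2 + beta)       # central weight (covariance)
    wi = 1.0 / (2 * (n + lam))                # symmetric-point weight
    ys = [f(s) for s in sigmas]
    mean = wm0 * ys[0] + wi * (ys[1] + ys[2])
    var = (wc0 * (ys[0] - mean) ** 2
           + wi * ((ys[1] - mean) ** 2 + (ys[2] - mean) ** 2))
    return mean, var

# Linear map sanity check: f(x) = 2x + 1 maps (m, P) to (2m + 1, 4P) exactly.
m_out, P_out = unscented_transform_1d(1.0, 0.5, lambda x: 2 * x + 1)
```

    For linear maps the transform is exact; its value shows for nonlinear f, where it captures second-order effects that a first-order linearization misses.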

  11. Dynamic phase transitions and dynamic phase diagrams of the Blume-Emery-Griffiths model in an oscillating field: the effective-field theory based on the Glauber-type stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Ertaş, Mehmet; Keskin, Mustafa

    2015-06-01

    Using the effective-field theory based on the Glauber-type stochastic dynamics (DEFT), we investigate dynamic phase transitions and dynamic phase diagrams of the Blume-Emery-Griffiths model under an oscillating magnetic field. We present the dynamic phase diagrams in the (T/J, h0/J), (D/J, T/J) and (K/J, T/J) planes, where T, h0, D, K and z are the temperature, magnetic field amplitude, crystal-field interaction, biquadratic interaction and the coordination number, respectively. The dynamic phase diagrams exhibit several ordered phases, coexistence phase regions and special critical points, as well as re-entrant behavior depending on interaction parameters. We also compare and discuss the results with the results of the same system within the mean-field theory based on the Glauber-type stochastic dynamics and find that some of the dynamic first-order phase lines and special dynamic critical points disappear in the DEFT calculation.

  12. Analytical approximations for spatial stochastic gene expression in single cells and tissues

    PubMed Central

    Smith, Stephen; Cianci, Claudia; Grima, Ramon

    2016-01-01

    Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction–diffusion master equation (RDME) describing stochastic reaction–diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction–diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues. PMID:27146686

  13. Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory

    PubMed Central

    Tao, Qing

    2017-01-01

    Online time series prediction is the mainstream method in a wide range of fields, ranging from speech analysis and noise cancelation to stock market analysis. However, the data often contains many outliers with the increasing length of time series in real world. These outliers can mislead the learned model if treated as normal points in the process of prediction. To address this issue, in this paper, we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) to predict time series with outliers. This method tunes the learning rate of the stochastic gradient algorithm adaptively in the process of prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average through modifying Adam, a popular stochastic gradient method algorithm for training deep neural networks. In our algorithm, the large value of the relative prediction error corresponds to a small learning rate, and vice versa. The experiments on both synthetic data and real time series show that our method achieves better performance compared to the existing methods based on LSTM. PMID:29391864

  14. Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory.

    PubMed

    Yang, Haimin; Pan, Zhisong; Tao, Qing

    2017-01-01

    Online time series prediction is the mainstream method in a wide range of fields, ranging from speech analysis and noise cancelation to stock market analysis. However, the data often contains many outliers with the increasing length of time series in real world. These outliers can mislead the learned model if treated as normal points in the process of prediction. To address this issue, in this paper, we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) to predict time series with outliers. This method tunes the learning rate of the stochastic gradient algorithm adaptively in the process of prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average through modifying Adam, a popular stochastic gradient method algorithm for training deep neural networks. In our algorithm, the large value of the relative prediction error corresponds to a small learning rate, and vice versa. The experiments on both synthetic data and real time series show that our method achieves better performance compared to the existing methods based on LSTM.
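
    A highly simplified single-step sketch in the spirit of the update described above: an Adam-style step whose effective learning rate is divided by an exponential moving average of the relative prediction error, so large errors (likely outliers) damp the step. The state layout, clamping bounds, and constants here are invented for illustration and differ from the authors' exact RoAdam algorithm:

```python
import math

def roadam_like_step(theta, grad, state, base_lr=0.01,
                     beta1=0.9, beta2=0.999, beta3=0.99, eps=1e-8):
    """One Adam-style update whose effective learning rate is divided by an
    EMA of the relative prediction error, damping steps caused by outliers.
    (Invented simplification; not the authors' exact RoAdam update.)"""
    state['t'] += 1
    state['m'] = beta1 * state['m'] + (1 - beta1) * grad
    state['v'] = beta2 * state['v'] + (1 - beta2) * grad ** 2
    rel = state['loss'] / max(state['prev_loss'], eps)  # relative prediction error
    state['d'] = beta3 * state['d'] + (1 - beta3) * min(max(rel, 0.1), 10.0)
    m_hat = state['m'] / (1 - beta1 ** state['t'])      # bias-corrected moments
    v_hat = state['v'] / (1 - beta2 ** state['t'])
    # an outlier (large d) shrinks the step; a small d restores it
    return theta - (base_lr / state['d']) * m_hat / (math.sqrt(v_hat) + eps)

state = {'t': 0, 'm': 0.0, 'v': 0.0, 'd': 1.0, 'loss': 1.0, 'prev_loss': 1.0}
theta = roadam_like_step(5.0, 2.0, state)  # steps downhill against grad > 0
```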

  15. Some remarks on quantum physics, stochastic processes, and nonlinear filtering theory

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam

    2016-05-01

    The mathematical similarities between quantum mechanics and stochastic processes has been studied in the literature. Some of the major results are reviewed, such as the relationship between the Fokker-Planck equation and the Schrödinger equation. Also reviewed are more recent results that show the mathematical similarities between quantum many particle systems and concepts in other areas of applied science, such as stochastic Petri nets. Some connections to filtering theory are discussed.

  16. Stochastic modification of the Schrödinger-Newton equation

    NASA Astrophysics Data System (ADS)

    Bera, Sayantani; Mohan, Ravi; Singh, Tejinder P.

    2015-07-01

    The Schrödinger-Newton (SN) equation describes the effect of self-gravity on the evolution of a quantum system, and it has been proposed that gravitationally induced decoherence drives the system to one of the stationary solutions of the SN equation. However, the equation itself lacks a decoherence mechanism, because it does not possess any stochastic feature. In the present work we derive a stochastic modification of the Schrödinger-Newton equation, starting from the Einstein-Langevin equation in the theory of stochastic semiclassical gravity. We specialize this equation to the case of a single massive point particle, and by using Karolyhazy's phase variance method, we derive the Diósi-Penrose criterion for the decoherence time. We obtain a (nonlinear) master equation corresponding to this stochastic SN equation. This equation is, however, linear at the level of the approximation we use to prove decoherence; hence, the no-signaling requirement is met. Lastly, we use physical arguments to obtain expressions for the decoherence length of extended objects.

  17. Stochastic bifurcation in a model of love with colored noise

    NASA Astrophysics Data System (ADS)

    Yue, Xiaokui; Dai, Honghua; Yuan, Jianping

    2015-07-01

    In this paper, we wish to examine the stochastic bifurcation induced by multiplicative Gaussian colored noise in a dynamical model of love, where the random factor is used to describe the complexity and unpredictability of psychological systems. First, the dynamics of the deterministic love-triangle model are considered briefly, including equilibrium points and their stability, chaotic behaviors and chaotic attractors. Then, the influences of Gaussian colored noise with different parameters are explored, such as the phase plots, top Lyapunov exponents, stationary probability density function (PDF) and stochastic bifurcation. The stochastic P-bifurcation, a qualitative change of the stationary PDF, is observed, and a bifurcation diagram on the parameter plane of correlation time and noise intensity is presented to examine the bifurcation behaviors in detail. Finally, the top Lyapunov exponent is computed to determine the D-bifurcation when the noise intensity reaches a critical value. By comparison, we find there is no connection between the two kinds of stochastic bifurcation.
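
    As a concrete illustration of a top-Lyapunov-exponent computation, consider a linear SDE with multiplicative white (not colored) noise, where the exponent is known in closed form; the parameters below are illustrative:

```python
import math
import random

def top_lyapunov_gbm(a=1.0, sigma=1.0, steps=500000, dt=2e-3, seed=11):
    """Monte-Carlo estimate of the top Lyapunov exponent of the linear SDE
    dx = a*x dt + sigma*x dW. By Ito's lemma, log x performs Brownian motion
    with drift a - sigma**2/2, which is the exact Lyapunov exponent, so the
    D-bifurcation (sign change of the exponent) occurs at a = sigma**2/2."""
    rng = random.Random(seed)
    log_x = 0.0
    for _ in range(steps):
        # exact Gaussian increment of log x over dt
        log_x += ((a - 0.5 * sigma ** 2) * dt
                  + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
    return log_x / (steps * dt)

lam = top_lyapunov_gbm()  # analytic value a - sigma**2/2 = 0.5; the estimate fluctuates around it
```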

  18. Mean Field Games for Stochastic Growth with Relative Utility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Minyi, E-mail: mhuang@math.carleton.ca; Nguyen, Son Luu, E-mail: sonluu.nguyen@upr.edu

    This paper considers continuous time stochastic growth-consumption optimization in a mean field game setting. The individual capital stock evolution is determined by a Cobb–Douglas production function, consumption and stochastic depreciation. The individual utility functional combines an own utility and a relative utility with respect to the population. The use of the relative utility reflects human psychology, leading to a natural pattern of mean field interaction. The fixed point equation of the mean field game is derived with the aid of some ordinary differential equations. Due to the relative utility interaction, our performance analysis depends on a ratio-based approximation error estimate.

  19. A Family of Poisson Processes for Use in Stochastic Models of Precipitation

    NASA Astrophysics Data System (ADS)

    Penland, C.

    2013-12-01

    Both modified Poisson processes and compound Poisson processes can be relevant to stochastic parameterization of precipitation. This presentation compares the dynamical properties of these systems and discusses the physical situations in which each might be appropriate. If the parameters describing either class of systems originate in hydrodynamics, then proper consideration of stochastic calculus is required during numerical implementation of the parameterization. It is shown here that an improper numerical treatment can have severe implications for estimating rainfall distributions, particularly in the tails of the distributions and, thus, on the frequency of extreme events.
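
    A compound Poisson process of the kind discussed can be simulated by drawing exponential inter-arrival times and i.i.d. event depths; the rate and mean-depth values below are invented for illustration:

```python
import random

def compound_poisson_rain(rate, mean_depth, t_end, seed=1):
    """Cumulative rainfall from a compound Poisson process: event times from a
    Poisson process with intensity `rate` (events/day), event depths i.i.d.
    exponential with mean `mean_depth` (mm)."""
    rng = random.Random(seed)
    t, total, events = 0.0, 0.0, 0
    while True:
        t += rng.expovariate(rate)          # exponential inter-arrival time
        if t > t_end:
            break
        total += rng.expovariate(1.0 / mean_depth)
        events += 1
    return total, events

# One synthetic "year": the expected total is rate * t_end * mean_depth = 1460 mm.
total, events = compound_poisson_rain(rate=0.5, mean_depth=8.0, t_end=365.0)
```

    Because the jump sizes are heavy relative to a Gaussian increment, naive diffusion approximations of such a process can badly misestimate the tails, which is the numerical-treatment concern raised in the abstract.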

  20. Markovian limit for a reduced operation-valued stochastic process

    NASA Astrophysics Data System (ADS)

    Barchielli, Alberto

    1987-04-01

    Operation-valued stochastic processes give a formalization of the concept of continuous (in time) measurements in quantum mechanics. In this article, a first stage M of a measuring apparatus coupled to the system S is explicitly introduced, and continuous measurement of some observables of M is considered (one can speak of an indirect continuous measurement on S). When the degrees of freedom of the measuring apparatus M are eliminated and the weak coupling limit is taken, it is shown that an operation-valued stochastic process describing a direct continuous observation of the system S is obtained.

  1. Models for interrupted monitoring of a stochastic process

    NASA Technical Reports Server (NTRS)

    Palmer, E.

    1977-01-01

    As computers are added to the cockpit, the pilot's job is changing from one of manually flying the aircraft to one of supervising computers which are doing navigation, guidance and energy management calculations as well as automatically flying the aircraft. In this supervisory role the pilot must divide his attention between monitoring the aircraft's performance and giving commands to the computer. Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies.

  2. Stochastic assembly in a subtropical forest chronosequence: evidence from contrasting changes of species, phylogenetic and functional dissimilarity over succession.

    PubMed

    Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping

    2016-09-07

    Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.

  3. Stochastic point-source modeling of ground motions in the Cascadia region

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    1997-01-01

    A stochastic model is used to develop preliminary ground motion relations for the Cascadia region for rock sites. The model parameters are derived from empirical analyses of seismographic data from the Cascadia region. The model is based on a Brune point-source characterized by a stress parameter of 50 bars. The model predictions are compared to ground-motion data from the Cascadia region and to data from large earthquakes in other subduction zones. The point-source simulations match the observations from moderate events (M 100 km). The discrepancy at large magnitudes suggests further work on modeling finite-fault effects and regional attenuation is warranted. In the meantime, the preliminary equations are satisfactory for predicting motions from events of M < 7 and provide conservative estimates of motions from larger events at distances less than 100 km.

  4. Cascades on a stochastic pulse-coupled network

    NASA Astrophysics Data System (ADS)

    Wray, C. M.; Bishop, S. R.

    2014-09-01

    While much recent research has focused on understanding isolated cascades of networks, less attention has been given to dynamical processes on networks exhibiting repeated cascades of opposing influence. An example of this is the dynamic behaviour of financial markets where cascades of buying and selling can occur, even over short timescales. To model these phenomena, a stochastic pulse-coupled oscillator network with upper and lower thresholds is described and analysed. Numerical confirmation of asynchronous and synchronous regimes of the system is presented, along with analytical identification of the fixed point state vector of the asynchronous mean field system. A lower bound for the finite system mean field critical value of network coupling probability is found that separates the asynchronous and synchronous regimes. For the low-dimensional mean field system, a closed-form equation is found for cascade size, in terms of the network coupling probability. Finally, a description of how this model can be applied to interacting agents in a financial market is provided.

  5. Cascades on a stochastic pulse-coupled network

    PubMed Central

    Wray, C. M.; Bishop, S. R.

    2014-01-01

    While much recent research has focused on understanding isolated cascades of networks, less attention has been given to dynamical processes on networks exhibiting repeated cascades of opposing influence. An example of this is the dynamic behaviour of financial markets where cascades of buying and selling can occur, even over short timescales. To model these phenomena, a stochastic pulse-coupled oscillator network with upper and lower thresholds is described and analysed. Numerical confirmation of asynchronous and synchronous regimes of the system is presented, along with analytical identification of the fixed point state vector of the asynchronous mean field system. A lower bound for the finite system mean field critical value of network coupling probability is found that separates the asynchronous and synchronous regimes. For the low-dimensional mean field system, a closed-form equation is found for cascade size, in terms of the network coupling probability. Finally, a description of how this model can be applied to interacting agents in a financial market is provided. PMID:25213626

  6. Stochastic modelling of animal movement.

    PubMed

    Smouse, Peter E; Focardi, Stefano; Moorcroft, Paul R; Kie, John G; Forester, James D; Morales, Juan M

    2010-07-27

    Modern animal movement modelling derives from two traditions. Lagrangian models, based on random walk behaviour, are useful for multi-step trajectories of single animals. Continuous Eulerian models describe expected behaviour, averaged over stochastic realizations, and are usefully applied to ensembles of individuals. We illustrate three modern research arenas. (i) Models of home-range formation describe the process of an animal 'settling down', accomplished by including one or more focal points that attract the animal's movements. (ii) Memory-based models are used to predict how accumulated experience translates into biased movement choices, employing reinforced random walk behaviour, with previous visitation increasing or decreasing the probability of repetition. (iii) Lévy movement involves a step-length distribution that is over-dispersed, relative to standard probability distributions, and adaptive in exploring new environments or searching for rare targets. Each of these modelling arenas implies more detail in the movement pattern than general models of movement can accommodate, but realistic empiric evaluation of their predictions requires dense locational data, both in time and space, only available with modern GPS telemetry.
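
    The over-dispersed step lengths characteristic of Lévy movement can be drawn by inverse-transform sampling from a power law; the exponent and lower cutoff below are illustrative choices:

```python
import random

def levy_steps(n, mu=2.0, l_min=1.0, seed=42):
    """Draw n step lengths from a power law P(l) ~ l**(-mu) for l >= l_min
    (a Pareto distribution), via inverse-transform sampling."""
    rng = random.Random(seed)
    return [l_min * (1 - rng.random()) ** (-1.0 / (mu - 1.0)) for _ in range(n)]

steps = levy_steps(10000)
longest = max(steps)
typical = sorted(steps)[len(steps) // 2]   # median step length
```

    The heavy tail is what distinguishes a Lévy walk from a standard random walk: the longest step in a sample dwarfs the typical one, producing the occasional long relocations useful for searching sparse environments.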

  7. Some Applications of the Model of the Partition Points on a One-Dimensional Lattice

    NASA Astrophysics Data System (ADS)

    Mejdani, R.; Huseini, H.

    1996-02-01

    We have shown that by using a model of a gas of partition points on a one-dimensional lattice, we can find some results about the saturation curves for enzyme kinetics or the average domain size, which we have obtained before by using a correlated walks' theory or a probabilistic (combinatoric) approach. We have studied, using the same model and the same technique, the denaturation process, i.e., the breaking of the hydrogen bonds connecting the two strands, under treatment by heat. Also, we have discussed, without entering into details, the problem related to the spread of an infectious disease and the stochastic model of partition points. We think that this model, being simple and mathematically transparent, can be advantageous for other theoretical investigations in chemistry or modern biology. PACS NOS.: 05.50. + q; 05.70.Ce; 64.10.+h; 87.10. +e; 87.15.Rn

  8. An information-based approach to change-point analysis with applications to biophysics and cell biology.

    PubMed

    Wiggins, Paul A

    2015-07-21

    This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
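
    The closed-form maximum-likelihood machinery behind locating a single change in the mean of a Gaussian signal can be sketched as follows. This is the textbook profile-likelihood search assuming a known unit variance, not the paper's frequentist information criterion:

```python
def best_change_point(x):
    """Locate the most likely single change in mean of a Gaussian signal by
    maximizing the profile log-likelihood over all split positions. The MLEs
    of the segment means are the sample means, so the gain of a split over a
    single-mean model reduces to a closed-form expression in running sums."""
    n = len(x)
    total = sum(x)
    best_k, best_gain = None, float('-inf')
    left = 0.0
    for k in range(1, n):                # candidate split: x[:k] | x[k:]
        left += x[k - 1]
        right = total - left
        # log-likelihood gain (up to constants) for unit-variance Gaussians
        gain = left ** 2 / k + right ** 2 / (n - k) - total ** 2 / n
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k

signal = [0.1, -0.2, 0.0, 0.2, -0.1, 5.1, 4.9, 5.0, 5.2, 4.8]
cp = best_change_point(signal)   # the mean jumps at index 5
```

    An information criterion such as the paper's then decides whether the best split's gain justifies the extra parameters, guarding against detecting spurious change points in pure noise.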

  9. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2014-03-04

    We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.
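
    The need for nonlinear, boundary-vanishing diffusion terms can be seen in the scalar (neutral) Wright-Fisher diffusion, one of the processes the work exemplifies. A minimal Euler-Maruyama sketch with invented parameters:

```python
import math
import random

def wright_fisher_path(x0=0.5, steps=10000, dt=1e-4, seed=7):
    """Euler-Maruyama path of the neutral Wright-Fisher diffusion
    dx = sqrt(x(1-x)) dW. The nonlinear diffusion coefficient vanishes at the
    boundaries, which is what keeps x and 1-x non-negative with unit sum."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        x += math.sqrt(max(x * (1 - x), 0.0) * dt) * rng.gauss(0.0, 1.0)
        x = min(max(x, 0.0), 1.0)   # clip discretization overshoot only
        path.append(x)
    return path

path = wright_fisher_path()
```

    A constant (additive) diffusion term in the same scheme would leak probability outside [0, 1]; the clipping here only corrects finite-step discretization error, not a defect of the continuous model.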

  10. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakosi, J.; Ristorcelli, J. R.

    We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.

  11. Multivariate moment closure techniques for stochastic kinetic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, it comes to an interplay between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  12. A stochastic model for soft tissue failure using acoustic emission data.

    PubMed

    Sánchez-Molina, D; Martínez-González, E; Velázquez-Ameijide, J; Llumà, J; Rebollo Soria, M C; Arregui-Dalmases, C

    2015-11-01

    The strength of soft tissues is due mainly to collagen fibers. In most collagenous tissues the arrangement of the fibers is random, but with preferred directions. This random arrangement makes it difficult to make deterministic predictions about the onset of fiber breaking under tension. When subjected to tensile stress, the fibers progressively straighten out and then begin to stretch. At the onset of fiber breaking, some fibers reach their maximum tensile strength and break while others remain unstressed (these remaining fibers then bear greater stress until they eventually reach their own failure points). In this study, a sample of human esophagi was subjected to tensile loading, with progressive breaking of fibers, up to the complete failure of the specimen. An experimental setup using acoustic emission to detect the released elastic energy was used during the test to locate the emissions and count the number of micro-failures per unit time. The data were statistically analyzed and compared to a stochastic model relating the level of stress in the tissue to the probability of breaking given the number of previously broken fibers (i.e., the deterioration of the tissue). The probability of a fiber breaking as the stretch increases can be represented by a non-homogeneous Markov process, which is the basis of the proposed stochastic model. This paper shows that a two-parameter model can account for the fiber breaking and that the expected distribution of the ultimate stress is a Fréchet distribution. Copyright © 2015 Elsevier Ltd. All rights reserved.
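    The Fréchet ultimate-stress distribution mentioned in the abstract can be sampled by inverse-CDF transformation. This is a generic sketch (the shape and scale parameters are illustrative), not the authors' fitted model:

    ```python
    import math
    import random

    def sample_frechet(alpha, scale=1.0, rng=random):
        """Inverse-CDF draw from a Fréchet distribution with shape `alpha`
        and scale `scale`: F(x) = exp(-(x / scale) ** -alpha) for x > 0."""
        u = rng.random()
        while u == 0.0:            # guard: random() lies in [0, 1)
            u = rng.random()
        return scale * (-math.log(u)) ** (-1.0 / alpha)
    ```

    The transformation follows from solving F(x) = u for x; the theoretical median is scale * (ln 2)^(-1/alpha), which gives a quick sanity check on samples.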

  13. The critical domain size of stochastic population models.

    PubMed

    Reimer, Jody R; Bonsall, Michael B; Maini, Philip K

    2017-02-01

    Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternative approaches: one is a scaling up to the population level using the Central Limit Theorem, and the other is a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
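    A Galton-Watson branching process of the kind invoked in the abstract can be simulated in a few lines. The Poisson offspring law, the generation cap, and the "escape size" cut-off below are illustrative assumptions for a sketch, not the authors' model:

    ```python
    import math
    import random

    def poisson(lam, rng):
        """Knuth's method for Poisson(lam) draws (adequate for small lam)."""
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    def goes_extinct(offspring_mean, rng, max_gen=60, escape_size=500):
        """Simulate a Galton-Watson process from one founder and report
        whether the lineage dies out.  A population above `escape_size`
        is treated as having escaped extinction (a practical cut-off)."""
        pop = 1
        for _ in range(max_gen):
            if pop == 0:
                return True
            if pop > escape_size:
                return False
            pop = sum(poisson(offspring_mean, rng) for _ in range(pop))
        return pop == 0

    def extinction_probability(offspring_mean, trials=500, seed=0):
        """Monte Carlo estimate of the extinction probability."""
        rng = random.Random(seed)
        return sum(goes_extinct(offspring_mean, rng) for _ in range(trials)) / trials
    ```

    For a subcritical mean (below 1) extinction is certain; for a supercritical mean the extinction probability is the smallest root of q = G(q), where G is the offspring generating function, and the simulation estimate converges to it.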

  14. Subcritical Hopf Bifurcation and Stochastic Resonance of Electrical Activities in Neuron under Electromagnetic Induction

    PubMed Central

    Fu, Yu-Xuan; Kang, Yan-Mei; Xie, Yong

    2018-01-01

    The FitzHugh–Nagumo model is improved to consider the effect of electromagnetic induction on a single neuron. On the basis of investigating the Hopf bifurcation behavior of the improved model, stochastic resonance in the stochastic version is captured near the bifurcation point. It is revealed that a weak harmonic oscillation in the electromagnetic disturbance can be amplified through stochastic resonance, and that it is the cooperative effect of random transitions between the resting state and the large-amplitude oscillating state that results in the resonant phenomenon. Using the noise dependence of the mean of the interburst intervals, we suggest a biologically feasible clue for detecting weak signals by means of a neuron model with a subcritical Hopf bifurcation. These observations should be helpful in understanding the influence of magnetic fields on neural electrical activity. PMID:29467642
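    A minimal Euler-Maruyama sketch of a noisy FitzHugh-Nagumo neuron is shown below. It uses the classic textbook parameters (a = 0.7, b = 0.8, eps = 0.08) with additive noise on the fast variable, not the paper's electromagnetic-induction variant; the input current and noise level are illustrative.

    ```python
    import math
    import random

    def simulate_fhn(i_ext=0.5, noise=0.3, dt=0.05, steps=20000, seed=2):
        """Euler-Maruyama integration of the stochastic FitzHugh-Nagumo model
            dv = (v - v**3 / 3 - w + i_ext) dt + noise dW
            dw = eps * (v + a - b * w) dt
        with a = 0.7, b = 0.8, eps = 0.08.  Returns the membrane-variable path."""
        rng = random.Random(seed)
        v, w = -1.0, -0.5
        vs = []
        for _ in range(steps):
            dv = (v - v ** 3 / 3.0 - w + i_ext) * dt \
                 + noise * rng.gauss(0.0, math.sqrt(dt))
            dw = 0.08 * (v + 0.7 - 0.8 * w) * dt
            v, w = v + dv, w + dw
            vs.append(v)
        return vs
    ```

    The cubic term keeps the trajectory bounded; sweeping the noise amplitude while driving the system with a weak periodic input is the usual way such sketches are extended to probe stochastic resonance.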

  16. Time-ordered product expansions for computational stochastic system biology.

    PubMed

    Mjolsness, Eric

    2013-06-01

    The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
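    Gillespie's direct-method SSA, which the paper rederives from the time-ordered product expansion, looks like this for a simple birth-death network; the reaction set and rates here are illustrative, not from the paper:

    ```python
    import math
    import random

    def gillespie_birth_death(birth, death, x0, t_max, seed=0):
        """Gillespie's direct method for the network
           0 -> X  at rate `birth`        (constant source)
           X -> 0  at rate `death * X`    (first-order decay).
        Returns the jump times and the state after each jump."""
        rng = random.Random(seed)
        t, x = 0.0, x0
        times, states = [t], [x]
        while t < t_max:
            a1 = birth
            a2 = death * x
            a0 = a1 + a2
            if a0 == 0.0:
                break
            # Exponentially distributed waiting time to the next reaction.
            t += -math.log(rng.random()) / a0
            # Choose which reaction fires, proportionally to its propensity.
            if rng.random() * a0 < a1:
                x += 1
            else:
                x -= 1
            times.append(t)
            states.append(x)
        return times, states
    ```

    For this network the stationary distribution is Poisson with mean birth/death, which gives a convenient check on long simulated trajectories.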

  17. Semiclassical stochastic mechanics for the Coulomb potential with applications to modelling dark matter

    NASA Astrophysics Data System (ADS)

    Neate, Andrew; Truman, Aubrey

    2016-05-01

    Little is known about dark matter particles save that their most important interactions with ordinary matter are gravitational and that, if they exist, they are stable, slow moving and relatively massive. Based on these assumptions, a semiclassical approximation to the Schrödinger equation under the action of a Coulomb potential should be relevant for modelling their behaviour. We investigate the semiclassical limit of the Schrödinger equation for a particle of mass M under a Coulomb potential in the context of Nelson's stochastic mechanics. This is done using a Freidlin-Wentzell asymptotic series expansion in the parameter ε = √(ħ/M) for the Nelson diffusion. It is shown that for wave functions ψ ~ exp((R + iS)/ε²), where R and S are real valued, the ε = 0 behaviour is governed by a constrained Hamiltonian system with Hamiltonian H^r and constraint H^i = 0, where the superscripts r and i denote the real and imaginary parts of the Bohr correspondence limit of the quantum mechanical Hamiltonian, independent of Nelson's ideas. Nelson's stochastic mechanics is restored in dealing with the nodal surface singularities and by computing (correct to first order in ε) the relevant diffusion process in terms of Jacobi fields, thereby revealing Kepler's laws in a new light. The key here is that the constrained Hamiltonian system has just two solutions, corresponding to the forward and backward drifts in Nelson's stochastic mechanics. We discuss the application of this theory to modelling dark matter particles under the influence of a large gravitating point mass.

  18. The Two-On-One Stochastic Duel

    DTIC Science & Technology

    1983-12-01

    THE TWO-ON-ONE STOCHASTIC DUEL. Prepared by A.V. Gafarian and C.J. Ancker, Jr. December 1983. Report TRASANA-TR-43-83; Final Report. Keywords: stochastic duels, stochastic processes, and attrition.

  19. Practical Unitary Simulator for Non-Markovian Complex Processes

    NASA Astrophysics Data System (ADS)

    Binder, Felix C.; Thompson, Jayne; Gu, Mile

    2018-06-01

    Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction for simulators for a large class of stochastic processes hence directly opening the possibility for experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.

  20. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. Hence, we study the biological events occurring in this process and use microphysiological simulations to observe their functionality. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs), followed by the influx of calcium ions through these channels, we propose a stochastic model for this event while using a deterministic model for the other sources of variability. To capture the stochasticity of the calcium influx into the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined in terms of the Normal and Logistic distributions.

  1. Derivation of kinetic equations from non-Wiener stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Basharov, A. M.

    2013-12-01

    Kinetic differential-difference equations containing terms with fractional derivatives and describing α-stable Lévy processes with 0 < α < 1 have been derived in a unified manner in terms of one-dimensional stochastic differential equations controlled merely by Poisson processes.

  2. Using Multi-Objective Genetic Programming to Synthesize Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Ross, Brian; Imada, Janine

    Genetic programming is used to automatically construct stochastic processes written in the stochastic π-calculus. Grammar-guided genetic programming constrains search to useful process algebra structures. The time-series behaviour of a target process is denoted with a suitable selection of statistical feature tests. Feature tests can permit complex process behaviours to be effectively evaluated. However, they must be selected with care, in order to accurately characterize the desired process behaviour. Multi-objective evaluation is shown to be appropriate for this application, since it permits heterogeneous statistical feature tests to reside as independent objectives. Multiple undominated solutions can be saved and evaluated after a run, for determination of those that are most appropriate. Since there can be a vast number of candidate solutions, however, strategies for filtering and analyzing this set are required.

  3. Reduced equations of motion for quantum systems driven by diffusive Markov processes.

    PubMed

    Sarovar, Mohan; Grace, Matthew D

    2012-09-28

    The expansion of a stochastic Liouville equation for the coupled evolution of a quantum system and an Ornstein-Uhlenbeck process into a hierarchy of coupled differential equations is a useful technique that simplifies the simulation of stochastically driven quantum systems. We expand the applicability of this technique by completely characterizing the class of diffusive Markov processes for which a useful hierarchy of equations can be derived. The expansion of this technique enables the examination of quantum systems driven by non-Gaussian stochastic processes with bounded range. We present an application of this extended technique by simulating Stark-tuned Förster resonance transfer in Rydberg atoms with nonperturbative position fluctuations.

  4. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    NASA Astrophysics Data System (ADS)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

    In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used to solve the deterministic and stochastic cases of the Phi-4 equation and the nonlinear Foam Drainage equation. The effect of controlling the randomness of the input on the stability of the stochastic process solution is also studied.

  5. Statistical Features of the 2010 Beni-Ilmane, Algeria, Aftershock Sequence

    NASA Astrophysics Data System (ADS)

    Hamdache, M.; Peláez, J. A.; Gospodinov, D.; Henares, J.

    2018-03-01

    The aftershock sequence of the 2010 Beni-Ilmane (Mw 5.5) earthquake is studied in depth to analyze the spatial and temporal variability of the seismicity parameters of the relationships modeling the sequence. The b value of the frequency-magnitude distribution is examined rigorously. A threshold magnitude of completeness equal to 2.1, using the maximum curvature procedure or the change-point algorithm, and a b value equal to 0.96 ± 0.03 have been obtained for the entire sequence. Two clusters have been identified and characterized by their faulting type, exhibiting b values equal to 0.99 ± 0.05 and 1.04 ± 0.05. Additionally, the temporal decay of the aftershock sequence was examined using a stochastic point process. The analysis was done through the restricted epidemic-type aftershock sequence (RETAS) stochastic model, which makes it possible to recognize the prevailing clustering pattern of the relaxation process in the examined area. The analysis selected the epidemic-type aftershock sequence (ETAS) model as offering the most appropriate description of the temporal distribution, which presumes that all events in the sequence can cause secondary aftershocks. Finally, the fractal dimensions are estimated using the correlation integral. The obtained D2 values are 2.15 ± 0.01, 2.23 ± 0.01 and 2.17 ± 0.02 for the entire sequence and for the first and second clusters, respectively. An analysis of the temporal evolution of the fractal dimensions D−2, D0, D2 and the spectral slope has also been performed to derive and characterize the different clusters included in the sequence.
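    Gutenberg-Richter b values of the kind quoted above are typically obtained with Aki's maximum-likelihood formula, b = log10(e) / (mean(M) − Mc). The sketch below recovers a known b from a synthetic catalog; all numbers are illustrative and unrelated to the Beni-Ilmane data.

    ```python
    import math
    import random

    def b_value_mle(mags, m_c):
        """Aki's maximum-likelihood estimator of the Gutenberg-Richter
        b value, using only events at or above the completeness
        magnitude m_c: b = log10(e) / (mean(M) - m_c)."""
        above = [m for m in mags if m >= m_c]
        mean_m = sum(above) / len(above)
        return math.log10(math.e) / (mean_m - m_c)

    def synthetic_catalog(b_true, m_c, n, seed=4):
        """Magnitudes above m_c drawn from the Gutenberg-Richter law,
        i.e. exponential with rate beta = b_true * ln(10)."""
        rng = random.Random(seed)
        beta = b_true * math.log(10.0)
        return [m_c - math.log(1.0 - rng.random()) / beta for _ in range(n)]
    ```

    The standard error of the estimate scales roughly as b/√n, which is consistent with the ±0.03 quoted for the full sequence.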

  6. Soil pH mediates the balance between stochastic and deterministic assembly of bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Binu M.; Stegen, James C.; Kim, Mincheol

    Little is known about the factors affecting the relative influence of the stochastic and deterministic processes that govern the assembly of microbial communities in successional soils. Here, we conducted a meta-analysis of bacterial communities using six different successional soils data sets, scattered across different regions, with different pH conditions in early and late successional soils. We found that soil pH was the best predictor of bacterial community assembly and the relative importance of stochastic and deterministic processes along successional soils. Extreme acidic or alkaline pH conditions lead to assembly of phylogenetically more clustered bacterial communities through deterministic processes, whereas pH conditions close to neutral lead to phylogenetically less clustered bacterial communities with more stochasticity. We suggest that the influence of pH, rather than successional age, is the main driving force in producing trends in phylogenetic assembly of bacteria, and that pH also influences the relative balance of stochastic and deterministic processes along successional soils. Given that pH had a much stronger association with community assembly than did successional age, we evaluated whether the inferred influence of pH was maintained when studying globally-distributed samples collected without regard for successional age. This dataset confirmed the strong influence of pH, suggesting that the influence of soil pH on community assembly processes occurs globally. Extreme pH conditions likely exert more stringent limits on survival and fitness, imposing strong selective pressures through ecological and evolutionary time. Taken together, these findings suggest that the degree to which stochastic vs. deterministic processes shape soil bacterial community assembly is a consequence of soil pH rather than successional age.

  7. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem

    PubMed Central

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D.; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A.; Hazen, Terry C.; Tiedje, James M.; Arkin, Adam P.

    2014-01-01

    Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that the groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the role of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, only limited successional studies are available to support the different cases in the conceptual framework, and further well-replicated, explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession. PMID:24550501

  8. Data-driven monitoring for stochastic systems and its application on batch process

    NASA Astrophysics Data System (ADS)

    Yin, Shen; Ding, Steven X.; Haghani Abandan Sari, Adel; Hao, Haiyang

    2013-07-01

    Batch processes are characterised by a prescribed processing of raw materials into final products for a finite duration and play an important role in many industrial sectors due to their low-volume, high-value products. Process dynamics and stochastic disturbances are inherent characteristics of batch processes, which make the monitoring of batch processes a challenging problem in practice. To solve this problem, a subspace-aided data-driven approach is presented in this article for batch process monitoring. The advantages of the proposed approach lie in its simple form and its ability to deal with the stochastic disturbances and process dynamics existing in the process. Kernel density estimation, which serves as a non-parametric way of estimating the probability density function, is utilised for threshold calculation. An industrial benchmark of fed-batch penicillin production is finally utilised to verify the effectiveness of the proposed approach.
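    The kernel-density-based threshold calculation mentioned above can be sketched for a one-dimensional monitoring statistic: fit a Gaussian KDE to normal-operation data and take the desired quantile of the smoothed distribution as the alarm threshold. The bandwidth rule and quantile level below are common defaults, assumed here rather than taken from the paper.

    ```python
    import math

    def gaussian_kde_cdf(x, samples, bandwidth):
        """CDF of a Gaussian kernel density estimate evaluated at x."""
        return sum(0.5 * (1.0 + math.erf((x - s) / (bandwidth * math.sqrt(2.0))))
                   for s in samples) / len(samples)

    def kde_threshold(samples, alpha=0.99, bandwidth=None):
        """Monitoring threshold: the alpha-quantile of the KDE fitted to
        normal-operation data, located by bisection on the smooth CDF."""
        if bandwidth is None:
            # Silverman's rule of thumb for the kernel bandwidth.
            n = len(samples)
            mean = sum(samples) / n
            std = (sum((s - mean) ** 2 for s in samples) / n) ** 0.5
            bandwidth = 1.06 * std * n ** (-0.2)
        lo = min(samples) - 5.0 * bandwidth
        hi = max(samples) + 5.0 * bandwidth
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if gaussian_kde_cdf(mid, samples, bandwidth) < alpha:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    ```

    At run time, a monitoring statistic exceeding the threshold raises an alarm; because the KDE is non-parametric, no Gaussian assumption on the statistic is needed.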

  9. Stochastic evolutionary voluntary public goods game with punishment in a Quasi-birth-and-death process.

    PubMed

    Quan, Ji; Liu, Wei; Chu, Yuqing; Wang, Xianjia

    2017-11-23

    The traditional replicator dynamics model and the corresponding concept of an evolutionarily stable strategy (ESS) only take into account whether the system can return to equilibrium after being subjected to a small disturbance. In the real world, due to continuous noise, the ESS of the system may not be stochastically stable. In this paper, a model of the voluntary public goods game with punishment is studied in a stochastic setting. Unlike existing models, we describe the evolutionary process of strategies in the population as a generalized quasi-birth-and-death process, and we investigate the stochastic stable equilibrium (SSE) instead. By numerical experiments, we obtain all possible SSEs of the system for any combination of parameters and investigate the influence of the parameters on the probabilities of the system selecting different equilibria. It is found that in the stochastic situation the introduction of the punishment and non-participation strategies can change the evolutionary dynamics of the system and the equilibrium of the game. There is a large range of parameters in which the system selects the cooperative states as its SSE with high probability. This result provides insight into, and a control method for, the evolution of cooperation in the public goods game in stochastic situations.

  10. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    PubMed Central

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities were assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (Beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. 
Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association among belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333

  11. Stochastic dynamics and stable equilibrium of evolutionary optional public goods game in finite populations

    NASA Astrophysics Data System (ADS)

    Quan, Ji; Liu, Wei; Chu, Yuqing; Wang, Xianjia

    2018-07-01

    Continuous noise caused by mutation is widely present in evolutionary systems. Considering noise effects and the optional participation mechanism, a stochastic model for the evolutionary public goods game in a finite population is established. The evolutionary process of strategies in the population is described as a multidimensional, ergodic, continuous-time Markov process. The stochastic stable state of the system is analyzed via the limit distribution of the stochastic process. By numerical experiments, the influences of the fixed income coefficient for non-participants and of the investment income coefficient of the public goods on the stochastic stable equilibrium of the system are analyzed. From the numerical results we find that the optional participation mechanism can change the evolutionary dynamics and the equilibrium of the public goods game, and that there is a range of parameters that effectively promotes the evolution of cooperation. Further, we obtain an accurate quantitative relationship between the parameters and the probabilities of the system choosing different stable equilibria, which can be used to control cooperation.

  12. q-Gaussian distributions and multiplicative stochastic processes for analysis of multiple financial time series

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2010-12-01

    This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case, a q-Gaussian distribution can be derived theoretically as the stationary probability distribution of a multiplicative stochastic differential equation with mutually independent multiplicative and additive noises. Using the proposed stochastic differential equation, a method to evaluate the default probability under a given risk buffer is proposed.
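    The mechanism can be illustrated with a scalar SDE combining additive and multiplicative noise: for linear drift, dX = −γX dt + √(a + bX²) dW has a stationary density proportional to (a + bx²)^−(1+γ/b), i.e. power-law, Student-t-like (q-Gaussian) tails. The parameters below are illustrative, not those of the paper.

    ```python
    import math
    import random

    def simulate_mult_sde(gamma=1.0, a=1.0, b=0.5, dt=0.005,
                          steps=100000, seed=6):
        """Euler-Maruyama path of dX = -gamma*X dt + sqrt(a + b*X**2) dW.
        With gamma = 1, b = 0.5 the stationary tails fall off as x**-6,
        so the distribution is heavy-tailed but has finite kurtosis."""
        rng = random.Random(seed)
        x, xs = 0.0, []
        sqdt = math.sqrt(dt)
        for _ in range(steps):
            x += -gamma * x * dt + math.sqrt(a + b * x * x) * rng.gauss(0.0, sqdt)
            xs.append(x)
        return xs

    def excess_kurtosis(xs):
        """Sample excess kurtosis; zero for a Gaussian, positive here."""
        n = len(xs)
        m = sum(xs) / n
        m2 = sum((v - m) ** 2 for v in xs) / n
        m4 = sum((v - m) ** 4 for v in xs) / n
        return m4 / (m2 * m2) - 3.0
    ```

    Purely additive noise (b = 0) would give a Gaussian stationary law; it is the state-dependent noise amplitude that produces the q-Gaussian tails.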

  13. Modelling the cancer growth process by Stochastic Differential Equations with the effect of Chondroitin Sulfate (CS) as anticancer therapeutics

    NASA Astrophysics Data System (ADS)

    Syahidatul Ayuni Mazlan, Mazma; Rosli, Norhayati; Jauhari Arief Ichwan, Solachuddin; Suhaity Azmi, Nina

    2017-09-01

    A stochastic model is introduced to describe the growth of cancer affected by the anti-cancer therapeutic Chondroitin Sulfate (CS). The parameter values of the stochastic model are estimated via the maximum likelihood function. The Euler-Maruyama numerical method is employed to solve the model. The efficiency of the stochastic model is measured by comparing the simulated results with the experimental data.
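    The Euler-Maruyama scheme named in the abstract can be sketched on a generic stochastic logistic growth model, a common (but here assumed) form for tumour growth; the growth rate, carrying capacity and noise intensity are illustrative placeholders, not the paper's fitted values.

    ```python
    import math
    import random

    def euler_maruyama_logistic(x0, r, K, sigma, dt, steps, seed=0):
        """Euler-Maruyama discretisation of the stochastic logistic model
            dX = r * X * (1 - X / K) dt + sigma * X dW,
        returning the simulated growth path."""
        rng = random.Random(seed)
        x, path = x0, [x0]
        for _ in range(steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            x = x + r * x * (1.0 - x / K) * dt + sigma * x * dw
            x = max(x, 0.0)   # a tumour size cannot be negative
            path.append(x)
        return path
    ```

    In a calibration setting, such simulated paths would be compared against measured growth curves, with (r, K, sigma) chosen to maximise the likelihood of the observations.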

  14. Research in Stochastic Processes.

    DTIC Science & Technology

    1982-10-31

    Office of Scientific Research Grant AFOSR F49620 82 C 0009 Period: 1 November 1981 through 31 October 1982 Title: Research in Stochastic Processes Co...STAMATIS CAMBANIS The work briefly described here was developed in connection with problems arising from and related to the statistical communication

  15. Changing contributions of stochastic and deterministic processes in community assembly over a successional gradient.

    PubMed

    Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis

    2018-01-01

    Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. 
In conclusion, stochastic and individualistic species responses early in succession give way to more niche-driven dynamics in later successional stages. Grazing reduces predictability in both successional trends and species-level dynamics, especially in plant functional groups that are not well adapted to disturbance. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.

  16. Pricing foreign equity option under stochastic volatility tempered stable Lévy processes

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoli; Zhuang, Xintian

    2017-10-01

    Considering that financial asset returns exhibit leptokurtosis and asymmetry as well as clustering and heteroskedasticity effects, this paper substitutes the log-normal jumps in the Heston stochastic volatility model with the classical tempered stable (CTS) distribution and the normal tempered stable (NTS) distribution to construct stochastic volatility tempered stable Lévy process (TSSV) models. The TSSV model framework permits the infinite-activity jump behavior of return dynamics and the time-varying volatility consistently observed in financial markets by subordinating a tempered stable process to a stochastic volatility process, capturing the leptokurtosis, fat-tailedness and asymmetry of returns. By employing the analytical characteristic function and the fast Fourier transform (FFT) technique, the formula for the probability density function (PDF) of TSSV returns is derived, making an analytical formula for foreign equity option (FEO) pricing available. High-frequency financial returns data are employed to verify the effectiveness of the proposed models in reflecting the stylized facts of financial markets. Numerical analysis is performed to investigate the relationship between the corresponding parameters and the implied volatility of the foreign equity option.

  17. Kinetic theory of age-structured stochastic birth-death processes

    NASA Astrophysics Data System (ADS)

    Greenman, Chris D.; Chou, Tom

    2016-01-01

    Classical age-structured mass-action models such as the McKendrick-von Foerster equation have been extensively studied but are unable to describe stochastic fluctuations or population-size-dependent birth and death rates. Stochastic theories that treat semi-Markov age-dependent processes using, e.g., the Bellman-Harris equation do not resolve a population's age structure and are unable to quantify population-size dependencies. Conversely, current theories that include size-dependent population dynamics (e.g., mathematical models that include carrying capacity such as the logistic equation) cannot be easily extended to take into account age-dependent birth and death rates. In this paper, we present a systematic derivation of a new, fully stochastic kinetic theory for interacting age-structured populations. By defining multiparticle probability density functions, we derive a hierarchy of kinetic equations for the stochastic evolution of an aging population undergoing birth and death. We show that the fully stochastic age-dependent birth-death process precludes factorization of the corresponding probability densities, which then must be solved by using a Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY)-like hierarchy. Explicit solutions are derived in three limits: no birth, no death, and steady state. These are then compared with their corresponding mean-field results. Our results generalize both deterministic models and existing master equation approaches by providing an intuitive and efficient way to simultaneously model age- and population-dependent stochastic dynamics applicable to the study of demography, stem cell dynamics, and disease evolution.

  18. Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.

    PubMed

    Salis, Howard; Kaznessis, Yiannis

    2005-02-01

    The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
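The exact stochastic simulation algorithm that the hybrid method partitions can be sketched on the simplest possible system, a pure-death reaction A -> 0. The rate, population size, and function name below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ssa_decay(n0, k, t_end):
    """Exact SSA (Gillespie) trajectory for the single reaction A -> 0
    with propensity k*n; returns the copy number at t_end."""
    t, n = 0.0, n0
    while n > 0:
        t += rng.exponential(1.0 / (k * n))   # time to next reaction
        if t > t_end:
            break
        n -= 1                                # fire the reaction
    return n

final = [ssa_decay(100, 1.0, 1.0) for _ in range(2000)]
mean = sum(final) / len(final)               # theory: 100*exp(-1), about 36.8
```

The cost scaling that motivates the hybrid method is visible here: each reaction occurrence costs one loop iteration, so a fast reaction channel multiplies the work, which is exactly what the chemical Langevin approximation avoids.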

  19. A straightforward method to compute average stochastic oscillations from data samples.

    PubMed

    Júlvez, Jorge

    2015-10-19

    Many biological systems exhibit sustained stochastic oscillations in their steady state. Assessing these oscillations is usually a challenging task due to the potential variability of the amplitude and frequency of the oscillations over time. As a result of this variability, when several stochastic replications are averaged, the oscillations are flattened and can be overlooked. This can easily lead to the erroneous conclusion that the system reaches a constant steady state. This paper proposes a straightforward method to detect and assess stochastic oscillations. The method is based on the use of polar coordinates for systems with two species, and cylindrical coordinates for systems with more than two species. By slightly modifying these coordinate systems, it is possible to compute the total angular distance run by the system and the average Euclidean distance to a reference point. This allows us to compute confidence intervals, both for the average angular speed and for the distance to a reference point, from a set of replications. The use of polar (or cylindrical) coordinates provides a new perspective of the system dynamics. The mean trajectory that can be obtained by averaging the usual Cartesian coordinates of the samples informs about the trajectory of the center of mass of the replications. In contrast to such a mean Cartesian trajectory, the mean polar trajectory can be used to compute the average circular motion of those replications, and therefore can yield evidence about sustained steady-state oscillations. Both the coordinate transformation and the computation of confidence intervals can be carried out efficiently. This results in an efficient method to evaluate stochastic oscillations.
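A hedged sketch of this idea for a two-species system: convert each replication to polar coordinates, unwrap the angle to accumulate total angular distance, and average across replications. The trajectories below are synthetic noisy circles with random phases; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def polar_summary(xs, ys):
    """Total unwrapped angular distance and mean distance to the origin
    (the reference point) for one replication."""
    theta = np.unwrap(np.arctan2(ys, xs))
    return theta[-1] - theta[0], np.hypot(xs, ys).mean()

t = np.linspace(0, 4 * np.pi, 400)        # two full oscillation periods
angles, radii = [], []
for _ in range(20):                       # replications with random phase
    phase = rng.uniform(0, 2 * np.pi)
    x = np.cos(t + phase) + 0.05 * rng.standard_normal(t.size)
    y = np.sin(t + phase) + 0.05 * rng.standard_normal(t.size)
    a, r = polar_summary(x, y)
    angles.append(a)
    radii.append(r)

mean_angle = float(np.mean(angles))       # near 4*pi despite phase spread
mean_radius = float(np.mean(radii))       # near 1: oscillation not flattened
```

Averaging the Cartesian coordinates of these same replications would collapse the mean trajectory toward the origin because the phases differ, which is exactly the pitfall the polar summary avoids.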

  20. Online POMDP Algorithms for Very Large Observation Spaces

    DTIC Science & Technology

    2017-06-06

    stochastic optimization: From sets to paths." In Advances in Neural Information Processing Systems, pp. 1585-1593. 2015. • Luo, Yuanfu, Haoyu Bai...and Wee Sun Lee. "Adaptive stochastic optimization: From sets to paths." In Advances in Neural Information Processing Systems, pp. 1585-1593. 2015

  1. An Analysis of Stochastic Duels Involving Fixed Rates of Fire

    DTIC Science & Technology

    The thesis presents an analysis of stochastic duels involving two opposing weapon systems with constant rates of fire. The duel was developed as a...process stochastic duels. The analysis was then extended to the two versus one duel where the three weapon systems were assumed to have fixed rates of fire.

  2. STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB.

    PubMed

    Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K

    2011-04-15

    The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. The software is open source under the GPL v3 and available at http://www.maths.ox.ac.uk/cmb/STOCHSIMGPU. The web site also contains supplementary information. klingbeil@maths.ox.ac.uk Supplementary data are available at Bioinformatics online.

  3. A kinetic theory for age-structured stochastic birth-death processes

    NASA Astrophysics Data System (ADS)

    Chou, Tom; Greenman, Chris

    Classical age-structured mass-action models such as the McKendrick-von Foerster equation have been extensively studied but they are structurally unable to describe stochastic fluctuations or population-size-dependent birth and death rates. Conversely, current theories that include size-dependent population dynamics (e.g., carrying capacity) cannot be easily extended to take into account age-dependent birth and death rates. In this paper, we present a systematic derivation of a new fully stochastic kinetic theory for interacting age-structured populations. By defining multiparticle probability density functions, we derive a hierarchy of kinetic equations for the stochastic evolution of an aging population undergoing birth and death. We show that the fully stochastic age-dependent birth-death process precludes factorization of the corresponding probability densities, which then must be solved by using a BBGKY-like hierarchy. Our results generalize both deterministic models and existing master equation approaches by providing an intuitive and efficient way to simultaneously model age- and population-dependent stochastic dynamics applicable to the study of demography, stem cell dynamics, and disease evolution. NSF.

  4. Stochastic derivative-free optimization using a trust region framework

    DOE PAGES

    Larson, Jeffrey; Billups, Stephen C.

    2016-02-17

    This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. Finally, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
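A one-dimensional caricature of a model-based trust-region step on a noisy function may help fix ideas; it is not the paper's algorithm. The three-point quadratic fit, the agreement ratio test, and the expand/shrink factors below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_f(x):
    """Noise-corrupted oracle for the smooth test function f(x) = x**2."""
    return x ** 2 + 0.01 * rng.standard_normal()

def tr_minimize(x=3.0, delta=1.0, iters=60):
    """Fit a quadratic model to three points in the trust region, step to
    its minimizer, and grow/shrink delta by model-function agreement."""
    for _ in range(iters):
        pts = np.array([x - delta, x, x + delta])
        vals = np.array([noisy_f(p) for p in pts])
        c = np.polyfit(pts, vals, 2)             # local quadratic model
        if c[0] <= 0:                            # model not convex: shrink
            delta *= 0.5
            continue
        x_new = float(np.clip(-c[1] / (2 * c[0]), x - delta, x + delta))
        pred = np.polyval(c, x) - np.polyval(c, x_new)
        actual = noisy_f(x) - noisy_f(x_new)
        if pred > 1e-12 and actual / pred > 0.5:
            x, delta = x_new, delta * 1.5        # good agreement: expand
        else:
            delta *= 0.5                         # poor agreement: shrink
    return x

x_min = tr_minimize()                            # ends near the true minimum 0
```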

  5. Stochastic Dominance and Analysis of ODI Batting Performance: the Indian Cricket Team, 1989-2005

    PubMed Central

    Damodaran, Uday

    2006-01-01

    Relative to other team games, the contribution of individual team members to the overall team performance is more easily quantifiable in cricket. Viewing players as securities and the team as a portfolio, cricket thus lends itself better to the use of analytical methods usually employed in the analysis of securities and portfolios. This paper demonstrates the use of stochastic dominance rules, normally used in investment management, to analyze the One Day International (ODI) batting performance of Indian cricketers. The data used span the years 1989 to 2005. In dealing with cricketing data the existence of ‘not out’ scores poses a problem while processing the data. In this paper, using a Bayesian approach, the ‘not-out’ scores are first replaced with a conditional average. The conditional average that is used represents an estimate of the score that the player would have gone on to score, if the ‘not out’ innings had been completed. The data thus treated are then used in the stochastic dominance analysis. To use stochastic dominance rules we need to characterize the ‘utility’ of a batsman. The first derivative of the utility function, with respect to runs scored, of an ODI batsman can safely be assumed to be positive (more runs scored are preferred to less). However, the second derivative need not be negative (no diminishing marginal utility for runs scored). This means that we cannot clearly specify whether the value attached to an additional run scored is lesser at higher levels of scores. Because of this, only first-order stochastic dominance is used to analyze the performance of the players under consideration. While this has its limitation (specifically, we cannot arrive at a complete utility value for each batsman), the approach does well in describing player performance. Moreover, the results have intuitive appeal. 
Key Points: The problem of dealing with ‘not out’ scores in cricket is tackled using a Bayesian approach. Stochastic dominance rules are used to characterize the utility of a batsman. Since the marginal utility of runs scored is not diminishing in nature, only first-order stochastic dominance rules are used. The results, demonstrated using data for the Indian cricket team, are intuitively appealing. The limitation of the approach is that it cannot arrive at a complete utility value for the batsman. PMID:24357944
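First-order stochastic dominance itself reduces to an everywhere-comparison of empirical CDFs, which can be sketched directly. The scores below are hypothetical, not the paper's ODI data, and the check implements weak dominance (CDF of the dominating sample at or below the other everywhere).

```python
import numpy as np

def fsd_dominates(a, b):
    """Weak first-order stochastic dominance of sample a over sample b:
    the empirical CDF of a lies at or below that of b at every point."""
    grid = np.union1d(a, b)
    ecdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return bool(np.all(ecdf(a) <= ecdf(b)))

# Hypothetical (not actual ODI) score samples for two batsmen
batsman_a = np.array([10, 30, 50, 70, 90])
batsman_b = np.array([5, 25, 45, 65, 85])
a_over_b = fsd_dominates(batsman_a, batsman_b)   # True: higher everywhere
b_over_a = fsd_dominates(batsman_b, batsman_a)   # False
```

Because first-order dominance only assumes a monotone utility function, this comparison needs no assumption about diminishing marginal utility, matching the restriction argued in the abstract.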

  6. Stochastic Dominance and Analysis of ODI Batting Performance: the Indian Cricket Team, 1989-2005.

    PubMed

    Damodaran, Uday

    2006-01-01

    Relative to other team games, the contribution of individual team members to the overall team performance is more easily quantifiable in cricket. Viewing players as securities and the team as a portfolio, cricket thus lends itself better to the use of analytical methods usually employed in the analysis of securities and portfolios. This paper demonstrates the use of stochastic dominance rules, normally used in investment management, to analyze the One Day International (ODI) batting performance of Indian cricketers. The data used span the years 1989 to 2005. In dealing with cricketing data the existence of 'not out' scores poses a problem while processing the data. In this paper, using a Bayesian approach, the 'not-out' scores are first replaced with a conditional average. The conditional average that is used represents an estimate of the score that the player would have gone on to score, if the 'not out' innings had been completed. The data thus treated are then used in the stochastic dominance analysis. To use stochastic dominance rules we need to characterize the 'utility' of a batsman. The first derivative of the utility function, with respect to runs scored, of an ODI batsman can safely be assumed to be positive (more runs scored are preferred to less). However, the second derivative need not be negative (no diminishing marginal utility for runs scored). This means that we cannot clearly specify whether the value attached to an additional run scored is lesser at higher levels of scores. Because of this, only first-order stochastic dominance is used to analyze the performance of the players under consideration. While this has its limitation (specifically, we cannot arrive at a complete utility value for each batsman), the approach does well in describing player performance. Moreover, the results have intuitive appeal. 
Key Points: The problem of dealing with 'not out' scores in cricket is tackled using a Bayesian approach. Stochastic dominance rules are used to characterize the utility of a batsman. Since the marginal utility of runs scored is not diminishing in nature, only first-order stochastic dominance rules are used. The results, demonstrated using data for the Indian cricket team, are intuitively appealing. The limitation of the approach is that it cannot arrive at a complete utility value for the batsman.

  7. Chaotic Expansions of Elements of the Universal Enveloping Superalgebra Associated with a Z2-graded Quantum Stochastic Calculus

    NASA Astrophysics Data System (ADS)

    Eyre, T. M. W.

    Given a polynomial function f of classical stochastic integrator processes whose differentials satisfy a closed Ito multiplication table, we can express the stochastic derivative of f in chaotic-expansion form. We establish an analogue of this formula in the form of a chaotic decomposition for Z2-graded theories of quantum stochastic calculus based on the natural coalgebra structure of the universal enveloping superalgebra.

  8. Stochastic dynamics of melt ponds and sea ice-albedo climate feedback

    NASA Astrophysics Data System (ADS)

    Sudakov, Ivan

    Evolution of melt ponds on the Arctic sea surface is a complicated stochastic process. We suggest a low-order model with ice-albedo feedback that describes the stochastic dynamics of melt pond geometrical characteristics. The model is a stochastic dynamical system model of energy balance in the climate system. We describe the equilibria in this model. We conclude that the transition in the fractal dimension of melt ponds affects the shape of the sea ice albedo curve.

  9. Effects of Stochastic Traffic Flow Model on Expected System Performance

    DTIC Science & Technology

    2012-12-01

    NSWC-PCD has made considerable improvements to their pedestrian flow modeling. In addition to the linear paths, the 2011 version now includes...using stochastic paths. 2.2 Linear Paths vs. Stochastic Paths 2.2.1 Linear Paths and Direct Maximum Pd Calculation Modeling pedestrian traffic flow...as a stochastic process begins with the linear path model. Let the detection area be R x C voxels. This creates C^2 total linear paths, path(Cs

  10. Stochastic control and the second law of thermodynamics

    NASA Technical Reports Server (NTRS)

    Brockett, R. W.; Willems, J. C.

    1979-01-01

    The second law of thermodynamics is studied from the point of view of stochastic control theory. We find that the feedback control laws which are of interest are those which depend only on average values, and not on sample path behavior. We are led to a criterion which, when satisfied, permits one to assign a temperature to a stochastic system in such a way as to have Carnot cycles be the optimal trajectories of optimal control problems. Entropy is also defined and we are able to prove an equipartition of energy theorem using this definition of temperature. Our formulation allows one to treat irreversibility in a quite natural and completely precise way.

  11. A simple approximation of moments of the quasi-equilibrium distribution of an extended stochastic theta-logistic model with non-integer powers.

    PubMed

    Bhowmick, Amiya Ranjan; Bandyopadhyay, Subhadip; Rana, Sourav; Bhattacharya, Sabyasachi

    2016-01-01

    The stochastic versions of the logistic and extended logistic growth models are applied successfully to explain many real-life population dynamics and share a central body of literature in stochastic modeling of ecological systems. To understand the randomness in the population dynamics of the underlying processes completely, it is important to have a clear idea about the quasi-equilibrium distribution and its moments. Bartlett et al. (1960) made a pioneering attempt at estimating the moments of the quasi-equilibrium distribution of the stochastic logistic model. Matis and Kiffe (1996) obtained a set of more accurate and elegant approximations for the mean, variance and skewness of the quasi-equilibrium distribution of the same model using the cumulant truncation method. The method was extended to the stochastic power-law logistic family by the same and several other authors (Nasell, 2003; Singh and Hespanha, 2007). Cumulant truncation and some alternative methods (e.g., saddle-point approximation and the derivative-matching approach) can be applied if the powers involved in the extended logistic setup are integers, although plenty of evidence is available for non-integer powers in many practical situations (Sibly et al., 2005). In this paper, we develop a set of new approximations for the mean, variance and skewness of the quasi-equilibrium distribution under a more general family of growth curves, applicable to both integer and non-integer powers. The deterministic counterpart of this family of models captures both monotonic and non-monotonic behavior of the per capita growth rate, of which the theta-logistic is a special case. The approximations accurately estimate the first three moments of the quasi-equilibrium distribution. The proposed method is illustrated with simulated data and real data from the global population dynamics database. Copyright © 2015 Elsevier Inc. All rights reserved.
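The kind of quasi-equilibrium moments these approximations target can be cross-checked by brute-force simulation of a stochastic theta-logistic path with a non-integer power. The SDE form, the Euler-Maruyama discretization, and all parameter values below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def theta_logistic_moments(r=1.0, K=100.0, theta=0.5, sigma=0.05,
                           dt=0.01, burn=20_000, steps=200_000):
    """Euler-Maruyama path of dN = r*N*(1-(N/K)**theta)*dt + sigma*N*dW;
    post-burn-in samples approximate the quasi-equilibrium distribution.
    theta = 0.5 exercises the non-integer-power case."""
    n = K
    noise = rng.standard_normal(burn + steps)
    sqdt = np.sqrt(dt)
    samples = np.empty(steps)
    for i in range(burn + steps):
        drift = r * n * (1.0 - (n / K) ** theta)
        n = max(n + drift * dt + sigma * n * sqdt * noise[i], 1e-6)
        if i >= burn:
            samples[i - burn] = n
    m, v = samples.mean(), samples.var()
    skew = ((samples - m) ** 3).mean() / samples.std() ** 3
    return m, v, skew

m, v, skew = theta_logistic_moments()
```

For these parameters, linearizing around K predicts a quasi-equilibrium variance near sigma^2 * K^2 / (2 * r * theta) = 25, which the sampled moments should roughly reproduce.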

  12. Occurrence analysis of daily rainfalls by using non-homogeneous Poissonian processes

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Ferrari, E.; de Luca, D. L.

    2009-09-01

    In recent years several temporally homogeneous stochastic models have been applied to describe the rainfall process. In particular, stochastic analysis of daily rainfall time series may help explain the statistical features of the temporal variability of the phenomenon. Due to the evident periodicity of the physical process, these models should be applied only over short temporal intervals in which the occurrences and intensities of rainfall can be considered reliably homogeneous. To this aim, occurrences of daily rainfalls can be considered as a stationary stochastic process in monthly periods. In this context point process models are widely used for at-site analysis of daily rainfall occurrence; they are continuous time series models, able to capture the intermittent nature of rainfall and to simulate interstorm periods. With a different approach, the periodic features of daily rainfalls can be interpreted by using a temporally non-homogeneous stochastic model characterized by parameters expressed as continuous functions in time. In this case, great attention has to be paid to the parsimony of the models, as regards the number of parameters and the bias introduced into the generation of synthetic series, and to the influence of threshold values in extracting the peak storm database from recorded daily rainfall heights. In this work, a stochastic model based on a non-homogeneous Poisson process, characterized by a time-dependent intensity of rainfall occurrence, is employed to explain seasonal effects of daily rainfalls exceeding prefixed threshold values. In particular, the variation of the rainfall occurrence intensity λ(t) is modelled by using Fourier series analysis, in which the non-homogeneous process is transformed into a homogeneous unit-rate one through a proper transformation of the time domain, and the choice of the minimum number of harmonics is evaluated by applying available statistical tests. 
The procedure is applied to a dataset of rain gauges located in different geographical zones of the Mediterranean area. Time series have been selected on the basis of the availability of at least 50 years in the time period 1921-1985, chosen as the calibration period, and of all the years of observation in the subsequent validation period 1986-2005, in which the variability of the daily rainfall occurrence process is under examination. Firstly, for each time series and for each fixed threshold value, parameter estimation of the non-homogeneous Poisson model is carried out for the calibration period. As a second step, in order to test the hypothesis that the daily rainfall occurrence process preserves the same behaviour in more recent time periods, the intensity distribution evaluated for the calibration period is also adopted for the validation period. Starting from this and using a Monte Carlo approach, 1000 synthetic generations of daily rainfall occurrences, of length equal to the validation period, have been carried out, and for each simulation the sample λ(t) has been evaluated. This procedure is adopted because of the complexity of determining analytical statistical confidence limits for the sample intensity λ(t). Finally, the sample intensity, the theoretical function of the calibration period and the 95% statistical band evaluated by the Monte Carlo approach are compared; in addition, for each threshold value, the mean square error (MSE) between the theoretical λ(t) and the sample one of the recorded data is considered, together with its corresponding 95% one-tailed statistical band, estimated from the MSE values between the sample λ(t) of each synthetic series and the theoretical one. The results obtained may be very useful in the context of the identification and calibration of stochastic rainfall models based on historical precipitation data. 
Further applications of the non-homogeneous Poisson model will concern the joint analyses of the storm occurrence process with the rainfall height marks, interpreted by using a temporally homogeneous model in proper sub-year intervals.
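The non-homogeneous Poisson occurrence model with a Fourier-series intensity can be simulated by standard thinning (Lewis-Shedler), which may clarify the setup. The one-harmonic intensity and all numeric values below are hypothetical, not fitted to the rain-gauge data.

```python
import numpy as np

rng = np.random.default_rng(7)

def nhpp_thinning(lam, lam_max, t_end):
    """Lewis-Shedler thinning: simulate a homogeneous Poisson process of
    rate lam_max and keep each point t with probability lam(t)/lam_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_end:
            return np.asarray(times)
        if rng.uniform() < lam(t) / lam_max:
            times.append(t)

# One-harmonic Fourier intensity (events/day), hypothetical parameters;
# lam(t) <= 0.3 everywhere, so lam_max = 0.3 makes the thinning valid
lam = lambda t: 0.2 + 0.1 * np.cos(2 * np.pi * t / 365.0)
events = nhpp_thinning(lam, 0.3, 365.0 * 50)     # 50 years of occurrences
mean_rate = events.size / (365.0 * 50)           # near 0.2, the mean intensity
```

The same machinery, run with the fitted λ(t) of the calibration period, is what generates the synthetic occurrence series used for the Monte Carlo confidence bands.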

  13. Inter-species competition-facilitation in stochastic riparian vegetation dynamics.

    PubMed

    Tealdi, Stefano; Camporeale, Carlo; Ridolfi, Luca

    2013-02-07

    Riparian vegetation is a highly dynamic community that lives on river banks and which depends to a great extent on the fluvial hydrology. The stochasticity of the discharge and erosion/deposition processes in fact play a key role in determining the distribution of vegetation along a riparian transect. These abiotic processes interact with biotic competition/facilitation mechanisms, such as plant competition for light, water, and nutrients. In this work, we focus on the dynamics of plants characterized by three components: (1) stochastic forcing due to river discharges, (2) competition for resources, and (3) inter-species facilitation due to the interplay between vegetation and fluid dynamics processes. A minimalist stochastic bio-hydrological model is proposed for the dynamics of the biomass of two vegetation species: one species is assumed dominant and slow-growing, the other is subdominant, but fast-growing. The stochastic model is solved analytically and the probability density function of the plant biomasses is obtained as a function of both the hydrologic and biologic parameters. The impact of the competition/facilitation processes on the distribution of vegetation species along the riparian transect is investigated and remarkable effects are observed. Finally, a good qualitative agreement is found between the model results and field data. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Bayesian Computation for Log-Gaussian Cox Processes: A Comparative Analysis of Methods

    PubMed Central

    Teng, Ming; Nathoo, Farouk S.; Johnson, Timothy D.

    2017-01-01

    The Log-Gaussian Cox Process is a commonly used model for the analysis of spatial point pattern data. Fitting this model is difficult because of its doubly-stochastic property, i.e., it is a hierarchical combination of a Poisson process at the first level and a Gaussian process at the second level. Various methods have been proposed to estimate such a process, including traditional likelihood-based approaches as well as Bayesian methods. We focus here on Bayesian methods and several approaches that have been considered for model fitting within this framework, including Hamiltonian Monte Carlo, the integrated nested Laplace approximation, and variational Bayes. We consider these approaches and make comparisons with respect to statistical and computational efficiency. These comparisons are made through several simulation studies as well as through two applications, the first examining ecological data and the second involving neuroimaging data. PMID:29200537
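The doubly-stochastic structure that makes the LGCP hard to fit is easy to show in simulation: draw a latent Gaussian process for the log-intensity, then draw Poisson counts given that intensity. The discretized 1D grid, exponential covariance, and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_lgcp_1d(n_cells=200, length=10.0, mu=1.0, var=0.5, scale=1.0):
    """Simulate an LGCP on a grid: draw the latent Gaussian process for
    the log-intensity, then Poisson counts per cell given that intensity."""
    x = np.linspace(0.0, length, n_cells)
    cov = var * np.exp(-np.abs(x[:, None] - x[None, :]) / scale)
    log_lam = rng.multivariate_normal(mu * np.ones(n_cells), cov)
    cell_area = length / n_cells
    counts = rng.poisson(np.exp(log_lam) * cell_area)
    return counts, log_lam

counts, log_lam = simulate_lgcp_1d()
```

Inference must recover the latent `log_lam` from `counts` alone, which is why the likelihood involves an intractable integral over the Gaussian process and motivates the HMC, INLA, and variational approximations compared in the paper.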

  15. Identification of Intensity Ratio Break Points from Photon Arrival Trajectories in Ratiometric Single Molecule Spectroscopy

    PubMed Central

    Bingemann, Dieter; Allen, Rachel M.

    2012-01-01

    We describe a model-free statistical method to analyze dual-channel photon arrival trajectories from single molecule spectroscopy and to identify break points in the intensity ratio. Photons are binned with a short bin size to calculate the logarithm of the intensity ratio for each bin. Stochastic photon counting noise leads to a near-normal distribution of this logarithm and the standard Student’s t-test is used to find statistically significant changes in this quantity. In stochastic simulations we determine the significance threshold for the t-test’s p-value at a given level of confidence. We test the method’s sensitivity and accuracy, finding that the analysis reliably locates break points with significant changes in the intensity ratio with little or no error in realistic trajectories with large numbers of small change points, while still identifying a large fraction of the frequent break points with small intensity changes. Based on these results we present an approach to estimate confidence intervals for the identified break point locations and recommend a bin size to choose for the analysis. The method proves powerful and reliable in the analysis of simulated and actual data of single molecule reorientation in a glassy matrix. PMID:22837704
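The binned t-test scan can be sketched as follows; the Welch statistic is implemented directly rather than taken from a statistics library, and the threshold, bin counts, and jump size are illustrative assumptions rather than the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(5)

def welch_t(a, b):
    """Welch two-sample t statistic (no stats library needed)."""
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

def find_break(log_ratio, threshold=5.0, margin=5):
    """Scan candidate break points in the binned log intensity ratio and
    return the split with the largest |t|, if it exceeds the threshold."""
    best, best_t = None, 0.0
    for i in range(margin, log_ratio.size - margin):
        t = abs(welch_t(log_ratio[:i], log_ratio[i:]))
        if t > best_t:
            best, best_t = i, t
    return best if best_t > threshold else None

# Synthetic binned log intensity ratio: near-normal noise, jump at bin 60
x = np.concatenate([rng.normal(0.0, 0.2, 60), rng.normal(0.6, 0.2, 60)])
bp = find_break(x)     # detected near bin 60
```

In the paper the significance threshold is calibrated by stochastic simulation rather than fixed by hand as here, and trajectories with multiple break points would be scanned recursively segment by segment.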

  16. Research in Stochastic Processes.

    DTIC Science & Technology

    1983-10-01

    increases. A more detailed investigation for the exceedances themselves (rather than just the cluster centers) was undertaken, together with J. Hüsler and...J. Hüsler and M.R. Leadbetter, Compound Poisson limit theorems for high level exceedances by stationary sequences, Center for Stochastic Processes...stability by a random linear operator. C.D. Hardin, General (asymmetric) stable variables and processes. T. Hsing, J. Hüsler and M.R. Leadbetter, Compound

  17. Simulating and quantifying legacy topographic data uncertainty: an initial step to advancing topographic change analyses

    NASA Astrophysics Data System (ADS)

    Wasklewicz, Thad; Zhu, Zhen; Gares, Paul

    2017-12-01

    Rapid technological advances, sustained funding, and a greater recognition of the value of topographic data have helped develop an increasing archive of topographic data sources. Advances in basic and applied research related to Earth surface changes require researchers to integrate recent high-resolution topography (HRT) data with the legacy datasets. Several technical challenges and data uncertainty issues persist to date when integrating legacy datasets with more recent HRT data. The disparate data sources required to extend the topographic record back in time are often stored in formats that are not readily compatible with more recent HRT data. Legacy data may also contain unknown or unreported error that makes accounting for data uncertainty difficult. There are also cases of known deficiencies in legacy datasets, which can significantly bias results. Finally, scientists are faced with the daunting challenge of definitively deriving the extent to which a landform or landscape has or will continue to change in response to natural and/or anthropogenic processes. Here, we examine the question: how do we evaluate and portray data uncertainty from the varied topographic legacy sources and combine this uncertainty with current spatial data collection techniques to detect meaningful topographic changes? We view topographic uncertainty as a stochastic process that takes into consideration spatial and temporal variations from a numerical simulation and physical modeling experiment. The numerical simulation incorporates numerous topographic data sources typically found across the range from legacy data to present-day high-resolution data, while the physical model focuses on more recent HRT data acquisition techniques. Elevation uncertainties observed from anchor points in the digital terrain models are modeled using "states" in a stochastic estimator. 
Stochastic estimators trace the temporal evolution of the uncertainties and are natively capable of incorporating sensor measurements observed at various times in history. The geometric relationship between the anchor point and the sensor measurement can be approximated via spatial correlation even when a sensor does not directly observe an anchor point. Findings from a numerical simulation indicate the estimated error coincides with the actual error for certain sensors (kinematic GNSS, ALS, TLS, and SfM-MVS). Data from 2D imagery and static GNSS did not perform as well at the time the sensor is integrated into the estimator, largely as a result of the low density of data added from these sources. The estimator provides a history of DEM estimation as well as the uncertainties and cross correlations observed on anchor points. Our work provides preliminary evidence that our approach is valid for integrating legacy data with HRT and warrants further exploration and field validation.
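For a single anchor point, the "states in a stochastic estimator" idea reduces to a scalar Kalman filter in which each survey shrinks the elevation uncertainty according to its sensor error variance. The survey values and variances below are hypothetical, chosen only to mimic a coarse legacy DEM followed by precise modern sensors.

```python
def kalman_elevation(obs, x0=100.0, p0=25.0, q=0.01):
    """Scalar Kalman filter for one anchor-point elevation: each survey
    (value z, error variance r) updates the state and its uncertainty."""
    x, p = x0, p0
    history = []
    for z, r in obs:
        p += q                       # uncertainty growth between surveys
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # updated elevation estimate
        p *= (1.0 - k)               # reduced uncertainty
        history.append((x, p))
    return history

# Hypothetical surveys: coarse legacy DEM, then TLS, then SfM-MVS
surveys = [(102.3, 1.0), (102.9, 0.01), (102.8, 0.04)]
history = kalman_elevation(surveys)
x_final, p_final = history[-1]       # precise sensors shrink the variance
```

The recorded (x, p) history is the per-epoch elevation estimate and uncertainty; precise later sensors (small r) dominate the final state, mirroring how HRT surveys tighten the uncertainty inherited from legacy data.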

  18. A novel Bayesian approach to acoustic emission data analysis.

    PubMed

    Agletdinov, E; Pomponi, E; Merson, D; Vinogradov, A

    2016-12-01

    The acoustic emission (AE) technique is a popular tool for materials characterization and non-destructive testing. Originating from the stochastic motion of defects in solids, AE is a random process by nature. A challenging problem arises whenever an attempt is made to identify specific points corresponding to changes in the trends of the fluctuating AE time series. A general Bayesian framework is proposed for the analysis of AE time series, aimed at automatically finding the breakpoints signaling a crossover in the dynamics of underlying AE sources. Copyright © 2016 Elsevier B.V. All rights reserved.
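
    A toy version of such breakpoint detection (an illustrative sketch, not the authors' model) puts a flat prior on the location of a single mean shift in a Gaussian series and computes the posterior over that location:

```python
import math, random

def breakpoint_posterior(x, sigma=1.0):
    """Posterior over the index k at which the mean of a Gaussian series
    shifts (flat prior on k, known noise level sigma)."""
    n = len(x)
    logpost = []
    for k in range(1, n):                    # break between x[k-1] and x[k]
        m1 = sum(x[:k]) / k
        m2 = sum(x[k:]) / (n - k)
        ll = (-sum((v - m1) ** 2 for v in x[:k])
              - sum((v - m2) ** 2 for v in x[k:])) / (2.0 * sigma ** 2)
        logpost.append(ll)
    top = max(logpost)
    w = [math.exp(v - top) for v in logpost]
    s = sum(w)
    return [v / s for v in w]                # posterior for k = 1 .. n-1

random.seed(1)
series = [random.gauss(0.0, 1.0) for _ in range(100)] + \
         [random.gauss(2.0, 1.0) for _ in range(100)]
post = breakpoint_posterior(series)
k_hat = post.index(max(post)) + 1            # MAP breakpoint location
print(k_hat)
```

    With a two-sigma mean shift at index 100, the maximum a posteriori breakpoint lands within a few samples of the true crossover.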

  19. Double simple-harmonic-oscillator formulation of the thermal equilibrium of a fluid interacting with a coherent source of phonons

    NASA Technical Reports Server (NTRS)

    Defacio, B.; Vannevel, Alan; Brander, O.

    1993-01-01

    A formulation is given for a collection of phonons (sound) in a fluid at non-zero temperature which uses the simple harmonic oscillator twice: once to give a stochastic thermal 'noise' process and once to generate a coherent Glauber state of phonons. Simple thermodynamic observables are calculated and the acoustic two-point function, or 'contrast', is presented. The role of 'coherence' in an equilibrium system is clarified by these results, and the simple harmonic oscillator is a key structure in both the formulation and the calculations.

  20. Monte Carlo point process estimation of electromyographic envelopes from motor cortical spikes for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.

    2015-12-01

    Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than conventional kinematic variables (such as position and velocity) and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are a natural solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not hold in practice. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat's motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on both the baseline and extreme high peaks, as our method better preserves the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (normalized mean squared error, NMSE) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding of EMG from a point process improves the NMSE by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution; spike-timing methodologies with appropriately estimated tuning curves are therefore needed for better EMG decoding in motor BMIs.
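
    The NMSE figure of merit used above is easy to reproduce on synthetic data. The sketch below (illustrative only, not the MCPP decoder) simulates spikes from a time-varying rate via a Bernoulli approximation of an inhomogeneous Poisson point process, reconstructs the rate with a boxcar smoother, and scores it with NMSE:

```python
import math, random

def nmse(est, ref):
    """Normalized mean squared error between two equal-length signals."""
    mean_ref = sum(ref) / len(ref)
    num = sum((a - b) ** 2 for a, b in zip(est, ref))
    den = sum((b - mean_ref) ** 2 for b in ref)
    return num / den

random.seed(7)
dt = 0.001                                   # 1 ms bins
rate = [20.0 + 15.0 * math.sin(2 * math.pi * 0.5 * i * dt)
        for i in range(4000)]                # true rate, spikes/s

# Bernoulli approximation of an inhomogeneous Poisson spike train.
spikes = [1 if random.random() < r * dt else 0 for r in rate]

# Naive decoder: trailing boxcar-smoothed spike counts as a rate estimate.
win = 400                                    # 400 ms window
est = []
for i in range(len(spikes)):
    lo = max(0, i - win + 1)
    est.append(sum(spikes[lo:i + 1]) / (dt * (i + 1 - lo)))

err = nmse(est, rate)
print(round(err, 3))
```

    The large NMSE of this naive smoother illustrates why point-process decoders that model the tuning explicitly can do much better.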

  1. Thermodynamics: A Stirling effort

    NASA Astrophysics Data System (ADS)

    Horowitz, Jordan M.; Parrondo, Juan M. R.

    2012-02-01

    The realization of a single-particle Stirling engine pushes thermodynamics into stochastic territory where fluctuations dominate, and points towards a better understanding of energy transduction at the microscale.

  2. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1 / f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1 / f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated over simulated signals and over real signals with economical and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.

  3. Modeling stochasticity and robustness in gene regulatory networks.

    PubMed

    Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis

    2009-06-15

    Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to over-representation of noise in GRNs and hence to a lack of correspondence with biological observations. In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
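
    The SIN/SIF distinction can be sketched on a toy network (a loose reading of the two models, with made-up rules): SIN may flip any gene's next state, while SIF only lets an update misfire when the rule would actually change the gene's value, so it injects less noise per step:

```python
import random

# Toy 3-gene Boolean network: each rule maps the current state to the
# next value of one gene.
rules = [
    lambda s: s[1] and not s[2],             # gene 0
    lambda s: s[0] or s[2],                  # gene 1
    lambda s: not s[0],                      # gene 2
]

def step_sin(state, p):
    """Stochasticity-in-nodes: every gene's new value may flip."""
    nxt = [r(state) for r in rules]
    return [(not v) if random.random() < p else v for v in nxt]

def step_sif(state, p):
    """Stochasticity-in-functions (simplified reading): only genes whose
    update would change their value this step are allowed to misfire."""
    nxt = []
    for g, r in enumerate(rules):
        v = r(state)
        if v != state[g] and random.random() < p:
            v = not v                        # the switching event fails
        nxt.append(v)
    return nxt

random.seed(0)
s = [True, False, False]
det = [r(s) for r in rules]                  # noise-free next state
flips_sin = sum(step_sin(s, 0.5) != det for _ in range(2000))
flips_sif = sum(step_sif(s, 0.5) != det for _ in range(2000))
print(flips_sin, flips_sif)
```

    Counting one-step deviations from the deterministic update shows SIF perturbing fewer trajectories than SIN at the same noise probability, which is the intuition behind its better biological correspondence.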

  4. Numerical simulations in stochastic mechanics

    NASA Astrophysics Data System (ADS)

    McClendon, Marvin; Rabitz, Herschel

    1988-05-01

    The stochastic differential equation of Nelson's stochastic mechanics is integrated numerically for several simple quantum systems. The calculations are performed with use of Helfand and Greenside's method and pseudorandom numbers. The resulting trajectories are analyzed both individually and collectively to yield insight into momentum, uncertainty principles, interference, tunneling, quantum chaos, and common models of diatomic molecules from the stochastic quantization point of view. In addition to confirming Shucker's momentum theorem, these simulations illustrate, within the context of stochastic mechanics, the position-momentum and time-energy uncertainty relations, the two-slit diffraction pattern, exponential decay of an unstable system, and the greater degree of anticorrelation in a valence-bond model as compared with a molecular-orbital model of H2. The attempt to find exponential divergence of initially nearby trajectories, potentially useful as a criterion for quantum chaos, in a periodically forced oscillator is inconclusive. A way of computing excited energies from the ground-state motion is presented. In all of these studies the use of particle trajectories allows a more insightful interpretation of physical phenomena than is possible within traditional wave mechanics.
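
    The ground-state harmonic oscillator gives the simplest check of such simulations. In units hbar = m = omega = 1, the Nelson forward SDE has drift -x and diffusion coefficient hbar/(2m) = 1/2, and its stationary law is the Born density |psi_0|^2 with variance 1/2. A plain Euler-Maruyama integration (a sketch in the same spirit, not the Helfand-Greenside scheme used by the authors) reproduces this:

```python
import math, random

# Euler-Maruyama integration of the Nelson diffusion for the harmonic-
# oscillator ground state: dx = -x dt + dW with Var(dW) = dt, whose
# stationary variance should equal hbar/(2 m omega) = 1/2.
random.seed(4)
dt, n_steps, n_paths = 0.01, 2000, 400
sqdt = math.sqrt(dt)
xs = [0.0] * n_paths
for _ in range(n_steps):
    xs = [x - x * dt + sqdt * random.gauss(0.0, 1.0) for x in xs]
var = sum(x * x for x in xs) / n_paths
print(round(var, 3))
```

    After many relaxation times the ensemble variance settles near 1/2, i.e. the trajectories sample the quantum-mechanical position distribution.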

  5. Simulation-Based Methodologies for Global Optimization and Planning

    DTIC Science & Technology

    2013-10-11

    GESK ) Stochastic kriging (SK) was introduced by Ankenman, Nelson, and Staum [1] to handle the stochastic simulation setting, where the noise in the...is used for the kriging. Four experiments will be used to illustrate some characteristics of SK, SKG, and GESK, with respect to the choice of...samples at each point. Because GESK is able to explore the design space more via extrapolation, it does a better job of capturing the fluctuations of the

  6. On the Derivation of the Schroedinger Equation from Stochastic Mechanics.

    NASA Astrophysics Data System (ADS)

    Wallstrom, Timothy Clarke

    The thesis is divided into four largely independent chapters. The first three chapters treat mathematical problems in the theory of stochastic mechanics. The fourth chapter deals with stochastic mechanics as a physical theory and shows that the Schrödinger equation cannot be derived from existing formulations of stochastic mechanics, as had previously been believed. Since the drift coefficients of stochastic mechanical diffusions are undefined on the nodes, or zeros of the density, an important problem has been to show that the sample paths stay away from the nodes. In Chapter 1, it is shown that for a smooth wavefunction, the closest approach to the nodes can be bounded solely in terms of the time-integrated energy. The ergodic properties of stochastic mechanical diffusions are greatly complicated by the tendency of the particles to avoid the nodes. In Chapter 2, it is shown that a sufficient condition for a stationary process to be ergodic is that there exist positive t and c such that for all x and y, p^t(x,y) > c p(y), and this result is applied to show that the set of spin-1/2 diffusions is uniformly ergodic. In stochastic mechanics, the Bopp-Haag-Dankel diffusions on R^3 x SO(3) are used to represent particles with spin. Nelson has conjectured that in the limit as the particle's moment of inertia I goes to zero, the projections of the Bopp-Haag-Dankel diffusions onto R^3 converge to a Markovian limit process. This conjecture is proved for the spin-1/2 case in Chapter 3, and the limit process is identified as the diffusion naturally associated with the solution to the regular Pauli equation. In Chapter 4 it is shown that the general solution of the stochastic Newton equation does not correspond to a solution of the Schrödinger equation, and that there are solutions to the Schrödinger equation which do not satisfy the Guerra-Morato Lagrangian variational principle. These observations are shown to apply equally to other existing formulations of stochastic mechanics, and it is argued that these difficulties represent fundamental inadequacies in the physical foundation of stochastic mechanics.

  7. Statistical signatures of a targeted search by bacteria

    NASA Astrophysics Data System (ADS)

    Jashnsaz, Hossein; Anderson, Gregory G.; Pressé, Steve

    2017-12-01

    Chemoattractant gradients are rarely well controlled in nature, and recent attention has turned to bacterial chemotaxis toward typical bacterial food sources such as food patches or even bacterial prey. In environments with localized food sources reminiscent of a bacterium's natural habitat, striking phenomena—such as the volcano effect or banding—have been predicted or expected to emerge from chemotactic models. However, in practice, it is difficult to distinguish targeted searches from an untargeted search strategy on the basis of limited bacterial trajectory data. Here we use a theoretical model to identify statistical signatures of a targeted search toward point food sources, such as prey. Our model is constructed on the basis that bacteria use temporal comparisons to bias their random walk, exhibit finite memory, and are subject to random (Brownian) motion as well as signaling noise. The advantage of using a stochastic model-based approach is that a stochastic model may be parametrized from individual stochastic bacterial trajectories but may then be used to generate a very large number of simulated trajectories to explore the average behaviors of stochastic search strategies. For example, our model predicts that a bacterium's diffusion coefficient increases as it approaches the point source and that, in the presence of multiple sources, bacteria may take substantially longer to locate their first source, giving the impression of an untargeted search strategy.
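
    A one-dimensional caricature of this kind of model (illustrative parameters, no signaling noise) shows the basic statistical signature: a walker that suppresses tumbling when a temporal comparison says the concentration is rising ends up much closer to a point source than an unbiased walker:

```python
import math, random

def run_and_tumble(biased, steps=3000, seed=0):
    """1D run-and-tumble walker near a point source at the origin whose
    concentration decays exponentially with distance. A biased walker
    tumbles less often when the concentration measured now exceeds the
    one a step ago (temporal comparison); both walkers also feel
    Brownian jitter."""
    rng = random.Random(seed)
    x, v = 30.0, 1.0                          # start off-source, moving away
    c_prev = math.exp(-abs(x) / 20.0)
    for _ in range(steps):
        x += v * 0.05 + rng.gauss(0.0, 0.05)  # run + Brownian noise
        c = math.exp(-abs(x) / 20.0)
        p_tumble = 0.02 if (biased and c > c_prev) else 0.1
        if rng.random() < p_tumble:
            v = -v                            # tumble: reverse direction
        c_prev = c
    return abs(x)                             # final distance to source

def mean_final_distance(biased, trials=10):
    return sum(run_and_tumble(biased, seed=s) for s in range(trials)) / trials

d_biased = mean_final_distance(True)
d_random = mean_final_distance(False)
print(round(d_biased, 2), round(d_random, 2))
```

    Averaging over trials, the temporally biased walker localizes near the source while the unbiased walker merely diffuses around its starting point, which is the kind of trajectory statistic the paper proposes to exploit.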

  8. Stochastic integrated assessment of climate tipping points indicates the need for strict climate policy

    NASA Astrophysics Data System (ADS)

    Lontzek, Thomas S.; Cai, Yongyang; Judd, Kenneth L.; Lenton, Timothy M.

    2015-05-01

    Perhaps the most 'dangerous' aspect of future climate change is the possibility that human activities will push parts of the climate system past tipping points, leading to irreversible impacts. The likelihood of such large-scale singular events is expected to increase with global warming, but is fundamentally uncertain. A key question is: how should the uncertainty surrounding tipping events affect climate policy? We address this using a stochastic integrated assessment model, based on the widely used deterministic DICE model. The temperature-dependent likelihood of tipping is calibrated using expert opinions, which we find to be internally consistent. The irreversible impacts of tipping events are assumed to accumulate steadily over time (rather than instantaneously), consistent with scientific understanding. Even with conservative assumptions about the rate and impacts of a stochastic tipping event, today's optimal carbon tax is increased by ~50%. For a plausibly rapid, high-impact tipping event, today's optimal carbon tax is increased by >200%. The additional carbon tax to delay climate tipping grows at only about half the rate of the baseline carbon tax. This implies that the effective discount rate for the costs of stochastic climate tipping is much lower than the discount rate for deterministic climate damages. Our results support recent suggestions that the costs of carbon emission used to inform policy are being underestimated, and that uncertain future climate damages should be discounted at a low rate.

  9. Universal Temporal Profile of Replication Origin Activation in Eukaryotes

    NASA Astrophysics Data System (ADS)

    Goldar, Arach

    2011-03-01

    The complete and faithful transmission of the eukaryotic genome to daughter cells involves the timely duplication of the mother cell's DNA. DNA replication starts at multiple chromosomal positions called replication origins. From each activated replication origin, two replication forks progress in opposite directions and duplicate the mother cell's DNA. While it is widely accepted that in eukaryotic organisms replication origins are activated in a stochastic manner, little is known about the sources of the observed stochasticity, which is often attributed to cell-to-cell variability in entering S phase. We extract from a growing Saccharomyces cerevisiae population the average rate of origin activation in a single cell by combining single-molecule measurements and a numerical deconvolution technique. We show that the temporal profile of the rate of origin activation in a single cell is similar to the one extracted from a replicating cell population. Taking this observation into account, we exclude population variability as the origin of the observed stochasticity in origin activation. We confirm that the rate of origin activation increases in the early stage of S phase and decreases in the latter stage. The population-average activation rate extracted from single-molecule analysis is in perfect accordance with the activation rate extracted from published micro-array data, thereby confirming the homogeneity and genome-scale invariance of the dynamics of the replication process. All these observations point toward a possible role of the replication fork in controlling the rate of origin activation.

  10. A Constructive Mean-Field Analysis of Multi-Population Neural Networks with Random Synaptic Weights and Stochastic Inputs

    PubMed Central

    Faugeras, Olivier; Touboul, Jonathan; Cessac, Bruno

    2008-01-01

    We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales. PMID:19255631

  11. Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Chang, Yuwen

    2016-12-01

    Considering a stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion. The underlying processes follow jump-diffusions, driven by a Wiener process and a Poisson process. The corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is then presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.
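
    The jump-diffusion driver itself (not the paper's HJB solution) is easy to simulate: an Euler scheme combining a Wiener increment with a Poisson jump indicator. The parameters below are arbitrary, chosen so the drift and the expected jump loss cancel and the terminal mean stays at the starting value:

```python
import math, random

def jump_diffusion_path(s0, mu, sigma, lam, jump, T, n, rng):
    """Euler scheme for dS/S = mu dt + sigma dW + jump dN, with dW a
    Wiener increment and dN a Poisson increment of intensity lam."""
    dt = T / n
    s = s0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))           # Brownian increment
        dn = 1 if rng.random() < lam * dt else 0     # jump indicator
        s *= 1.0 + mu * dt + sigma * dw + jump * dn
    return s

rng = random.Random(11)
# mu = 0.05 and lam * jump = 0.5 * (-0.1) cancel, so E[S_T] = s0 = 1.
paths = [jump_diffusion_path(1.0, 0.05, 0.2, 0.5, -0.1, 1.0, 252, rng)
         for _ in range(4000)]
mean_s = sum(paths) / len(paths)
print(round(mean_s, 3))
```

    Monte Carlo paths like these are the raw material on which mean-variance or safety-first objectives would then be evaluated.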

  12. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now complete. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random-walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and passed to GEODYNII as one of its standard data types.
A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
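
    A first-order Gauss-Markov process like the one used here for the solar radiation pressure coefficient is a simple discrete recursion; the sketch below (hypothetical numbers, not GEODYNII code) checks that it reproduces the prescribed stationary variance:

```python
import math, random

def gauss_markov(sigma, tau, dt, n, rng):
    """First-order Gauss-Markov sequence: exponentially correlated noise
    with stationary standard deviation sigma and correlation time tau.
    x_{k+1} = exp(-dt/tau) * x_k + w_k, with w_k sized so the process
    stays stationary."""
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi * phi)   # driving-noise std
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + q * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

rng = random.Random(3)
# e.g. a unitless scale-coefficient error with 15-minute correlation time,
# sampled every 30 s (all values hypothetical).
xs = gauss_markov(sigma=2.0, tau=900.0, dt=30.0, n=50000, rng=rng)
var = sum(v * v for v in xs) / len(xs)
print(round(var, 2))
```

    The empirical variance stays near sigma^2 = 4 regardless of the step size, which is what makes the model convenient inside a sequential filter.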

  13. Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.; Hinnov, Linda A.

    2010-08-01

    Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 10^4-10^6 years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by “stochastic events,” which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.

  14. Noise shaping in populations of coupled model neurons.

    PubMed

    Mar, D J; Chow, C C; Gerstner, W; Adams, R W; Collins, J J

    1999-08-31

    Biological information-processing systems, such as populations of sensory and motor neurons, may use correlations between the firings of individual elements to obtain lower noise levels and a systemwide performance improvement in the dynamic range or the signal-to-noise ratio. Here, we implement such correlations in networks of coupled integrate-and-fire neurons using inhibitory coupling and demonstrate that this can improve the system dynamic range and the signal-to-noise ratio in a population rate code. The improvement can surpass that expected for simple averaging of uncorrelated elements. A theory that predicts the resulting power spectrum is developed in terms of a stochastic point-process model in which the instantaneous population firing rate is modulated by the coupling between elements.

  15. Multitime correlation functions in nonclassical stochastic processes

    NASA Astrophysics Data System (ADS)

    Krumm, F.; Sperling, J.; Vogel, W.

    2016-06-01

    A general method is introduced for verifying multitime quantum correlations through the characteristic function of the time-dependent P functional that generalizes the Glauber-Sudarshan P function. Quantum correlation criteria are derived which identify quantum effects for an arbitrary number of points in time. The Magnus expansion is used to visualize the impact of the required time ordering, which becomes crucial in situations when the interaction problem is explicitly time dependent. We show that the latter affects the multi-time-characteristic function and, therefore, the temporal evolution of the nonclassicality. As an example, we apply our technique to an optical parametric process with a frequency mismatch. The resulting two-time-characteristic function yields full insight into the two-time quantum correlation properties of such a system.

  16. Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks

    PubMed Central

    Walpole, J.; Chappell, J.C.; Cluceru, J.G.; Mac Gabhann, F.; Bautch, V.L.; Peirce, S. M.

    2015-01-01

    Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods. PMID:26158406

  17. Agent-based model of angiogenesis simulates capillary sprout initiation in multicellular networks.

    PubMed

    Walpole, J; Chappell, J C; Cluceru, J G; Mac Gabhann, F; Bautch, V L; Peirce, S M

    2015-09-01

    Many biological processes are controlled by both deterministic and stochastic influences. However, efforts to model these systems often rely on either purely stochastic or purely rule-based methods. To better understand the balance between stochasticity and determinism in biological processes a computational approach that incorporates both influences may afford additional insight into underlying biological mechanisms that give rise to emergent system properties. We apply a combined approach to the simulation and study of angiogenesis, the growth of new blood vessels from existing networks. This complex multicellular process begins with selection of an initiating endothelial cell, or tip cell, which sprouts from the parent vessels in response to stimulation by exogenous cues. We have constructed an agent-based model of sprouting angiogenesis to evaluate endothelial cell sprout initiation frequency and location, and we have experimentally validated it using high-resolution time-lapse confocal microscopy. ABM simulations were then compared to a Monte Carlo model, revealing that purely stochastic simulations could not generate sprout locations as accurately as the rule-informed agent-based model. These findings support the use of rule-based approaches for modeling the complex mechanisms underlying sprouting angiogenesis over purely stochastic methods.

  18. Stochastic flow shop scheduling of overlapping jobs on tandem machines in application to optimizing the US Army's deliberate nuclear, biological, and chemical decontamination process, (final report). Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novikov, V.

    1991-05-01

    The U.S. Army's detailed equipment decontamination process is a stochastic flow shop which has N independent non-identical jobs (vehicles) which have overlapping processing times. This flow shop consists of up to six non-identical machines (stations). With the exception of one station, the processing times of the jobs are random variables. Based on an analysis of the processing times, the jobs for the 56 Army heavy division companies were scheduled according to the best shortest expected processing time - longest expected processing time (SEPT-LEPT) sequence. To assist in this scheduling the Gap Comparison Heuristic was developed to select the best SEPT-LEPT schedule. This schedule was then used in balancing the detailed equipment decon line in order to find the best possible site configuration subject to several constraints. The detailed troop decon line, in which all jobs are independent and identically distributed, was then balanced. Lastly, an NBC decon optimization computer program was developed using the scheduling and line balancing results. This program serves as a prototype module for the ANBACIS automated NBC decision support system.... Decontamination, Stochastic flow shop, Scheduling, Stochastic scheduling, Minimization of the makespan, SEPT-LEPT Sequences, Flow shop line balancing, ANBACIS.
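
    The makespan of a permutation flow shop, and a Monte Carlo estimate of its expectation under a SEPT sequence, can be sketched as follows (toy data and exponential processing times; not the Gap Comparison Heuristic):

```python
import random

def makespan(jobs):
    """Permutation flow-shop makespan: each job is a tuple of processing
    times, one per machine; jobs visit the machines in order."""
    n_mach = len(jobs[0])
    done = [0.0] * n_mach                     # completion time per machine
    for job in jobs:
        for m in range(n_mach):
            prev = done[m - 1] if m else 0.0  # job's finish on machine m-1
            done[m] = max(done[m], prev) + job[m]
    return done[-1]

def expected_sept_makespan(mean_times, trials=1000, seed=5):
    """Monte Carlo expected makespan when jobs with exponential processing
    times are sequenced by shortest expected total processing time first."""
    rng = random.Random(seed)
    order = sorted(mean_times, key=sum)       # SEPT sequence
    total = 0.0
    for _ in range(trials):
        jobs = [tuple(rng.expovariate(1.0 / m) for m in job) for job in order]
        total += makespan(jobs)
    return total / trials

ms = makespan([(2, 3), (4, 1)])               # deterministic check: 7
avg = expected_sept_makespan([(4, 2), (1, 1), (3, 3)])
print(ms, round(avg, 2))
```

    The same recursion extends directly to six stations, and comparing candidate SEPT-LEPT orderings by their Monte Carlo expected makespans is the basic operation a scheduling heuristic would repeat.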

  19. Unified picture of strong-coupling stochastic thermodynamics and time reversals

    NASA Astrophysics Data System (ADS)

    Aurell, Erik

    2018-04-01

    Strong-coupling statistical thermodynamics is formulated as the Hamiltonian dynamics of an observed system interacting with another unobserved system (a bath). It is shown that the entropy production functional of stochastic thermodynamics, defined as the log ratio of forward and backward system path probabilities, is in a one-to-one relation with the log ratios of the joint initial conditions of the system and the bath. A version of strong-coupling statistical thermodynamics where the system-bath interaction vanishes at the beginning and at the end of a process is, as is also weak-coupling stochastic thermodynamics, related to the bath initially in equilibrium by itself. The heat is then the change of bath energy over the process, and it is discussed when this heat is a functional of the system history alone. The version of strong-coupling statistical thermodynamics introduced by Seifert and Jarzynski is related to the bath initially in conditional equilibrium with respect to the system. This leads to heat as another functional of the system history which needs to be determined by thermodynamic integration. The log ratio of forward and backward system path probabilities in a stochastic process is finally related to log ratios of the initial conditions of a combined system and bath. It is shown that the entropy production formulas of stochastic processes under a general class of time reversals are given by the differences of bath energies in a larger underlying Hamiltonian system. The paper highlights the centrality of time reversal in stochastic thermodynamics, also in the case of strong coupling.

  20. A stochastic diffusion process for Lochner's generalized Dirichlet distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-10-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner’s generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed previously for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.

  1. Computational Cellular Dynamics Based on the Chemical Master Equation: A Challenge for Understanding Complexity

    PubMed Central

    Liang, Jie; Qian, Hong

    2010-01-01

    Modern molecular biology has always been a great source of inspiration for computational science. Half a century ago, the challenge from understanding macromolecular dynamics has led the way for computations to be part of the tool set to study molecular biology. Twenty-five years ago, the demand from genome science has inspired an entire generation of computer scientists with an interest in discrete mathematics to join the field that is now called bioinformatics. In this paper, we shall lay out a new mathematical theory for dynamics of biochemical reaction systems in a small volume (i.e., mesoscopic) in terms of a stochastic, discrete-state continuous-time formulation, called the chemical master equation (CME). Similar to the wavefunction in quantum mechanics, the dynamically changing probability landscape associated with the state space provides a fundamental characterization of the biochemical reaction system. The stochastic trajectories of the dynamics are best known through the simulations using the Gillespie algorithm. In contrast to the Metropolis algorithm, this Monte Carlo sampling technique does not follow a process with detailed balance. We shall show several examples of how CMEs are used to model cellular biochemical systems. We shall also illustrate the computational challenges involved: multiscale phenomena, the interplay between stochasticity and nonlinearity, and how macroscopic determinism arises from mesoscopic dynamics. We point out recent advances in computing solutions to the CME, including exact solution of the steady state landscape and stochastic differential equations that offer alternatives to the Gillespie algorithm. We argue that the CME is an ideal system from which one can learn to understand “complex behavior” and complexity theory, and from which important biological insight can be gained. PMID:24999297

  2. Computational Cellular Dynamics Based on the Chemical Master Equation: A Challenge for Understanding Complexity.

    PubMed

    Liang, Jie; Qian, Hong

    2010-01-01

    Modern molecular biology has always been a great source of inspiration for computational science. Half a century ago, the challenge from understanding macromolecular dynamics has led the way for computations to be part of the tool set to study molecular biology. Twenty-five years ago, the demand from genome science has inspired an entire generation of computer scientists with an interest in discrete mathematics to join the field that is now called bioinformatics. In this paper, we shall lay out a new mathematical theory for dynamics of biochemical reaction systems in a small volume (i.e., mesoscopic) in terms of a stochastic, discrete-state continuous-time formulation, called the chemical master equation (CME). Similar to the wavefunction in quantum mechanics, the dynamically changing probability landscape associated with the state space provides a fundamental characterization of the biochemical reaction system. The stochastic trajectories of the dynamics are best known through the simulations using the Gillespie algorithm. In contrast to the Metropolis algorithm, this Monte Carlo sampling technique does not follow a process with detailed balance. We shall show several examples of how CMEs are used to model cellular biochemical systems. We shall also illustrate the computational challenges involved: multiscale phenomena, the interplay between stochasticity and nonlinearity, and how macroscopic determinism arises from mesoscopic dynamics. We point out recent advances in computing solutions to the CME, including exact solution of the steady state landscape and stochastic differential equations that offer alternatives to the Gillespie algorithm. We argue that the CME is an ideal system from which one can learn to understand "complex behavior" and complexity theory, and from which important biological insight can be gained.
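    The Gillespie algorithm referenced above can be sketched in a few lines. The following is an illustrative birth-death toy model (our own example, not one from the paper): the total propensity determines an exponential waiting time, and the next reaction is chosen with probability proportional to its propensity.

    ```python
    import random

    def gillespie_birth_death(k_birth=1.0, k_death=0.1, x0=0, t_max=50.0, seed=1):
        """Minimal Gillespie SSA for a birth-death process:
        0 -> X with propensity k_birth, X -> 0 with propensity k_death * x."""
        rng = random.Random(seed)
        t, x = 0.0, x0
        times, states = [t], [x]
        while t < t_max:
            a_birth = k_birth
            a_death = k_death * x
            a_total = a_birth + a_death
            t += rng.expovariate(a_total)       # exponential waiting time
            if rng.random() * a_total < a_birth:
                x += 1                           # birth chosen
            else:
                x -= 1                           # death chosen (never fires at x = 0)
            times.append(t)
            states.append(x)
        return times, states
    ```

    Because the death propensity vanishes at x = 0, the population count never goes negative, one of the exactness properties the diffusion approximations discussed in later records can lose.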

  3. Stochastic hybrid systems for studying biochemical processes.

    PubMed

    Singh, Abhyudai; Hespanha, João P

    2010-11-13

    Many protein and mRNA species occur at low molecular counts within cells, and hence are subject to large stochastic fluctuations in copy numbers over time. Development of computationally tractable frameworks for modelling stochastic fluctuations in population counts is essential to understand how noise at the cellular level affects biological function and phenotype. We show that stochastic hybrid systems (SHSs) provide a convenient framework for modelling the time evolution of population counts of different chemical species involved in a set of biochemical reactions. We illustrate recently developed techniques that allow fast computations of the statistical moments of the population count, without having to run computationally expensive Monte Carlo simulations of the biochemical reactions. Finally, we review different examples from the literature that illustrate the benefits of using SHSs for modelling biochemical processes.

  4. Stochastic reaction-diffusion algorithms for macromolecular crowding

    NASA Astrophysics Data System (ADS)

    Sturrock, Marc

    2016-06-01

    Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
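    The qualitative effect of crowding can be illustrated without either of the algorithms studied in the paper. The sketch below is a hypothetical single-particle walk on a 2D periodic lattice (not the SSA or Spatiocyte implementations): moves into randomly placed fixed obstacles are rejected, and the mean squared displacement drops as the obstacle fraction grows.

    ```python
    import random

    def msd_with_obstacles(obstacle_frac=0.3, steps=200, walkers=2000, size=50, seed=4):
        """Mean squared displacement of simple random walkers on a 2D periodic
        lattice with fixed obstacles; without obstacles the expected MSD for a
        symmetric unit-step walk equals the number of steps."""
        rng = random.Random(seed)
        obstacles = {(rng.randrange(size), rng.randrange(size))
                     for _ in range(int(obstacle_frac * size * size))}
        total = 0.0
        for _ in range(walkers):
            x = y = 0                      # unbounded displacement coordinates
            for _ in range(steps):
                dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                # wrap onto the periodic lattice only to test for an obstacle
                if ((x + dx) % size, (y + dy) % size) not in obstacles:
                    x, y = x + dx, y + dy
            total += x * x + y * y
        return total / walkers
    ```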

  5. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite-memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite-order Markov models, we focus on the family of variable length Markov chain models, which includes the fixed-order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we establish the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q, and we conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty in this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure has been to choose a subset of the original sample which seems to best represent each language, the selection being made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology to estimate a model representing the main law for each language. Our findings agree with the linguistic conjecture about the rhythm of the languages included in our dataset.
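    A much-simplified, hypothetical version of the relative-entropy comparison can be sketched for fixed-order (order-1) Markov samples; the paper's actual procedure uses variable length Markov chains and a breakdown-point analysis, neither of which is reproduced here.

    ```python
    import math
    from collections import Counter

    def trans_probs(seq, alphabet):
        """Add-one-smoothed first-order transition probabilities."""
        counts = Counter(zip(seq, seq[1:]))
        probs = {}
        for a in alphabet:
            row_total = sum(counts[(a, b)] for b in alphabet) + len(alphabet)
            for b in alphabet:
                probs[(a, b)] = (counts[(a, b)] + 1) / row_total
        return probs

    def markov_divergence(seq_p, seq_q, alphabet=(0, 1)):
        """Plug-in relative-entropy-rate style distance between two samples:
        rows are weighted by how often each state occurs in seq_p. Samples
        from the same law give values near zero; different laws give more."""
        p, q = trans_probs(seq_p, alphabet), trans_probs(seq_q, alphabet)
        freq = Counter(seq_p)
        n = len(seq_p)
        return sum((freq[a] / n) * p[(a, b)] * math.log(p[(a, b)] / q[(a, b)])
                   for a in alphabet for b in alphabet)
    ```

    In the paper's setting, pairwise distances of this kind would be used to pick out the majority subset of samples sharing the law Q.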

  6. Valuation of Capabilities and System Architecture Options to Meet Affordability Requirement

    DTIC Science & Technology

    2014-04-30

    is an extension of the historic volatility and trend of the stock using Brownian motion. In finance, the Black-Scholes equation is used to value...the underlying asset whose value is modeled as a stochastic process. In finance, the underlying asset is a tradeable stock and the stochastic process

  7. On a Result for Finite Markov Chains

    ERIC Educational Resources Information Center

    Kulathinal, Sangita; Ghosh, Lagnojita

    2006-01-01

    In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…
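    The property in question can be checked computationally: in a chain with M states, a state is accessible from another iff it can be reached along positive-probability transitions, and then necessarily within M - 1 steps. A minimal breadth-first sketch (illustrative, not taken from the note):

    ```python
    def accessible(P, i, j):
        """Return True iff state j is reachable from state i in the finite
        Markov chain with transition matrix P (list of rows). BFS over the
        positive-probability graph needs at most M - 1 expansions."""
        M = len(P)
        frontier, seen = {i}, {i}
        for _ in range(M - 1):
            frontier = {b for a in frontier for b in range(M) if P[a][b] > 0} - seen
            if j in frontier:
                return True
            seen |= frontier
        return j in seen
    ```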

  8. Stochastic resonance effects reveal the neural mechanisms of transcranial magnetic stimulation

    PubMed Central

    Schwarzkopf, Dietrich Samuel; Silvanto, Juha; Rees, Geraint

    2011-01-01

    Transcranial magnetic stimulation (TMS) is a popular method for studying causal relationships between neural activity and behavior. However, its mode of action remains controversial, and so far there is no framework to explain its wide range of facilitatory and inhibitory behavioral effects. While some theoretical accounts suggest that TMS suppresses neuronal processing, other competing accounts propose that the effects of TMS result from the addition of noise to neuronal processing. Here we exploited the stochastic resonance phenomenon to distinguish these theoretical accounts and determine how TMS affects neuronal processing. Specifically, we showed that online TMS can induce stochastic resonance in the human brain. At low intensity, TMS facilitated the detection of weak motion signals, but with higher TMS intensities and stronger motion signals we found only impairment in detection. These findings suggest that TMS acts by adding noise to neuronal processing, at least in an online TMS protocol. Importantly, such stochastic resonance effects may also explain why TMS parameters that under normal circumstances impair behavior can induce behavioral facilitations when the stimulated area is in an adapted or suppressed state. PMID:21368025
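    The stochastic resonance logic exploited here can be reproduced with a toy threshold detector (an assumed illustration, not the authors' psychophysical model): a subthreshold signal is undetectable with little noise, best detected at moderate noise, and lost again at high noise.

    ```python
    import random

    def detection_gain(noise_sd, signal=0.8, threshold=1.0, trials=20000, seed=7):
        """Hit rate minus false-alarm rate of a threshold detector for a
        subthreshold signal with added Gaussian noise. The gain is near zero
        for tiny noise (signal never crosses threshold), peaks at moderate
        noise, and collapses for large noise: stochastic resonance."""
        rng = random.Random(seed)
        hits = sum(signal + rng.gauss(0, noise_sd) > threshold for _ in range(trials))
        fas = sum(rng.gauss(0, noise_sd) > threshold for _ in range(trials))
        return (hits - fas) / trials
    ```

    In the TMS analogy, the stimulation intensity would play the role of `noise_sd`: moderate intensities facilitate detection, high ones impair it.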

  9. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    NASA Astrophysics Data System (ADS)

    Gao, Peng

    2018-06-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a system of multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and that the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  10. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    NASA Astrophysics Data System (ADS)

    Gao, Peng

    2018-04-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a system of multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and that the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  11. Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5

    DOE PAGES

    Wang, Yong; Zhang, Guang J.

    2016-09-29

    In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m² in the standard CAM5 to -48.86 W/m², close to -47.16 W/m² in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.

  12. Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yong; Zhang, Guang J.

    In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m² in the standard CAM5 to -48.86 W/m², close to -47.16 W/m² in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.

  13. A stochastic approach to noise modeling for barometric altimeters.

    PubMed

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2013-11-18

    The question of whether barometric altimeters can be applied to accurately track human motions is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior problems. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes, and a first-order Gauss-Markov (GM) random process, mainly accounting for short-term local environment changes, whose effects are prominent for long-time and short-time motion tracking, respectively; and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive moving-average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise-stationary GM component and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving-average filters, used alone or in combination with whitening filters learnt from ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions.
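    The first-order Gauss-Markov component of such a noise model is easy to simulate. The sketch below is a generic GM generator under assumed parameters (not the authors' fitted altimeter model): the correlation time tau sets the exponential decay of the autocorrelation, while the driving-noise variance is chosen so that the stationary variance equals sigma squared.

    ```python
    import math
    import random

    def gauss_markov(n, tau=10.0, dt=1.0, sigma=1.0, seed=3):
        """Discrete first-order Gauss-Markov process
        x[k+1] = a * x[k] + w[k], with a = exp(-dt / tau) and driving-noise
        standard deviation sigma * sqrt(1 - a**2), so the stationary
        variance is sigma**2 and the lag-1 autocorrelation is a."""
        rng = random.Random(seed)
        a = math.exp(-dt / tau)
        q = sigma * math.sqrt(1 - a * a)
        x, out = 0.0, []
        for _ in range(n):
            x = a * x + rng.gauss(0.0, q)
            out.append(x)
        return out
    ```

    Superposing such a sequence with white noise gives a simple synthetic testbed for the whitening and moving-average filters discussed in the abstract.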

  14. Stochastic 3D modeling of Ostwald ripening at ultra-high volume fractions of the coarsening phase

    NASA Astrophysics Data System (ADS)

    Spettl, A.; Wimmer, R.; Werz, T.; Heinze, M.; Odenbach, S.; Krill, C. E., III; Schmidt, V.

    2015-09-01

    We present a (dynamic) stochastic simulation model for 3D grain morphologies undergoing a grain coarsening phenomenon known as Ostwald ripening. For low volume fractions of the coarsening phase, the classical LSW theory predicts a power-law evolution of the mean particle size and convergence toward self-similarity of the particle size distribution; experiments suggest that this behavior holds also for high volume fractions. In the present work, we have analyzed 3D images that were recorded in situ over time in semisolid Al-Cu alloys manifesting ultra-high volume fractions of the coarsening (solid) phase. Using this information we developed a stochastic simulation model for the 3D morphology of the coarsening grains at arbitrary time steps. Our stochastic model is based on random Laguerre tessellations and is by definition self-similar—i.e. it depends only on the mean particle diameter, which in turn can be estimated at each point in time. For a given mean diameter, the stochastic model requires only three additional scalar parameters, which influence the distribution of particle sizes and their shapes. An evaluation shows that even with this minimal information the stochastic model yields an excellent representation of the statistical properties of the experimental data.

  15. New exponential stability criteria for stochastic BAM neural networks with impulses

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Samidurai, R.; Anthoni, S. M.

    2010-10-01

    In this paper, we study the global exponential stability of time-delayed stochastic bidirectional associative memory neural networks with impulses and Markovian jumping parameters. A generalized activation function is considered, and traditional assumptions on the boundedness, monotony and differentiability of activation functions are removed. We obtain a new set of sufficient conditions in terms of linear matrix inequalities, which ensures the global exponential stability of the unique equilibrium point for stochastic BAM neural networks with impulses. The Lyapunov function method with the Itô differential rule is employed for achieving the required result. Moreover, a numerical example is provided to show that the proposed result improves the allowable upper bound of delays over some existing results in the literature.

  16. Global exponential stability of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays.

    PubMed

    Huang, Haiying; Du, Qiaosheng; Kang, Xibing

    2013-11-01

    In this paper, a class of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays is investigated. The jumping parameters are modeled as a continuous-time finite-state Markov chain. First, the existence of an equilibrium point for the addressed neural networks is studied. By utilizing Lyapunov stability theory, stochastic analysis theory and the linear matrix inequality (LMI) technique, new delay-dependent stability criteria are presented in terms of linear matrix inequalities to guarantee that the neural networks are globally exponentially stable in the mean square. Numerical simulations are carried out to illustrate the main results.

  17. Mean first-passage times of non-Markovian random walkers in confinement.

    PubMed

    Guérin, T; Levernier, N; Bénichou, O; Voituriez, R

    2016-06-16

    The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.

  18. Mean first-passage times of non-Markovian random walkers in confinement

    NASA Astrophysics Data System (ADS)

    Guérin, T.; Levernier, N.; Bénichou, O.; Voituriez, R.

    2016-06-01

    The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.
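    While the paper's contribution is an analytical treatment of non-Markovian walkers, first-passage ideas are conveniently checked by Monte Carlo in the Markovian baseline case. For a symmetric walk on the non-negative integers with a reflecting wall at 0, the mean first-passage time to site n is exactly n^2, which a sketch like the following can verify:

    ```python
    import random

    def mfpt_reflecting(target=5, trials=20000, seed=11):
        """Monte Carlo mean first-passage time of a symmetric random walk
        started at 0 with a reflecting wall at 0 and absorption at `target`.
        Solving T[k] = 1 + (T[k-1] + T[k+1]) / 2 with T[0] = 1 + T[1] and
        T[n] = 0 gives the exact value T[0] = target**2."""
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            x, steps = 0, 0
            while x != target:
                # at the wall the walker is reflected to site 1
                x += 1 if (x == 0 or rng.random() < 0.5) else -1
                steps += 1
            total += steps
        return total / trials
    ```

    Memory effects of the kind studied in the paper (e.g. fractional Brownian motion) change this scaling, which is precisely what makes the non-Markovian analysis nontrivial.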

  19. Exploring empirical rank-frequency distributions longitudinally through a simple stochastic process.

    PubMed

    Finley, Benjamin J; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf's law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process's complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications.
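    A minimal stochastic cascade in the spirit described (our own simplification, with a uniform random split at every node rather than the paper's exact process) already yields heavy-tailed weights whose sorted sequence is a concave rank-frequency curve on a log-log scale:

    ```python
    import random

    def cascade(levels=12, seed=5):
        """Simple multiplicative cascade: each weight splits into two parts
        w*u and w*(1-u) with u uniform on (0, 1). Total weight is conserved,
        and after several levels the sorted weights are heavy-tailed."""
        rng = random.Random(seed)
        weights = [1.0]
        for _ in range(levels):
            nxt = []
            for w in weights:
                u = rng.random()
                nxt.extend([w * u, w * (1 - u)])
            weights = nxt
        return sorted(weights, reverse=True)   # rank-ordered, largest first
    ```

    Repeating the cascade with different seeds mimics the longitudinal rank variation discussed in the abstract.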

  20. Diffusion approximations to the chemical master equation only have a consistent stochastic thermodynamics at chemical equilibrium

    NASA Astrophysics Data System (ADS)

    Horowitz, Jordan M.

    2015-07-01

    The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.

  1. Diffusion approximations to the chemical master equation only have a consistent stochastic thermodynamics at chemical equilibrium.

    PubMed

    Horowitz, Jordan M

    2015-07-28

    The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.
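    A chemical Langevin approximation of the kind discussed can be integrated with the Euler-Maruyama method. The sketch below uses an assumed birth-death system (not an example from the paper); for large populations the trajectory fluctuates around the deterministic fixed point k_birth / k_death.

    ```python
    import math
    import random

    def cle_birth_death(k_birth=20.0, k_death=0.5, x0=10.0, dt=0.01,
                        t_max=40.0, seed=2):
        """Euler-Maruyama integration of the chemical Langevin equation for a
        birth-death process:
        dX = (kb - kd*X) dt + sqrt(kb) dW1 - sqrt(kd*X) dW2.
        The state is clipped at zero, one of the known pathologies of the
        diffusion approximation at low copy numbers."""
        rng = random.Random(seed)
        x, t = x0, 0.0
        while t < t_max:
            x += (k_birth - k_death * x) * dt            # drift
            x += math.sqrt(k_birth * dt) * rng.gauss(0, 1)   # birth noise
            x -= math.sqrt(max(k_death * x, 0.0) * dt) * rng.gauss(0, 1)  # death noise
            x = max(x, 0.0)
            t += dt
        return x
    ```

    Averaging the endpoint over independent runs recovers the stationary mean of the underlying master equation, consistent with the paper's point that the diffusive solutions remain useful approximations even where the thermodynamic interpretation breaks down.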

  2. Exact solution for a non-Markovian dissipative quantum dynamics.

    PubMed

    Ferialdi, Luca; Bassi, Angelo

    2012-04-27

    We provide the exact analytic solution of the stochastic Schrödinger equation describing a harmonic oscillator interacting with a non-Markovian and dissipative environment. This result represents an arrival point in the study of non-Markovian dynamics via stochastic differential equations. It is also one of the few exactly solvable models for infinite-dimensional systems. We compute the Green's function; in the case of a free particle and with an exponentially correlated noise, we discuss the evolution of Gaussian wave functions.

  3. Lyapunov stability analysis for the generalized Kapitza pendulum

    NASA Astrophysics Data System (ADS)

    Druzhinina, O. V.; Sevastianov, L. A.; Vasilyev, S. A.; Vasilyeva, D. G.

    2017-12-01

    In this work a generalization of the Kapitza pendulum, whose suspension point moves in the vertical and horizontal planes, is made. A Lyapunov stability analysis of the motion of this pendulum subjected to excitation by periodic driving forces and stochastic driving forces acting in the vertical and horizontal planes has been carried out. A numerical study of the random motion of the generalized Kapitza pendulum under stochastic driving forces has been performed. The existence of stable quasi-periodic motion for this pendulum is shown.
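    The classical (vertical-drive-only) Kapitza effect behind this generalization can be reproduced numerically. The sketch below uses assumed parameters, a semi-implicit Euler integrator and no stochastic forcing: the inverted position theta = pi is stabilized once a*omega exceeds sqrt(2*g*l).

    ```python
    import math

    def kapitza(theta0=math.pi + 0.1, a=0.05, omega=200.0, g=9.81, l=1.0,
                dt=1e-4, t_max=10.0):
        """Integrate theta'' = -(g + a*omega**2*cos(omega*t)) * sin(theta) / l
        with semi-implicit Euler and return the maximum deviation from the
        inverted position theta = pi. Fast vertical pivot oscillation
        (a*omega > sqrt(2*g*l)) keeps the deviation small; a slow drive
        lets the pendulum fall over."""
        theta, vel, t = theta0, 0.0, 0.0
        max_dev = 0.0
        while t < t_max:
            acc = -(g + a * omega ** 2 * math.cos(omega * t)) * math.sin(theta) / l
            vel += acc * dt          # symplectic (semi-implicit) update
            theta += vel * dt
            t += dt
            max_dev = max(max_dev, abs(theta - math.pi))
        return max_dev
    ```

    The paper's setting adds horizontal pivot motion and stochastic forcing terms, which are not modeled in this deterministic sketch.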

  4. Robust Consumption-Investment Problem on Infinite Horizon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zawisza, Dariusz, E-mail: dariusz.zawisza@im.uj.edu.pl

    In our paper we consider an infinite horizon consumption-investment problem under model misspecification in a general stochastic factor model. We formulate the problem as a stochastic game and characterize the saddle point and the value function of that game using an ODE of semilinear type, for which we prove an existence and uniqueness theorem for its solution. Such an equation is of interest in its own right, since it generalizes many other equations arising in various infinite horizon optimization problems.

  5. Macroinvertebrate community assembly in pools created during peatland restoration.

    PubMed

    Brown, Lee E; Ramchunder, Sorain J; Beadle, Jeannie M; Holden, Joseph

    2016-11-01

    Many degraded ecosystems are subject to restoration attempts, providing new opportunities to unravel the processes of ecological community assembly. Restoration of previously drained northern peatlands, primarily to promote peat and carbon accumulation, has created hundreds of thousands of new open water pools. We assessed the potential benefits of this wetland restoration for aquatic biodiversity, and how communities reassemble, by comparing pool ecosystems in regions of the UK Pennines on intact (never drained) versus restored (blocked drainage-ditches) peatland. We also evaluated the conceptual idea that comparing reference ecosystems in terms of their compositional similarity to null assemblages (and thus the relative importance of stochastic versus deterministic assembly) can guide evaluations of restoration success better than analyses of community composition or diversity. Community composition data highlighted some differences in the macroinvertebrate composition of restored pools compared to undisturbed peatland pools, which could be used to suggest that alternative end-points to restoration were influenced by stochastic processes. However, widely used diversity metrics indicated no differences between undisturbed and restored pools. Novel evaluations of restoration using null models confirmed the similarity of deterministic assembly processes from the national species pool across all pools. Stochastic elements were important drivers of between-pool differences at the regional-scale but the scale of these effects was also similar across most of the pools studied. The amalgamation of assembly theory into ecosystem restoration monitoring allows us to conclude with more certainty that restoration has been successful from an ecological perspective in these systems. 
Evaluation of these UK findings compared to those from peatlands across Europe and North America further suggests that restoring peatland pools delivers significant benefits for aquatic fauna by providing extensive new habitat that is largely equivalent to natural pools. More generally, we suggest that assembly theory could provide new benchmarks for planning and evaluating ecological restoration success. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  6. A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.

    PubMed

    Li, Jilong; Cheng, Jianlin

    2016-05-10

    Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects each new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19% on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.
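
    The resample-and-anneal idea can be illustrated with a one-dimensional toy (the `clash_energy` function and all parameters below are hypothetical stand-ins; the real MTMG operates on Cα point clouds in three dimensions):

```python
import math
import random

def clash_energy(pos, clash_dist=1.0):
    """Count pairs of points closer than clash_dist (a stand-in for
    atomic clashes)."""
    n = len(pos)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if abs(pos[i] - pos[j]) < clash_dist)

def anneal_resample(means, sigma=0.5, t0=2.0, cooling=0.95, n_sweeps=200, seed=0):
    """Resample each coordinate from a normal centred on its template mean
    and accept/reject with a simulated-annealing Metropolis rule."""
    rng = random.Random(seed)
    pos = list(means)
    temp, e = t0, clash_energy(pos)
    for _ in range(n_sweeps):
        for i in range(len(pos)):
            old = pos[i]
            pos[i] = rng.gauss(means[i], sigma)   # draw from the distribution
            e_new = clash_energy(pos)
            if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
                e = e_new                          # accept the new position
            else:
                pos[i] = old                       # reject, keep old position
        temp *= cooling                            # annealing schedule
    return pos, e

means = [0.0, 0.4, 2.0, 2.3, 4.0, 4.2]   # deliberately clashing templates
resampled, e_final = anneal_resample(means)
```

    The annealer trades prior fidelity (sampling around the template means) against the clash count, which is the essence of the accept/reject step described above.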

  7. A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling

    PubMed Central

    Li, Jilong; Cheng, Jianlin

    2016-01-01

    Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects each new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96–6.37% and 2.42–5.19% on the three datasets over using single templates. MTMG’s performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html. PMID:27161489

  8. Stochastic scheduling on a repairable manufacturing system

    NASA Astrophysics Data System (ADS)

    Li, Wei; Cao, Jinhua

    1995-08-01

    In this paper, we consider some stochastic scheduling problems with a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine processing a job fails, the job processing must restart some time later, once the machine is repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighted sum of the completion times; (2) the weighted number of late jobs having constant due dates; (3) the weighted number of late jobs having exponentially distributed random due dates. These policies generalize some previous results.
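
    For objective (1) on a reliable machine, a classic policy is the WSEPT rule: sequence jobs by decreasing ratio of weight to expected processing time. The paper's breakdown/repair setting is more general, but the basic rule is easy to sketch (job data below is invented for illustration):

```python
def wsept_order(jobs):
    """Sequence jobs by decreasing weight / expected processing time
    (the WSEPT rule)."""
    return sorted(jobs, key=lambda j: j["weight"] / j["mean_proc"], reverse=True)

def expected_weighted_completion(sequence):
    """Expected weighted sum of completion times for a fixed sequence,
    using expected processing times (no breakdowns)."""
    t = total = 0.0
    for j in sequence:
        t += j["mean_proc"]
        total += j["weight"] * t
    return total

jobs = [{"name": "a", "weight": 3.0, "mean_proc": 2.0},
        {"name": "b", "weight": 1.0, "mean_proc": 4.0},
        {"name": "c", "weight": 2.0, "mean_proc": 1.0}]
seq = wsept_order(jobs)   # ratios: c -> 2.0, a -> 1.5, b -> 0.25
```

    With these numbers the WSEPT sequence is c, a, b, with expected weighted completion cost 2*1 + 3*3 + 1*7 = 18.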

  9. Conference on Stochastic Processes and Their Applications (12th) held at Ithaca, New York on 11-15 Jul 83,

    DTIC Science & Technology

    1983-07-15

    AD-A136 626 Conference on Stochastic Processes and Their Applications (12th), July 11-15, 1983, Ithaca, New York (U), Cornell Univ., Ithaca, NY, 15 Jul 83 ... "oscillator phase instability" 2:53 - 3:15 p.m. M.N. Gopalan, Indian Institute of Technology, Bombay, "Cost-benefit analysis of systems subject to inspection" ... p.m. W. Kliemann, Univ. Bremen, Fed. Rep. Germany, "Controllability of stochastic systems" 8:00 - 10:00 p.m. Reception, Johnson Art Museum

  10. Variational processes and stochastic versions of mechanics

    NASA Astrophysics Data System (ADS)

    Zambrini, J. C.

    1986-09-01

    The dynamical structure of any reasonable stochastic version of classical mechanics is investigated, including the version created by Nelson [E. Nelson, Quantum Fluctuations (Princeton U.P., Princeton, NJ, 1985); Phys. Rev. 150, 1079 (1966)] for the description of quantum phenomena. Two different theories result from this common structure. One of them is the imaginary time version of Nelson's theory, whose existence was unknown, and yields a radically new probabilistic interpretation of the heat equation. The existence and uniqueness of all the involved stochastic processes is shown under conditions suggested by the variational approach of Yasue [K. Yasue, J. Math. Phys. 22, 1010 (1981)].

  11. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  12. Stochastic analysis of multiphase flow in porous media: II. Numerical simulations

    NASA Astrophysics Data System (ADS)

    Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.

    1996-08-01

    The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis using a spectral/perturbation approach to analyze steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter, α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed good agreement between the two methods over a wide range of log k variability with three different combinations of the input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.

  13. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
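
    Two of the ingredients mentioned here, a percentile bootstrap confidence interval and an elementary Monte Carlo point pattern, can be sketched as follows (the data values are illustrative, not from the article):

```python
import random

def bootstrap_ci(data, stat=lambda xs: sum(xs) / len(xs),
                 n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a scalar statistic:
    resample with replacement, sort the replicate statistics, read off
    the alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

def binomial_point_pattern(n=100, seed=0):
    """n independent uniform points in the unit square: the elementary
    Monte Carlo simulation of a (conditioned) homogeneous Poisson process."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n)]

data = [2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8, 2.4, 2.6, 2.3]
lo, hi = bootstrap_ci(data)          # CI for the mean (sample mean is 2.45)
pts = binomial_point_pattern()
```

    Both routines are distribution-free in the sense emphasized by the abstract: only resampling and uniform random numbers are needed.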

  14. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China.

    PubMed

    Hu, Weigang; Zhang, Qi; Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C; An, Lizhe; Feng, Huyuan

    2015-01-01

    Understanding the processes that influence the structure of biotic communities is one of the major ecological topics, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To get a better understanding of the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover) from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core, and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of bacterial communities. Bacterial communities in five soil horizons (two originating from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes related to soil pH, conductivity, and organic carbon content that were structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work.
Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw.

  15. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China

    PubMed Central

    Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C.; An, Lizhe; Feng, Huyuan

    2015-01-01

    Understanding the processes that influence the structure of biotic communities is one of the major ecological topics, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To get a better understanding of the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover) from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core, and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of bacterial communities. Bacterial communities in five soil horizons (two originating from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes related to soil pH, conductivity, and organic carbon content that were structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work.
Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent permafrost thaw. PMID:26699734

  16. Simulations of Technology-Induced and Crisis-Led Stochastic and Chaotic Fluctuations in Higher Education Processes: A Model and a Case Study for Performance and Expected Employment

    ERIC Educational Resources Information Center

    Ahmet, Kara

    2015-01-01

    This paper presents a simple model of the provision of higher educational services that considers and exemplifies nonlinear, stochastic, and potentially chaotic processes. I use the methods of system dynamics to simulate these processes in the context of a particular sociologically interesting case, namely that of the Turkish higher education…

  17. General Results in Optimal Control of Discrete-Time Nonlinear Stochastic Systems

    DTIC Science & Technology

    1988-01-01

    P. J. McLane, "Optimal Stochastic Control of Linear Systems with State- and Control-Dependent Disturbances," IEEE Trans. Auto. Contr., Vol. 16, No... Vol. 45, No. 1, pp. 359-362, 1987. [9] R. R. Mohler and W. J. Kolodziej, "An Overview of Stochastic Bilinear Control Processes," IEEE Trans. Syst... J. of Math. Anal. Appl., Vol. 47, pp. 156-161, 1974. [14] E. Yaz, "A Control Scheme for a Class of Discrete Nonlinear Stochastic Systems," IEEE Trans.

  18. Optimization of High-Dimensional Functions through Hypercube Evaluation

    PubMed Central

    Abiyev, Rahib H.; Tunay, Mustafa

    2015-01-01

    A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intensive stochastic search method based on the evaluation and optimization of a hypercube, called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching-space process. The initialization and evaluation process generates an initial solution and evaluates the solutions in a given hypercube. The displacement-shrink process determines displacement and evaluates objective functions using new points, and the searching-space process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes are designed and presented in the paper. The designed HO algorithm is tested on specific benchmark functions. Simulations of the HO algorithm have been performed for the optimization of functions of 1000, 5000, or even 10,000 dimensions. Comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for the optimization of both low- and high-dimensional functions. PMID:26339237
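
    A toy version of the evaluate/displace/shrink loop described above might look like this (a sketch of the idea only, with invented parameters, not the published HO algorithm):

```python
import random

def hypercube_optimize(f, dim, lower=-5.0, upper=5.0, n_points=60,
                       n_iter=80, shrink=0.9, seed=0):
    """Evaluate random points in the current hypercube, recenter the cube
    on the best point found so far, shrink it, and repeat."""
    rng = random.Random(seed)
    center = [(lower + upper) / 2.0] * dim
    half = (upper - lower) / 2.0
    best_x, best_f = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_points):
            x = [c + rng.uniform(-half, half) for c in center]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        center = best_x          # displacement: recenter on the best point
        half *= shrink           # shrink: contract the searching space
    return best_x, best_f

sphere = lambda x: sum(v * v for v in x)   # benchmark: minimum 0 at origin
x_best, f_best = hypercube_optimize(sphere, dim=5)
```

    The geometric shrink makes the search increasingly local, which is why such schemes scale to high dimensions at the cost of possible premature convergence on multimodal functions.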

  19. Effective stochastic generator with site-dependent interactions

    NASA Astrophysics Data System (ADS)

    Khamehchi, Masoumeh; Jafarpour, Farhad H.

    2017-11-01

    It is known that the stochastic generators of effective processes associated with the unconditioned dynamics of rare events might consist of non-local interactions; however, it can be shown that there are special cases for which these generators can include local interactions. In this paper, we investigate this possibility by considering systems of classical particles moving on a one-dimensional lattice with open boundaries. The particles might have hard-core interactions similar to the particles in an exclusion process, or there can be many arbitrary particles at a single site in a zero-range process. Assuming that the interactions in the original process are local and site-independent, we will show that under certain constraints on the microscopic reaction rules, the stochastic generator of an unconditioned process can be local but site-dependent. As two examples, the asymmetric zero-temperature Glauber model and the A-model with diffusion are presented and studied under the above-mentioned constraints.

  20. Study on Stationarity of Random Load Spectrum Based on the Special Road

    NASA Astrophysics Data System (ADS)

    Yan, Huawen; Zhang, Weigong; Wang, Dong

    2017-09-01

    In the special road quality assessment method, one approach uses a wheel force sensor; the essence of this approach is to collect the load spectrum of the car as a reflection of road quality. From the definition of a stochastic process, it is easy to see that the load spectrum is a stochastic process. However, the analysis methods and ranges of application of different random processes differ greatly, especially in engineering practice, which directly affects the design and development of the experiment. Therefore, determining the type of a random process has important practical significance. Based on an analysis of the digital characteristics of the road load spectrum, this paper determines that the road load spectrum in this experiment belongs to a stationary stochastic process, paving the way for the follow-up modeling and feature extraction of the special road.
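
    A first practical check of weak stationarity is to verify that windowed means and standard deviations stay level across the record; a sketch on a surrogate load signal (synthetic Gaussian data, not the actual road measurements):

```python
import math
import random

def windowed_moments(series, n_windows=10):
    """Per-window mean and standard deviation: a simple first check that
    the moments of a record do not drift (weak stationarity)."""
    w = len(series) // n_windows
    out = []
    for i in range(n_windows):
        seg = series[i * w:(i + 1) * w]
        m = sum(seg) / len(seg)
        v = sum((x - m) ** 2 for x in seg) / len(seg)
        out.append((m, math.sqrt(v)))
    return out

rng = random.Random(3)
load = [rng.gauss(100.0, 5.0) for _ in range(5000)]   # surrogate load record
stats = windowed_moments(load)
means = [m for m, _ in stats]
```

    For a genuinely stationary record the window statistics scatter only within sampling error; a systematic trend across windows is evidence against stationarity.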

  1. Stochastic modelling of intermittent fluctuations in the scrape-off layer: Correlations, distributions, level crossings, and moment estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, O. E., E-mail: odd.erik.garcia@uit.no; Kube, R.; Theodorsen, A.

    A stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas. The fluctuations in the plasma density are modeled by a superposition of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures. In the case of an exponential pulse shape and exponentially distributed pulse amplitudes, predictions are given for the lowest order moments, probability density function, auto-correlation function, level crossings, and average times for periods spent above and below a given threshold level. Also, the mean squared errors on estimators of sample mean and variance for realizations of the process by finite time series are obtained. These results are discussed in the context of single-point measurements of fluctuations in the scrape-off layer, broad density profiles, and implications for plasma–wall interactions due to the transient transport events in fusion grade plasmas. The results may also have wide applications for modelling fluctuations in other magnetized plasmas such as basic laboratory experiments and ionospheric irregularities.
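
    The model can be sketched as a filtered Poisson process: one-sided exponential pulses with exponentially distributed amplitudes arriving at Poisson times. On a discrete grid (arrivals Bernoulli-thinned per step, an approximation with illustrative parameters), the sample mean should approach the theoretical value, rate × duration × mean amplitude:

```python
import math
import random

def shot_noise(rate=10.0, tau=1.0, mean_amp=1.0, dt=0.01, t_end=500.0, seed=0):
    """Superposition of uncorrelated one-sided exponential pulses with
    exponentially distributed amplitudes (a filtered Poisson process)."""
    rng = random.Random(seed)
    x, level = [], 0.0
    decay = math.exp(-dt / tau)
    for _ in range(int(t_end / dt)):
        if rng.random() < rate * dt:                 # new pulse arrives
            level += rng.expovariate(1.0 / mean_amp)
        level *= decay                               # exponential pulse decay
        x.append(level)
    return x

x = shot_noise()
mean_x = sum(x) / len(x)                  # theory: rate * tau * mean_amp = 10
var_x = sum((v - mean_x) ** 2 for v in x) / len(x)
```

    The ratio of pulse duration to average waiting time (here rate × tau = 10) sets the degree of pulse overlap and hence the intermittency of the signal.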

  2. Singular boundary value problem for the integrodifferential equation in an insurance model with stochastic premiums: Analysis and numerical solution

    NASA Astrophysics Data System (ADS)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2012-10-01

    A singular boundary value problem for a second-order linear integrodifferential equation with Volterra and non-Volterra integral operators is formulated and analyzed. The equation is defined on ℝ+, has a weak singularity at zero and a strong singularity at infinity, and depends on several positive parameters. Under natural constraints on the coefficients of the equation, existence and uniqueness theorems for this problem with given limit boundary conditions at singular points are proved, asymptotic representations of the solution are given, and an algorithm for its numerical determination is described. Numerical computations are performed and their interpretation is given. The problem arises in the study of the survival probability of an insurance company over infinite time (as a function of its initial surplus) in a dynamic insurance model that is a modification of the classical Cramer-Lundberg model with a stochastic process rate of premium under a certain investment strategy in the financial market. A comparative analysis of the results with those produced by the model with deterministic premiums is given.

  3. Noise-induced transitions and shifts in a climate-vegetation feedback model.

    PubMed

    Alexandrov, Dmitri V; Bashkirtseva, Irina A; Ryashko, Lev B

    2018-04-01

    Motivated by the extremely important role of the Earth's vegetation dynamics in climate change, we study the stochastic variability of a simple climate-vegetation system. In the case of deterministic dynamics, the system has either one stable equilibrium and a limit cycle, or two stable equilibria corresponding to two opposite (cold and warm) climate-vegetation states. These states are separated by a separatrix passing through a point of unstable equilibrium. Some possible stochastic scenarios caused by different externally induced natural and anthropogenic processes inherit properties of the deterministic behaviour and drastically change the system dynamics. We demonstrate that transitions of the system across its separatrix occur with increasing noise intensity. The climate-vegetation system thereby fluctuates, transits and localizes in the vicinity of its attractor. We show that this phenomenon occurs within some critical range of noise intensities. A noise-induced shift into the range of smaller global average temperatures, corresponding to substantial oscillations of the Earth's vegetation cover, is revealed. Our analysis demonstrates that climate-vegetation interactions contribute essentially to climate dynamics and should be taken into account in more precise and complex models of climate variability.
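
    The noise-induced escape across a separatrix can be reproduced in the simplest bistable SDE, dx = (x - x^3) dt + σ dW (a generic double-well stand-in, not the climate-vegetation model): below a critical noise level essentially no transitions occur within the simulated window, while above it they become frequent.

```python
import math
import random

def separatrix_crossings(noise, x0=-1.0, dt=1e-3, t_end=200.0, seed=0):
    """Euler-Maruyama for dx = (x - x^3) dt + noise dW: stable states at
    x = -1 and x = +1 divided by an unstable equilibrium at x = 0.
    Counts well-to-well transitions (with hysteresis at +/-0.5)."""
    rng = random.Random(seed)
    x, side, crossings = x0, -1, 0
    for _ in range(int(t_end / dt)):
        x += (x - x**3) * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if side < 0 and x > 0.5:
            side, crossings = 1, crossings + 1
        elif side > 0 and x < -0.5:
            side, crossings = -1, crossings + 1
    return crossings

quiet = separatrix_crossings(noise=0.1)   # weak noise: trapped in one state
loud = separatrix_crossings(noise=0.8)    # strong noise: frequent transitions
```

    The exponential (Kramers-type) dependence of the transition rate on noise intensity is what produces the sharp "critical range" behaviour described in the abstract.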

  4. A Stochastic Detection and Retrieval Model for the Study of Metacognition

    ERIC Educational Resources Information Center

    Jang, Yoonhee; Wallsten, Thomas S.; Huber, David E.

    2012-01-01

    We present a signal detection-like model termed the stochastic detection and retrieval model (SDRM) for use in studying metacognition. Focusing on paradigms that relate retrieval (e.g., recall or recognition) and confidence judgments, the SDRM measures (1) variance in the retrieval process, (2) variance in the confidence process, (3) the extent to…

  5. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed, and the appendices discuss the remaining mathematical background.

  6. Stochastic Multiscale Analysis and Design of Engine Disks

    DTIC Science & Technology

    2010-07-28

    shown recently to fail when used with data-driven non-linear stochastic input models (KPCA, IsoMap, etc.). Need for scalable exascale computing algorithms. (Materials Process Design and Control Laboratory, Cornell University)

  7. Transcriptional dynamics with time-dependent reaction rates

    NASA Astrophysics Data System (ADS)

    Nandi, Shubhendu; Ghosh, Anandamohan

    2015-02-01

    Transcription is the first step in the process of gene regulation that controls cell response to varying environmental conditions. Transcription is a stochastic process, involving synthesis and degradation of mRNAs, that can be modeled as a birth-death process. We consider a generic stochastic model, where the fluctuating environment is encoded in the time-dependent reaction rates. We obtain an exact analytical expression for the mRNA probability distribution and are able to analyze the response for arbitrary time-dependent protocols. Our analytical results and stochastic simulations confirm that the transcriptional machinery primarily acts as a low-pass filter. We also show that depending on the system parameters, the mRNA levels in a cell population can show synchronous/asynchronous fluctuations and can deviate from Poisson statistics.
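
    The low-pass behaviour can be checked with a discrete-time birth-death simulation under a sinusoidally modulated synthesis rate (illustrative rates; the small-dt Bernoulli update approximates the exact process): for first-order degradation, the mean response amplitude follows the filter gain k1/sqrt(gamma^2 + omega^2), so a slow drive passes through while a fast drive is strongly attenuated.

```python
import math
import random

def simulate_mrna(freq, k0=20.0, k1=10.0, gamma=1.0, dt=1e-3, t_end=100.0, seed=0):
    """Birth-death mRNA dynamics with time-dependent synthesis rate
    k(t) = k0 + k1*sin(2*pi*freq*t) and linear degradation gamma*x."""
    rng = random.Random(seed)
    x, t, xs = 0, 0.0, []
    for _ in range(int(t_end / dt)):
        k = k0 + k1 * math.sin(2.0 * math.pi * freq * t)
        if rng.random() < k * dt:          # synthesis event
            x += 1
        if rng.random() < gamma * x * dt:  # degradation event
            x -= 1
        t += dt
        xs.append(x)
    return xs

def drive_amplitude(xs, freq, dt):
    """Amplitude of the trace's Fourier component at the drive frequency."""
    n = len(xs)
    c = sum(x * math.cos(2.0 * math.pi * freq * i * dt) for i, x in enumerate(xs))
    s = sum(x * math.sin(2.0 * math.pi * freq * i * dt) for i, x in enumerate(xs))
    return 2.0 * math.hypot(c, s) / n

slow_amp = drive_amplitude(simulate_mrna(freq=0.05), 0.05, 1e-3)  # theory ~9.5
fast_amp = drive_amplitude(simulate_mrna(freq=2.0), 2.0, 1e-3)    # theory ~0.8
```

    The drive frequencies are chosen to complete whole cycles over the record, so the constant mean level does not leak into the estimated amplitudes.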

  8. Machine learning for inverse lithography: using stochastic gradient descent for robust photomask synthesis

    NASA Astrophysics Data System (ADS)

    Jia, Ningning; Y Lam, Edmund

    2010-04-01

    Inverse lithography technology (ILT) synthesizes photomasks by solving an inverse imaging problem through optimization of an appropriate functional. Much effort on ILT is dedicated to deriving superior masks at a nominal process condition. However, the lower k1 factor causes the mask to be more sensitive to process variations. Robustness to major process variations, such as focus and dose variations, is desired. In this paper, we consider the focus variation as a stochastic variable, and treat the mask design as a machine learning problem. The stochastic gradient descent approach, which is a useful tool in machine learning, is adopted to train the mask design. Compared with previous work, simulation shows that the proposed algorithm is effective in producing robust masks.
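
    Treating the nuisance condition as a stochastic variable sampled anew at every gradient step is the core trick; on a toy quadratic objective (a stand-in for the lithography cost functional, not the authors' formulation) it reads:

```python
import random

def robust_sgd(n_steps=4000, lr=0.05, focus_std=0.3, seed=0):
    """SGD on a toy robust objective E_f[(x - f)**2 + x**2], where the
    'focus' f ~ N(0, focus_std) is sampled anew at every step; each
    stochastic gradient is unbiased for the gradient of the expected loss."""
    rng = random.Random(seed)
    x = 2.0
    for _ in range(n_steps):
        f = rng.gauss(0.0, focus_std)      # sample a process condition
        grad = 2.0 * (x - f) + 2.0 * x     # stochastic gradient at this f
        x -= lr * grad
    return x

x_opt = robust_sgd()   # minimizer of the expected loss is x = 0
```

    Averaging over sampled process conditions in this way biases the solution toward designs that degrade gracefully across the whole focus range rather than being optimal at a single nominal condition.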

  9. An accurate nonlinear stochastic model for MEMS-based inertial sensor error with wavelet networks

    NASA Astrophysics Data System (ADS)

    El-Diasty, Mohammed; El-Rabbany, Ahmed; Pagiatakis, Spiros

    2007-12-01

    The integration of Global Positioning System (GPS) with Inertial Navigation System (INS) has been widely used in many applications for positioning and orientation purposes. Traditionally, random walk (RW), Gauss-Markov (GM), and autoregressive (AR) processes have been used to develop the stochastic model in classical Kalman filters. The main disadvantage of the classical Kalman filter is the potentially unstable linearization of the nonlinear dynamic system. Consequently, a nonlinear stochastic model is not optimal in derivative-based filters due to the expected linearization error. With a derivative-free filter such as the unscented Kalman filter or the divided difference filter, the filtering of a complicated, highly nonlinear dynamic system is possible without linearization error. This paper develops a novel nonlinear stochastic model for inertial sensor error using a wavelet network (WN). A wavelet network is a highly nonlinear model, which has recently been introduced as a powerful tool for modelling and prediction. Static and kinematic data sets are collected using a MEMS-based IMU (DQI-100) to develop the stochastic model in the static mode and then implement it in the kinematic mode. The derivative-free filtering method using GM, AR, and the proposed WN-based processes is used to validate the new model. It is shown that the first-order WN-based nonlinear stochastic model gives superior positioning results to the first-order GM and AR models, with an overall improvement of 30% when 30- and 60-second GPS outages are introduced.
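
    The first-order Gauss-Markov process used as a baseline here is easy to simulate exactly in discrete time (illustrative correlation time and amplitude, not the DQI-100 values):

```python
import math
import random

def gauss_markov(sigma=1.0, tau=100.0, dt=1.0, n=20000, seed=0):
    """First-order Gauss-Markov sequence x[k+1] = phi*x[k] + w[k] with
    phi = exp(-dt/tau), scaled so the stationary std equals sigma.
    This is the classic model for a slowly varying inertial sensor bias."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi * phi)   # keeps stationary std == sigma
    x, xs = 0.0, []
    for _ in range(n):
        x = phi * x + q * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = gauss_markov()
var = sum(v * v for v in xs) / len(xs)                   # ~ sigma**2 = 1
r1 = (sum(xs[i] * xs[i + 1] for i in range(len(xs) - 1))
      / sum(v * v for v in xs))                          # ~ exp(-1/100)
```

    The appeal of the GM model is exactly this two-parameter (sigma, tau) description; the wavelet-network model of the paper replaces it with a learned nonlinear map.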

  10. Exploring Empirical Rank-Frequency Distributions Longitudinally through a Simple Stochastic Process

    PubMed Central

    Finley, Benjamin J.; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf’s law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process’s complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications. PMID:24755621
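A minimal version of such a finite stochastic cascade can be written in a few lines: each weight is split multiplicatively at every level, and the sorted leaf weights play the role of simulated item frequencies. The depth, binary branching, and split distribution below are illustrative choices, not the paper's exact process.

```python
import random

def cascade(depth, seed=7):
    """Binary multiplicative cascade: repeatedly split each weight into two
    random fractions; the leaf weights form the rank-frequency distribution."""
    rng = random.Random(seed)
    weights = [1.0]
    for _ in range(depth):
        nxt = []
        for w in weights:
            r = rng.uniform(0.1, 0.9)
            nxt.extend((w * r, w * (1.0 - r)))
        weights = nxt
    return sorted(weights, reverse=True)   # rank order: largest frequency first

freqs = cascade(12)   # 2**12 = 4096 leaves; total mass is conserved
```

Plotting `freqs` against rank on log-log axes gives the concave shape discussed in the abstract.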

  11. Hidden symmetries and equilibrium properties of multiplicative white-noise stochastic processes

    NASA Astrophysics Data System (ADS)

    González Arenas, Zochil; Barci, Daniel G.

    2012-12-01

    Multiplicative white-noise stochastic processes continue to attract attention in a wide area of scientific research. The variety of prescriptions available for defining them makes the development of general tools for their characterization difficult. In this work, we study equilibrium properties of Markovian multiplicative white-noise processes. For this, we define the time reversal transformation for such processes, taking into account that the asymptotic stationary probability distribution depends on the prescription. Representing the stochastic process in a functional Grassmann formalism, we avoid the necessity of fixing a particular prescription. In this framework, we analyze equilibrium properties and study hidden symmetries of the process. We show that, using a careful definition of the equilibrium distribution and taking into account the appropriate time reversal transformation, usual equilibrium properties are satisfied for any prescription. Finally, we present a detailed deduction of a covariant supersymmetric formulation of a multiplicative Markovian white-noise process and study some of the constraints that it imposes on correlation functions using Ward-Takahashi identities.

  12. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  13. First-passage times for pattern formation in nonlocal partial differential equations

    NASA Astrophysics Data System (ADS)

    Cáceres, Manuel O.; Fuentes, Miguel A.

    2015-10-01

    We describe the lifetimes associated with the stochastic evolution from an unstable uniform state to a patterned one when the time evolution of the field is controlled by a nonlocal Fisher equation. A small noise is added to the evolution equation to define the lifetimes and to calculate the mean first-passage time of the stochastic field through a given threshold value, before the patterned steady state is reached. In order to obtain analytical results we introduce a stochastic multiscale perturbation expansion. This multiscale expansion can also be used to tackle multiplicative stochastic partial differential equations. A critical slowing down is predicted for the marginal case when the Fourier phase of the unstable initial condition is null. We carry out Monte Carlo simulations to show the agreement with our theoretical predictions. Analytic results for the bifurcation point and asymptotic analysis of traveling wave-front solutions are included to get insight into the noise-induced transition phenomena mediated by invading fronts.

  14. First-passage times for pattern formation in nonlocal partial differential equations.

    PubMed

    Cáceres, Manuel O; Fuentes, Miguel A

    2015-10-01

    We describe the lifetimes associated with the stochastic evolution from an unstable uniform state to a patterned one when the time evolution of the field is controlled by a nonlocal Fisher equation. A small noise is added to the evolution equation to define the lifetimes and to calculate the mean first-passage time of the stochastic field through a given threshold value, before the patterned steady state is reached. In order to obtain analytical results we introduce a stochastic multiscale perturbation expansion. This multiscale expansion can also be used to tackle multiplicative stochastic partial differential equations. A critical slowing down is predicted for the marginal case when the Fourier phase of the unstable initial condition is null. We carry out Monte Carlo simulations to show the agreement with our theoretical predictions. Analytic results for the bifurcation point and asymptotic analysis of traveling wave-front solutions are included to get insight into the noise-induced transition phenomena mediated by invading fronts.
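The mean first-passage time through a threshold can be estimated by direct Monte Carlo on a scalar caricature of the problem: a linearly unstable state driven by small additive noise. The one-dimensional SDE below is a stand-in for the full nonlocal field equation, with illustrative parameters; it is only meant to show how the escape time grows as the noise amplitude shrinks.

```python
import math, random

def first_passage_time(a, eps, xc, dt=1e-3, seed=0, t_max=50.0):
    """Time for dx = a*x dt + sqrt(2*eps) dW, x(0) = 0, to first reach |x| >= xc."""
    rng = random.Random(seed)
    x, t, s = 0.0, 0.0, math.sqrt(2.0 * eps * dt)
    while abs(x) < xc and t < t_max:
        x += a * x * dt + rng.gauss(0.0, s)
        t += dt
    return t

def mfpt(eps, trials=200):
    """Mean first-passage time averaged over independent noise realizations."""
    return sum(first_passage_time(1.0, eps, 1.0, seed=k) for k in range(trials)) / trials

t_weak, t_strong = mfpt(1e-6), mfpt(1e-2)   # weaker noise -> longer lifetime
```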

  15. A theory of the helical ripple-induced stochastic behavior of fast toroidal bananas in torsatrons and heliotrons

    NASA Astrophysics Data System (ADS)

    Smirnova, M. S.

    2001-05-01

    A theory of the helical ripple-induced stochastic behavior of fast toroidal bananas in torsatrons and heliotrons [K. Uo, J. Phys. Soc. Jpn. 16, 1380 (1961)] is developed. It is supplemented by an analysis of the structure of the secondary magnetic wells along field lines. Conditions, under which these wells are suppressed in torsatrons-heliotrons by poloidally modulated helical field ripple, are found. It is shown that inside the secondary magnetic well-free region, favorable conditions exist for a transition of fast toroidal bananas to stochastic trajectories. The analytical estimation for the value of an additional radial jump of a banana particle near its turning point, induced by the helical field ripple effect, is derived. It is found to be similar to the corresponding banana radial jump in a tokamak with the toroidal field ripple. Critical values of the helical field ripple dangerous from the viewpoint of a banana transition to stochastic behavior are estimated.

  16. A deterministic and stochastic model for the system dynamics of tumor-immune responses to chemotherapy

    NASA Astrophysics Data System (ADS)

    Liu, Xiangdong; Li, Qingze; Pan, Jianxin

    2018-06-01

    Modern medical studies show that chemotherapy can help most cancer patients, especially those diagnosed early, to stabilize their disease conditions from months to years, which means the population of tumor cells remains nearly unchanged for quite a long time after fighting against the immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed to characterize the dynamical change of tumor cells and immune cells in this paper. The basic dynamical properties, such as boundedness, existence and stability of equilibrium points, are investigated in the deterministic model. Extended stochastic models include a stochastic differential equation (SDE) model and a continuous-time Markov chain (CTMC) model, which account for the variability in cellular reproduction, growth and death, interspecific competitions, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, which confirm the obtained theoretical results.
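A CTMC extinction-probability estimate of this kind can be illustrated on the simplest case, a linear birth-death chain, where the answer (d/b)^n0 is known in closed form. The rates and population cap below are illustrative, not the paper's tumor model; only the embedded jump chain is simulated, since the exponential event times do not affect whether extinction occurs.

```python
import random

def gillespie_extinct(b, d, n0, n_cap=200, seed=0):
    """One run of a linear birth-death chain (birth rate b*n, death rate d*n).
    Returns True if the population hits 0 before reaching n_cap (proxy for
    escape to large size, from which extinction is negligibly likely)."""
    rng = random.Random(seed)
    n = n0
    while 0 < n < n_cap:
        # next event of the embedded chain: birth with prob b/(b+d), else death
        if rng.random() < b / (b + d):
            n += 1
        else:
            n -= 1
    return n == 0

b, d, n0 = 1.0, 0.8, 3
trials = 2000
p_hat = sum(gillespie_extinct(b, d, n0, seed=k) for k in range(trials)) / trials
p_theory = (d / b) ** n0   # extinction probability of a supercritical chain
```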

  17. How a small noise generates large-amplitude oscillations of volcanic plug and provides high seismicity

    NASA Astrophysics Data System (ADS)

    Alexandrov, Dmitri V.; Bashkirtseva, Irina A.; Ryashko, Lev B.

    2015-04-01

    The non-linear behavior of a dynamic model of the magma-plug system under the action of an N-shaped friction force and stochastic disturbances is studied. It is shown that the deterministic dynamics essentially depends on the mutual arrangement of an equilibrium point and the friction force branches. Variations of this arrangement imply bifurcations, birth and disappearance of stable limit cycles, changes of the stability of equilibria, and system transformations between mono- and bistable regimes. A slope of the right increasing branch of the friction function is responsible for the formation of such regimes. In a bistable zone, the noise generates transitions between small and large amplitude stochastic oscillations. In a monostable zone with a single stable equilibrium, a new dynamic phenomenon of noise-induced generation of large amplitude stochastic oscillations in the plug rate and pressure is revealed. A beat-type dynamics of the plug displacement under the influence of stochastic forcing is studied as well.

  18. Inference from clustering with application to gene-expression microarrays.

    PubMed

    Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M

    2002-01-01

    There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
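The evaluation loop described above (generate points as process means plus independent noise, cluster, count errors under the best label permutation) can be sketched with a minimal hand-rolled 2-means. The means, noise level, and deterministic initialisation below are illustrative assumptions, not the toolbox's implementation.

```python
import random

def kmeans2(points, iters=20):
    """Minimal 2-means (Lloyd's algorithm) with a deterministic initialisation:
    centroids start at the points with smallest and largest coordinate sum."""
    c0 = min(points, key=lambda p: p[0] + p[1])
    c1 = max(points, key=lambda p: p[0] + p[1])
    cents = [list(c0), list(c1)]
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in cents]
            labels[i] = 0 if d[0] <= d[1] else 1
        for j in (0, 1):
            mem = [p for p, l in zip(points, labels) if l == j]
            if mem:
                cents[j] = [sum(v) / len(mem) for v in zip(*mem)]
    return labels

# two generating processes: mean plus independent Gaussian noise
rng = random.Random(3)
pts, truth = [], []
for mean, lbl in (((-2.0, -2.0), 0), ((2.0, 2.0), 1)):
    for _ in range(100):
        pts.append((rng.gauss(mean[0], 0.5), rng.gauss(mean[1], 0.5)))
        truth.append(lbl)
labels = kmeans2(pts)
# clustering error: mismatch rate under the better of the two label permutations
err = min(sum(l != t for l, t in zip(labels, truth)),
          sum(l == t for l, t in zip(labels, truth))) / len(pts)
```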

  19. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
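The multiplicative SPPT idea, a perturbed tendency T' = (1 + r)·T with a temporally correlated, bounded random pattern r, can be sketched for a scalar tendency as follows; the AR(1) parameters and clipping bounds here are illustrative, not ECMWF's operational settings.

```python
import random

def sppt_perturb(tendency, steps, seed=0, sigma=0.3, phi=0.95):
    """Multiplicatively perturb a scalar parametrised tendency with an AR(1)
    random pattern r, clipped to [-1, 1], as in T' = (1 + r) * T."""
    rng = random.Random(seed)
    r, out = 0.0, []
    s = sigma * (1.0 - phi * phi) ** 0.5   # keeps the stationary std of r at sigma
    for _ in range(steps):
        r = phi * r + rng.gauss(0.0, s)
        rc = max(-1.0, min(1.0, r))
        out.append((1.0 + rc) * tendency)
    return out

perturbed = sppt_perturb(2.0, 50_000)
m = sum(perturbed) / len(perturbed)   # unbiased on average: E[T'] ~ T
```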

  20. A new stochastic model considering satellite clock interpolation errors in precise point positioning

    NASA Astrophysics Data System (ADS)

    Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong

    2018-03-01

    Precise clock products are typically interpolated based on the sampling interval of the observational data when they are used in precise point positioning. However, due to the occurrence of white noise in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and such noise will affect the resolution of the positioning results. In this paper, which is based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was constructed that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 stations worldwide from the IGS showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were respectively shortened by 4.8% and 4.0% when the IGR and IGS 300-s-interval clock products were used and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively improved.

  1. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    NASA Astrophysics Data System (ADS)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-09-01

    Textile composites are composed of a complex 3D architecture. To assess the durability of such engineering structures, the failure mechanisms must be highlighted. Examinations of the degradation have been carried out by tomography. The present work addresses a numerical damage model dedicated to the simulation of crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and combined tension and bending were considered. Based on an erosion procedure of broken elements, the failure mechanisms have been modelled on 3D periodic cells by finite element calculations. The breakage of one element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure over all the integration points in the meshes of woven composites. A stochastic approach was applied to a simple representative elementary periodic cell. The distribution of the Weibull stress at failure was assigned to the integration points using a Monte Carlo simulation. It was shown that this stochastic approach allowed more realistic failure simulations, avoiding the idealised symmetry due to the deterministic modelling. In particular, the stochastic simulations have shown variations of the stress and strain at failure as well as of the failure modes of the yarn.
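Assigning Weibull-distributed failure stresses to integration points by Monte Carlo amounts to inverting the Weibull CDF. The sketch below uses illustrative scale and shape parameters (not the paper's material data) and checks the sample mean against the closed-form Weibull mean σ₀·Γ(1 + 1/m).

```python
import math, random

def weibull_failure_stresses(n_points, sigma0, m, seed=0):
    """Draw one Weibull failure stress per integration point by inverting the
    CDF: sigma = sigma0 * (-ln(1 - U))**(1/m), U uniform on [0, 1)."""
    rng = random.Random(seed)
    return [sigma0 * (-math.log(1.0 - rng.random())) ** (1.0 / m)
            for _ in range(n_points)]

sig = weibull_failure_stresses(50_000, sigma0=100.0, m=10.0)
mean_sig = sum(sig) / len(sig)
mean_theory = 100.0 * math.gamma(1.0 + 1.0 / 10.0)   # closed-form Weibull mean
weakest = min(sig)   # weakest-link candidate for the first element to break
```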

  2. Two-strain competition in quasineutral stochastic disease dynamics.

    PubMed

    Kogan, Oleg; Khasin, Michael; Meerson, Baruch; Schneider, David; Myers, Christopher R

    2014-10-01

    We develop a perturbation method for studying quasineutral competition in a broad class of stochastic competition models and apply it to the analysis of fixation of competing strains in two epidemic models. The first model is a two-strain generalization of the stochastic susceptible-infected-susceptible (SIS) model. Here we extend previous results due to Parsons and Quince [Theor. Popul. Biol. 72, 468 (2007)], Parsons et al. [Theor. Popul. Biol. 74, 302 (2008)], and Lin, Kim, and Doering [J. Stat. Phys. 148, 646 (2012)]. The second model, a two-strain generalization of the stochastic susceptible-infected-recovered (SIR) model with population turnover, has not been studied previously. In each of the two models, when the basic reproduction numbers of the two strains are identical, a system with an infinite population size approaches a point on the deterministic coexistence line (CL): a straight line of fixed points in the phase space of subpopulation sizes. Shot noise drives one of the strain populations to fixation, and the other to extinction, on a time scale proportional to the total population size. Our perturbation method explicitly tracks the dynamics of the probability distribution of the subpopulations in the vicinity of the CL. We argue that, whereas the slow strain has a competitive advantage for mathematically "typical" initial conditions, it is the fast strain that is more likely to win in the important situation when a few infectives of both strains are introduced into a susceptible population.

  3. Effects of shear on the magnetic footprint and stochastic layer in double-null divertor tokamak

    NASA Astrophysics Data System (ADS)

    Farhat, Hamidullah; Punjabi, Alkesh; Ali, Halima

    2006-10-01

    We have developed a new area-preserving map, called the Adjustable Shear Map, to calculate the effects of shear on the magnetic footprint and stochastic layer in a double-null divertor tokamak. The map is given by the equations x_{n+1} = x_n - k y_n [(1 - y_n^2)(1 + s y_n) + s x_{n+1}^2], y_{n+1} = y_n + k x_{n+1} [1 + s(x_{n+1}^2 + y_n^2)], where k is the map parameter and s is the shear parameter. The O-point of the map is (0, 0), and the X-points are (0, 1) and (0, -1). For s=0, k=0.6, the last good surface is y=0.9918 with q ˜3. Here we report preliminary results on the effects of shear on the magnetic footprint and the stochastic layer as the shear parameter s is varied from 0 to -1, using the method of maps [1-4]. This work is done under DOE grant number DE-FG02-01ER54624. 1. A. Punjabi, A. Boozer, and A. Verma, Phys. Rev. Lett. 69, 3322 (1992). 2. H. Ali, A. Punjabi, and A. Boozer, Phys. Plasmas 11, 4527 (2004). 3. A. Punjabi, H. Ali, and A. Boozer, Phys. Plasmas 10, 3992 (2003). 4. A. Punjabi, H. Ali, and A. Boozer, Phys. Plasmas 4, 337 (1997).
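Because x_{n+1} appears on both sides of its own update, iterating a map of this form requires an implicit solve; a simple fixed-point iteration suffices for moderate k and s. The sketch below is an illustrative implementation of the quoted equations, and checks that s = 0 reduces to the explicit simple-map step.

```python
def shear_map_step(x, y, k=0.6, s=0.0, iters=50):
    """One step of the Adjustable Shear Map. x_{n+1} appears inside its own
    update, so it is obtained by fixed-point iteration starting from x_n."""
    xn = x
    for _ in range(iters):
        xn = x - k * y * ((1.0 - y * y) * (1.0 + s * y) + s * xn * xn)
    yn = y + k * xn * (1.0 + s * (xn * xn + y * y))
    return xn, yn

# s = 0 reduces to the simple map: x1 = x0 - k*y0*(1 - y0^2), y1 = y0 + k*x1
x1, y1 = shear_map_step(0.1, 0.5, k=0.6, s=0.0)
# with shear, the implicit equation is solved self-consistently
xs, ys = shear_map_step(0.1, 0.5, k=0.6, s=-0.5)
```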

  4. Extracting features of Gaussian self-similar stochastic processes via the Bandt-Pompe approach.

    PubMed

    Rosso, O A; Zunino, L; Pérez, D G; Figliola, A; Larrondo, H A; Garavaglia, M; Martín, M T; Plastino, A

    2007-12-01

    By recourse to appropriate information theory quantifiers (normalized Shannon entropy and Martín-Plastino-Rosso intensive statistical complexity measure), we revisit the characterization of Gaussian self-similar stochastic processes from a Bandt-Pompe viewpoint. We show that the ensuing approach exhibits considerable advantages with respect to other treatments. In particular, clear quantifiers gaps are found in the transition between the continuous processes and their associated noises.
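The Bandt-Pompe construction itself is compact: count the ordinal patterns of length m along the series and normalise the Shannon entropy of their distribution by log m!. The sketch below uses m = 3 and checks the two limiting cases (a monotone series and white noise); the embedding parameters are illustrative.

```python
import math, random

def permutation_entropy(x, m=3, tau=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series:
    0 for a fully predictable series, ~1 for white noise."""
    n = len(x) - (m - 1) * tau
    counts = {}
    for i in range(n):
        # ordinal pattern: the permutation that sorts the embedded vector
        pattern = tuple(sorted(range(m), key=lambda j: x[i + j * tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(m))

rng = random.Random(0)
h_trend = permutation_entropy(list(range(500)))            # fully predictable
h_noise = permutation_entropy([rng.random() for _ in range(5000)])
```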

  5. Boosting Bayesian parameter inference of stochastic differential equation models with methods from statistical physics

    NASA Astrophysics Data System (ADS)

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Measured time-series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to a full-fledged Bayesian parameter inference. For concreteness we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that can be described by a linear reservoir, at the scale of observation. All the neglected processes are assumed to happen at much shorter time-scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail, for models of this kind, because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. The use of Kalman filters is illegitimate due to the non-linearity of the model. Particle filters could be used but become increasingly inefficient with growing number of data points. Hamiltonian Monte Carlo algorithms allow us to translate this inference problem to the problem of simulating the dynamics of a statistical mechanics system and give us access to the most sophisticated methods that have been developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automated differentiation algorithms, allow us to perform a full-fledged Bayesian inference, for a large class of SDE models, in a highly efficient and largely automated manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
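The toy SDE model described here, a linear reservoir whose noise scales linearly with the stored volume, can be simulated directly by Euler-Maruyama. The sketch below uses illustrative parameter values and checks only that the ensemble mean relaxes to the deterministic steady state inflow·k; it is not the authors' inference machinery.

```python
import math, random

def simulate_reservoir(inflow, k, sigma, t_end=20.0, dt=0.01, seed=0):
    """Euler-Maruyama path of dV = (inflow - V/k) dt + sigma * V dW:
    a linear reservoir with multiplicative (state-scaled) white noise."""
    rng = random.Random(seed)
    v = inflow * k                     # start at the deterministic steady state
    sqdt = math.sqrt(dt)
    path = [v]
    for _ in range(int(t_end / dt)):
        v += (inflow - v / k) * dt + sigma * v * rng.gauss(0.0, sqdt)
        path.append(v)
    return path

# small ensemble: the mean volume should stay near inflow * k
finals = [simulate_reservoir(1.0, 1.0, 0.1, seed=s)[-1] for s in range(200)]
mean_final = sum(finals) / len(finals)
```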

  6. The Social Cost of Stochastic and Irreversible Climate Change

    NASA Astrophysics Data System (ADS)

    Cai, Y.; Judd, K. L.; Lontzek, T.

    2013-12-01

    Many scientists are worried about climate change triggering abrupt and irreversible events leading to significant and long-lasting damages. For example, a rapid release of methane from permafrost may lead to amplified global warming, and global warming may increase the frequency and severity of heavy rainfall or typhoons, destroying large cities and killing numerous people. Some elements of the climate system which might exhibit such a triggering effect are called tipping elements. There is great uncertainty about the impact of anthropogenic carbon and tipping elements on future economic wellbeing. Any rational policy choice must consider the great uncertainty about the magnitude and timing of global warming's impact on economic productivity. While the likelihood of tipping points may be a function of contemporaneous temperature, their effects are long lasting and might be independent of future temperatures. It is assumed that some of these tipping points might occur even in this century, but also that their duration and post-tipping impact are uncertain. A faithful representation of the possibility of tipping points for the calculation of the social cost of carbon would require a fully stochastic formulation of irreversibility, accounting for the deep layer of uncertainties regarding the duration of the tipping process and also its economic impact. We use DSICE, a DSGE extension of the DICE2007 model of William Nordhaus, which incorporates beliefs about the uncertain economic impact of possible climate tipping events and uses empirically plausible parameterizations of Epstein-Zin preferences to represent attitudes towards risk. We find that the uncertainty associated with anthropogenic climate change implies carbon taxes much higher than those implied by deterministic models. This analysis indicates that the absence of uncertainty in DICE2007 and similar IAM models may result in substantial understatement of the potential benefits of policies to reduce GHG emissions.

  7. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    PubMed

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  8. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    PubMed Central

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages—which provide a larger spatiotemporal scale relative to within stage analyses—revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended—and experimentally testable—conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems. PMID:25733885

  9. Non-linear dynamic characteristics and optimal control of giant magnetostrictive film subjected to in-plane stochastic excitation

    NASA Astrophysics Data System (ADS)

    Zhu, Z. W.; Zhang, W. D.; Xu, J.

    2014-03-01

    The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential items were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed in the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability improved through stochastic optimal control. Finally, the theoretical and numerical results were proved by experiments. The results are helpful in the engineering applications of GMF.

  10. Simple map in action-angle coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerwin, Olivia; Punjabi, Alkesh; Ali, Halima

    A simple map [A. Punjabi, A. Verma, and A. Boozer, Phys. Rev. Lett. 69, 3322 (1992)] is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140 (2007)]. Here, action-angle coordinates, the safety factor, and the equilibrium generating function for the simple map are calculated analytically. The simple map in action-angle coordinates is derived from canonical transformations. This map cannot be integrated across the separatrix surface because of the singularity in the safety factor there. The stochastic broadening of the ideal separatrix surface in action-angle representation is calculated by adding a perturbation to the simple map equilibrium generating function. This perturbation represents the spatial noise and field errors typical of the DIII-D [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)] tokamak. The stationary Fourier modes of the perturbation have poloidal and toroidal mode numbers (m,n) = {(3,1),(4,1),(6,2),(7,2),(8,2),(9,3),(10,3),(11,3)} with amplitude δ = 0.8×10-5. Near the X-point, about 0.12% of toroidal magnetic flux inside the separatrix, and about 0.06% of the poloidal flux inside the separatrix is lost. When the distance from the O-point to the X-point is 1 m, the width of the stochastic layer near the X-point is about 1.4 cm. The average value of the action on the last good surface is 0.19072, compared to the action value of 3/5π on the separatrix. The average width of the stochastic layer in action coordinate is 2.7×10-4, while the average area of the stochastic layer in action-angle phase space is 1.69017×10-3. On average, about 0.14% of action or toroidal flux inside the ideal separatrix is lost due to broadening. Roughly five times more toroidal flux is lost in the simple map than in DIII-D for the same perturbation [A. Punjabi, H. Ali, A. Boozer, and T. Evans, Bull. Amer. Phys. Soc. 52, 124 (2007)].

  11. Simple map in action-angle coordinates

    NASA Astrophysics Data System (ADS)

    Kerwin, Olivia; Punjabi, Alkesh; Ali, Halima

    2008-07-01

    A simple map [A. Punjabi, A. Verma, and A. Boozer, Phys. Rev. Lett. 69, 3322 (1992)] is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140 (2007)]. Here, action-angle coordinates, the safety factor, and the equilibrium generating function for the simple map are calculated analytically. The simple map in action-angle coordinates is derived from canonical transformations. This map cannot be integrated across the separatrix surface because of the singularity in the safety factor there. The stochastic broadening of the ideal separatrix surface in action-angle representation is calculated by adding a perturbation to the simple map equilibrium generating function. This perturbation represents the spatial noise and field errors typical of the DIII-D [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)] tokamak. The stationary Fourier modes of the perturbation have poloidal and toroidal mode numbers (m,n) = {(3,1),(4,1),(6,2),(7,2),(8,2),(9,3),(10,3),(11,3)} with amplitude δ = 0.8×10-5. Near the X-point, about 0.12% of toroidal magnetic flux inside the separatrix, and about 0.06% of the poloidal flux inside the separatrix is lost. When the distance from the O-point to the X-point is 1 m, the width of stochastic layer near the X-point is about 1.4 cm. The average value of the action on the last good surface is 0.19072 compared to the action value of 3/5π on the separatrix. The average width of stochastic layer in action coordinate is 2.7×10-4, while the average area of the stochastic layer in action-angle phase space is 1.69017×10-3. On average, about 0.14% of action or toroidal flux inside the ideal separatrix is lost due to broadening. Roughly five times more toroidal flux is lost in the simple map than in DIII-D for the same perturbation [A. Punjabi, H. Ali, A. Boozer, and T. Evans, Bull. Amer. Phys. Soc. 52, 124 (2007)].

  12. Information transfer with rate-modulated Poisson processes: a simple model for nonstationary stochastic resonance.

    PubMed

    Goychuk, I

    2001-08-01

    Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with the signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with periodic input signal) to the arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between both measures notwithstanding their apparent similarity in the limit of weak signals.
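
The rate-modulated Poisson process at the heart of this model is straightforward to sample. The sketch below generates a spike train whose rate is modulated by a sinusoidal signal via Lewis-Shedler thinning; the rate function and all parameters are illustrative assumptions, not taken from the paper.

```python
import math
import random

def thin_poisson(rate, rate_max, t_end, rng):
    """Sample an inhomogeneous Poisson process on [0, t_end) by thinning a
    homogeneous process of intensity rate_max (Lewis-Shedler algorithm)."""
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_max)          # candidate event time
        if t >= t_end:
            return spikes
        if rng.random() < rate(t) / rate_max:   # accept with prob rate(t)/rate_max
            spikes.append(t)

# Illustrative signal-modulated spiking rate (Hz); not the paper's signal.
base, amp, freq, T = 20.0, 10.0, 0.5, 100.0
rate = lambda t: base + amp * math.sin(2.0 * math.pi * freq * t)

rng = random.Random(1)
spikes = thin_poisson(rate, base + amp, T, rng)
mean_rate = len(spikes) / T   # time-averaged rate; close to the 20 Hz baseline
```

Thinning is exact whenever the modulated rate stays below the chosen bound, which holds here because the sinusoid never exceeds base + amp.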

  13. Refractory pulse counting processes in stochastic neural computers.

    PubMed

    McNeill, Dean K; Card, Howard C

    2005-03-01

    This letter quantitatively investigates the effect of a temporary refractory period, or dead time, on the ability of a stochastic Bernoulli processor to record subsequent pulse events following the arrival of a pulse. These effects can arise either in the input detectors of a stochastic neural network or in subsequent processing. A transient period, during which the system reaches equilibrium, is observed; its duration increases with both the dead time and the Bernoulli probability of the dead-time-free system. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.

  14. Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects

    PubMed Central

    Baumann, Hendrik; Sandmann, Werner

    2016-01-01

    Stochastic epidemics in open populations of variable size are considered where, due to immigration and demographic effects, the epidemic never permanently dies out. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computation of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, demonstrating that this approach is particularly well suited for studying the impact of varying rates of immigration, births, deaths, infection, recovery from infection, and loss of immunity. PMID:27010993
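
The full model is a multi-dimensional level-dependent quasi-birth-and-death process, but the core idea can be illustrated on a one-dimensional caricature: a finite birth-death chain whose equilibrium follows from the detailed-balance product form. The SIS-style rates with immigration below are illustrative assumptions, not the paper's parameters.

```python
def bd_equilibrium(birth, death, n_max):
    """Equilibrium of a finite birth-death chain from the detailed-balance
    product form pi[n+1] = pi[n] * birth(n) / death(n+1), then normalize."""
    w = [1.0]
    for n in range(n_max):
        w.append(w[-1] * birth(n) / death(n + 1))
    z = sum(w)
    return [x / z for x in w]

# Illustrative SIS-style rates with immigration (assumed, not the paper's):
alpha, beta, mu, N = 0.5, 0.1, 1.0, 50             # immigration, infection, recovery
birth = lambda n: alpha + beta * n * (N - n) / N   # rate of new infections
death = lambda n: mu * n                           # rate of recoveries
pi = bd_equilibrium(birth, death, N)
mean_infected = sum(n * p for n, p in enumerate(pi))
```

The immigration term alpha keeps birth(0) positive, so state 0 is not absorbing and a genuine equilibrium exists, mirroring the role of immigration in the paper's model.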

  15. Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects.

    PubMed

    Baumann, Hendrik; Sandmann, Werner

    2016-01-01

    Stochastic epidemics in open populations of variable size are considered where, due to immigration and demographic effects, the epidemic never permanently dies out. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computation of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, demonstrating that this approach is particularly well suited for studying the impact of varying rates of immigration, births, deaths, infection, recovery from infection, and loss of immunity.

  16. Survey and Introduction to Modern Martingale Theory (Stopping Times, Semi-Martingales and Stochastic Integration)

    DTIC Science & Technology

    1986-10-01

    2.3.12. Remark: Let X, Y be the processes given in the example after Lemma 2.3.3. Take the same probability space as in that example and... Fix n ≥ 1; then if s is a point of increase of a (that is, if Δa(s) = 1), then... Absolute Continuity and Singularity of Locally Absolutely Continuous Probability Distributions. I. Math. USSR Sbornik, Vol. 35, No. 5, 631-680. Kabanov

  17. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes.

    PubMed

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.
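
The partitioning property invoked here says that independently routing each event of a Poisson stream to one of two branches yields two independent Poisson streams with the split rates. A minimal simulation check, with arbitrary parameters, verifies the defining Poisson signature (variance approximately equal to mean) on one branch:

```python
import random

def partition_poisson(lam, p, t_end, rng):
    """Route each event of a rate-lam Poisson stream to branch A with
    probability p, else branch B; return the two branch counts."""
    t, a, b = 0.0, 0, 0
    while True:
        t += rng.expovariate(lam)
        if t >= t_end:
            return a, b
        if rng.random() < p:
            a += 1
        else:
            b += 1

rng = random.Random(7)
lam, p, T, reps = 5.0, 0.3, 10.0, 2000
counts_a = [partition_poisson(lam, p, T, rng)[0] for _ in range(reps)]
mean_a = sum(counts_a) / reps                            # expect p*lam*T = 15
var_a = sum((c - mean_a) ** 2 for c in counts_a) / reps  # Poisson: var ~ mean
```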

  18. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes

    NASA Astrophysics Data System (ADS)

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V.

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.

  19. The subtle business of model reduction for stochastic chemical kinetics

    NASA Astrophysics Data System (ADS)

    Gillespie, Dan T.; Cao, Yang; Sanft, Kevin R.; Petzold, Linda R.

    2009-02-01

    This paper addresses the problem of simplifying chemical reaction networks by adroitly reducing the number of reaction channels and chemical species. The analysis adopts a discrete-stochastic point of view and focuses on the model reaction set S1⇌S2→S3, whose simplicity allows all the mathematics to be done exactly. The advantages and disadvantages of replacing this reaction set with a single S3-producing reaction are analyzed quantitatively using novel criteria for measuring simulation accuracy and simulation efficiency. It is shown that in all cases in which such a model reduction can be accomplished accurately and with a significant gain in simulation efficiency, a procedure called the slow-scale stochastic simulation algorithm provides a robust and theoretically transparent way of implementing the reduction.
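
The model reaction set can be simulated exactly with Gillespie's direct-method SSA. The sketch below uses illustrative rate constants, not the paper's numerical cases:

```python
import random

def ssa(x1, x2, x3, c1, c2, c3, t_end, rng):
    """Gillespie direct-method SSA for S1 <-> S2 -> S3, with rate constants
    c1 (S1->S2), c2 (S2->S1) and c3 (S2->S3)."""
    t = 0.0
    while True:
        a1, a2, a3 = c1 * x1, c2 * x2, c3 * x2   # propensities
        a0 = a1 + a2 + a3
        if a0 == 0.0:                            # fully absorbed into S3
            return x1, x2, x3
        t += rng.expovariate(a0)                 # time to next reaction
        if t >= t_end:
            return x1, x2, x3
        r = rng.random() * a0                    # pick which reaction fires
        if r < a1:
            x1, x2 = x1 - 1, x2 + 1
        elif r < a1 + a2:
            x1, x2 = x1 + 1, x2 - 1
        else:
            x2, x3 = x2 - 1, x3 + 1

rng = random.Random(3)
x1, x2, x3 = ssa(100, 0, 0, 2.0, 1.0, 0.1, 1000.0, rng)  # illustrative rates
```

The slow-scale reduction discussed in the paper would replace the fast S1⇌S2 exchange with its quasi-equilibrium average before applying the same machinery to the remaining slow reaction.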

  20. Stochastic sensitivity of a bistable energy model for visual perception

    NASA Astrophysics Data System (ADS)

    Pisarchik, Alexander N.; Bashkirtseva, Irina; Ryashko, Lev

    2017-01-01

    Modern trends in physiology, psychology and cognitive neuroscience suggest that noise is an essential component of brain functionality and self-organization. With adequate noise the brain, as a complex dynamical system, can easily access different ordered states and improve signal detection for decision-making by preventing deadlocks. Using a stochastic sensitivity function approach, we analyze how sensitive equilibrium points are to Gaussian noise in a bistable energy model often used for qualitative description of visual perception. The probability distribution of noise-induced transitions between two coexisting percepts is calculated for different noise intensities and degrees of system stability. Stochastic squeezing of the hysteresis range and its transition from positive (bistable regime) to negative (intermittency regime) are demonstrated as the noise intensity increases. The hysteresis is more sensitive to noise in systems with higher stability.

  1. The subtle business of model reduction for stochastic chemical kinetics.

    PubMed

    Gillespie, Dan T; Cao, Yang; Sanft, Kevin R; Petzold, Linda R

    2009-02-14

    This paper addresses the problem of simplifying chemical reaction networks by adroitly reducing the number of reaction channels and chemical species. The analysis adopts a discrete-stochastic point of view and focuses on the model reaction set S1⇌S2→S3, whose simplicity allows all the mathematics to be done exactly. The advantages and disadvantages of replacing this reaction set with a single S3-producing reaction are analyzed quantitatively using novel criteria for measuring simulation accuracy and simulation efficiency. It is shown that in all cases in which such a model reduction can be accomplished accurately and with a significant gain in simulation efficiency, a procedure called the slow-scale stochastic simulation algorithm provides a robust and theoretically transparent way of implementing the reduction.

  2. A Simplified Treatment of Brownian Motion and Stochastic Differential Equations Arising in Financial Mathematics

    ERIC Educational Resources Information Center

    Parlar, Mahmut

    2004-01-01

    Brownian motion is an important stochastic process used in modelling the random evolution of stock prices. In their 1973 seminal paper--which led to the awarding of the 1997 Nobel prize in Economic Sciences--Fischer Black and Myron Scholes assumed that the random stock price process is described (i.e., generated) by Brownian motion. Despite its…

  3. Mathematics Education. Selected Papers from the Conference on Stochastic Processes and Their Applications. (15th, Nagoya, Japan, July 2-5, 1985).

    ERIC Educational Resources Information Center

    Hida, Takeyuki; Shimizu, Akinobu

    This volume contains the papers and comments from the Workshop on Mathematics Education, a special session of the 15th Conference on Stochastic Processes and Their Applications, held in Nagoya, Japan, July 2-5, 1985. Topics covered include: (1) probability; (2) statistics; (3) deviation; (4) Japanese mathematics curriculum; (5) statistical…

  4. Stochastic Models for Precipitable Water in Convection

    NASA Astrophysics Data System (ADS)

    Leung, Kimberly

    Atmospheric precipitable water vapor (PWV) is the amount of water vapor in the atmosphere within a vertical column of unit cross-sectional area and is a critically important parameter of precipitation processes. However, accurate high-frequency and long-term observations of PWV in the sky were impossible until the availability of modern instruments such as radar. The United States Department of Energy (DOE)'s Atmospheric Radiation Measurement (ARM) Program facility has made the first systematic, high-resolution observations of PWV at Darwin, Australia, since 2002. At a resolution of 20 seconds, this time series allowed us to examine the volatility of PWV, including fractal behavior with dimension equal to 1.9, higher than the Brownian motion dimension of 1.5. Such strong fractal behavior calls for stochastic differential equation modeling in an attempt to address some of the difficulties of convective parameterization in various kinds of climate models, ranging from general circulation models (GCM) to weather research forecasting (WRF) models. These important high-resolution observations capture the fractal behavior of PWV and enable stochastic exploration into the next generation of climate models, which considers scales from micrometers to thousands of kilometers. As a first step, this thesis explores a simple stochastic differential equation model of water mass balance for PWV and assesses the accuracy, robustness, and sensitivity of the stochastic model. A 1000-day simulation allows for the determination of the best-fitting 25-day period as compared to data from the TWP-ICE field campaign conducted out of Darwin, Australia in early 2006. The observed data and this portion of the simulation had a correlation coefficient of 0.6513 and followed similar statistics and low-resolution temporal trends.
Building on the point model foundation, a similar algorithm was applied to the National Center for Atmospheric Research (NCAR)'s existing single-column model as a test-of-concept for eventual inclusion in a general circulation model. The stochastic scheme was designed to be coupled with the deterministic single-column simulation by modifying results of the existing convective scheme (Zhang-McFarlane) and was able to produce a 20-second resolution time series that effectively simulated observed PWV, as measured by correlation coefficient (0.5510), fractal dimension (1.9), statistics, and visual examination of temporal trends. Results indicate that simulation of a highly volatile time series of observed PWV is certainly achievable and has potential to improve prediction capabilities in climate modeling. Further, this study demonstrates the feasibility of adding a mathematics- and statistics-based stochastic scheme to an existing deterministic parameterization to simulate observed fractal behavior.
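
As a rough illustration of the kind of point model described above, the sketch below integrates a toy mean-reverting water-balance SDE with the Euler-Maruyama scheme at the 20-second resolution mentioned. The drift, equilibrium value and noise level are invented for illustration; they are not the thesis's calibrated model.

```python
import math
import random

def euler_maruyama(w0, mu, w_eq, sigma, dt, n_steps, rng):
    """Euler-Maruyama integration of the toy water-balance SDE
    dW = mu*(w_eq - W) dt + sigma dB (parameters invented for illustration)."""
    w, path = w0, [w0]
    for _ in range(n_steps):
        dB = rng.gauss(0.0, math.sqrt(dt))      # Brownian increment
        w += mu * (w_eq - w) * dt + sigma * dB
        path.append(w)
    return path

rng = random.Random(0)
dt = 20.0 / 86400.0   # 20-second step, expressed in days
# Assumed: relaxation rate 2/day, equilibrium 50 mm, noise 5 mm/sqrt(day)
path = euler_maruyama(45.0, 2.0, 50.0, 5.0, dt, 5000, rng)
```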

  5. Quantum stochastic walks on networks for decision-making.

    PubMed

    Martínez-Martínez, Ismael; Sánchez-Burillo, Eduardo

    2016-03-31

    Recent experiments report violations of the classical law of total probability and incompatibility of certain mental representations when humans process and react to information. Evidence shows promise of a more general quantum theory providing a better explanation of the dynamics and structure of real decision-making processes than classical probability theory. Inspired by this, we show how the behavioral choice-probabilities can arise as the unique stationary distribution of quantum stochastic walkers on the classical network defined from Luce's response probabilities. This work is relevant because (i) we provide a very general framework integrating the positive characteristics of both quantum and classical approaches previously in confrontation, and (ii) we define a cognitive network which can be used to bring other connectivist approaches to decision-making into the quantum stochastic realm. We model the decision-maker as an open system in contact with her surrounding environment, and the time-length of the decision-making process turns out also to be a measure of the degree of interplay between the unitary and irreversible dynamics. Implementing quantum coherence on classical networks may be a door to better integrate human-like reasoning biases in stochastic models for decision-making.

  6. Time varying moments, regime switch, and crisis warning: The birth-death process with changing transition probability

    NASA Astrophysics Data System (ADS)

    Tang, Yinan; Chen, Ping

    2014-06-01

    The sub-prime crisis in the U.S. reveals the limitations of diversification strategies based on mean-variance analysis. A regime switch and a turning point can be observed using a high-moment representation and time-dependent transition probabilities. Up-down price movements are induced by interactions among agents, which can be described by the birth-death (BD) process. Financial instability is made visible by dramatically increasing 3rd to 5th moments one quarter before and during the crisis. The suddenly rising high moments provide effective warning signals of a regime switch or a coming crisis. The critical condition of a market breakdown can be identified from nonlinear stochastic dynamics. The master equation approach of population dynamics provides a unified theory of calm and turbulent markets.
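
The warning signal described, rising higher moments ahead of a regime switch, can be illustrated on synthetic data: rolling central moments of a return series jump when the volatility regime changes. The two-regime Gaussian data below are an assumption for illustration, not the paper's U.S. market data.

```python
import random

def rolling_moment(x, window, order):
    """Rolling central moment E[(x - mean)^order] over a sliding window."""
    out = []
    for i in range(window, len(x) + 1):
        seg = x[i - window:i]
        m = sum(seg) / window
        out.append(sum((v - m) ** order for v in seg) / window)
    return out

rng = random.Random(42)
calm = [rng.gauss(0.0, 1.0) for _ in range(500)]    # calm regime
crisis = [rng.gauss(0.0, 4.0) for _ in range(500)]  # turbulent regime
returns = calm + crisis
m4 = rolling_moment(returns, 100, 4)
early, late = m4[0], m4[-1]   # 4th moment well before vs during the "crisis"
```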

  7. Renyi entropy measures of heart rate Gaussianity.

    PubMed

    Lake, Douglas E

    2006-01-01

    Sample entropy and approximate entropy are measures that have been successfully utilized to study the deterministic dynamics of heart rate (HR). A complementary stochastic point of view and a heuristic argument using the Central Limit Theorem suggest that the Gaussianity of HR is a complementary measure of the physiological complexity of the underlying signal transduction processes. Renyi entropy (or q-entropy) is a widely used measure of Gaussianity in many applications. Particularly important members of this family are differential (or Shannon) entropy (q = 1) and quadratic entropy (q = 2). We introduce the concepts of differential and conditional Renyi entropy rate and, in conjunction with Burg's theorem, develop a measure of the Gaussianity of a linear random process. Robust algorithms for estimating these quantities are presented along with estimates of their standard errors.
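
For a Gaussian, the differential Renyi q-entropy has a closed form, which gives a convenient benchmark for Gaussianity measures of the kind discussed. The sketch below compares the exact quadratic entropy (q = 2) with a Monte Carlo estimate that plugs in the known density; a real HR analysis would instead use an estimated density or the autoregressive route via Burg's theorem.

```python
import math
import random

def renyi_gaussian(sigma, q):
    """Differential Renyi q-entropy of N(0, sigma^2) in nats; the q -> 1
    limit is the differential (Shannon) entropy."""
    if q == 1:
        return 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)
    return math.log(sigma * math.sqrt(2.0 * math.pi)) + math.log(q) / (2.0 * (q - 1.0))

def quadratic_entropy_mc(samples, sigma):
    """Monte Carlo estimate of H_2 = -ln E[f(X)], X ~ f, here plugging in the
    known Gaussian density for f."""
    f = lambda x: math.exp(-x * x / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
    return -math.log(sum(f(x) for x in samples) / len(samples))

rng = random.Random(5)
sigma = 2.0
xs = [rng.gauss(0.0, sigma) for _ in range(20000)]
h1 = renyi_gaussian(sigma, 1)         # Shannon entropy
h2_exact = renyi_gaussian(sigma, 2)   # quadratic entropy = ln(2*sigma*sqrt(pi))
h2_est = quadratic_entropy_mc(xs, sigma)
```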

  8. Intrinsic Information Processing and Energy Dissipation in Stochastic Input-Output Dynamical Systems

    DTIC Science & Technology

    2015-07-09

    Crutchfield. Information Anatomy of Stochastic Equilibria, Entropy, (08 2014): 0. doi: 10.3390/e16094713 Virgil Griffith, Edwin Chong, Ryan James...Christopher Ellison, James Crutchfield. Intersection Information Based on Common Randomness, Entropy, (04 2014): 0. doi: 10.3390/e16041985 TOTAL: 5 Number...Learning Group Seminar, Complexity Sciences Center, UC Davis. Korana Burke and Greg Wimsatt (UCD), reviewed PRL "Measurement of Stochastic Entropy

  9. Approximation methods of European option pricing in multiscale stochastic volatility model

    NASA Astrophysics Data System (ADS)

    Ni, Ying; Canhanga, Betuel; Malyarenko, Anatoliy; Silvestrov, Sergei

    2017-01-01

    In the classical Black-Scholes model for financial option pricing, the asset price follows a geometric Brownian motion with constant volatility. Empirical findings such as the volatility smile/skew and fat-tailed asset return distributions have suggested that the constant volatility assumption might not be realistic. A general stochastic volatility model, e.g. the Heston model, the GARCH model or the SABR volatility model, in which the variance/volatility itself typically follows a mean-reverting stochastic process, has been shown to be superior in terms of capturing the empirical facts. However, in order to capture more features of the volatility smile, a two-factor stochastic volatility model of double Heston type is more useful, as shown in Christoffersen, Heston and Jacobs [12]. We consider one modified form of such two-factor volatility models in which the volatility has multiscale mean-reversion rates. Our model contains two mean-reverting volatility processes with a fast and a slow reverting rate, respectively. We consider the European option pricing problem under one type of the multiscale stochastic volatility model where the two volatility processes act as independent factors in the asset price process. The novelty in this paper is an approximating analytical solution using an asymptotic expansion method, which extends the authors' earlier research in Canhanga et al. [5, 6]. In addition we propose a numerical approximating solution using Monte-Carlo simulation. For completeness and for comparison we also implement the semi-analytical solution by Chiarella and Ziveyi [11] using the method of characteristics and Fourier and bivariate Laplace transforms.
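
A minimal version of the Monte-Carlo route mentioned above: Euler simulation of an asset whose variance is the sum of a fast and a slow mean-reverting factor, with all Brownian drivers independent. The parameters and the full-truncation scheme are illustrative assumptions, not the paper's calibrated setup.

```python
import math
import random

def mc_call_two_factor(s0, k, r, t, v1, v2, pars, n_paths, n_steps, rng):
    """Monte Carlo European call price when the variance is the sum of a fast
    and a slow mean-reverting factor (full-truncation Euler; independent
    Brownian drivers). All parameters are illustrative, not calibrated."""
    kf, ks, th1, th2, xi1, xi2 = pars
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        s, a, b = s0, v1, v2
        for _ in range(n_steps):
            v = max(a, 0.0) + max(b, 0.0)   # total variance, truncated at zero
            s *= math.exp((r - 0.5 * v) * dt + math.sqrt(v * dt) * rng.gauss(0.0, 1.0))
            a += kf * (th1 - max(a, 0.0)) * dt + xi1 * math.sqrt(max(a, 0.0) * dt) * rng.gauss(0.0, 1.0)
            b += ks * (th2 - max(b, 0.0)) * dt + xi2 * math.sqrt(max(b, 0.0) * dt) * rng.gauss(0.0, 1.0)
        total += max(s - k, 0.0)
    return math.exp(-r * t) * total / n_paths

rng = random.Random(11)
pars = (5.0, 0.5, 0.02, 0.02, 0.3, 0.1)   # fast/slow rates, levels, vol-of-vol
price = mc_call_two_factor(100.0, 100.0, 0.02, 1.0, 0.02, 0.02, pars, 2000, 50, rng)
```

With total variance near 0.04 (a 20% volatility) the at-the-money price should sit near its Black-Scholes counterpart; serious pricing would add correlation, calibration and variance reduction.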

  10. Modeling surface topography of state-of-the-art x-ray mirrors as a result of stochastic polishing process: recent developments

    NASA Astrophysics Data System (ADS)

    Yashchuk, Valeriy V.; Centers, Gary; Tyurin, Yuri N.; Tyurina, Anastasia

    2016-09-01

    Recently, an original method for the statistical modeling of surface topography of state-of-the-art mirrors for use in x-ray optical systems at light source facilities and in astronomical telescopes [Opt. Eng. 51(4), 046501, 2012; ibid. 53(8), 084102 (2014); and ibid. 55(7), 074106 (2016)] has been developed. In this modeling, the mirror surface topography is considered to be the result of a stationary uniform stochastic polishing process, and the best-fit time-invariant linear filter (TILF) that optimally parameterizes the polishing process with a limited number of parameters is determined. The TILF model allows the surface slope profile of an optic with a newly desired specification to be reliably forecast before fabrication. With the forecast data, representative numerical evaluations of the expected performance of prospective mirrors in optical systems under development become possible [Opt. Eng., 54(2), 025108 (2015)]. Here, we suggest and demonstrate an analytical approach for accounting for the imperfections of the metrology instruments used, which are described by the instrumental point spread function, in the TILF modeling. The efficacy of the approach is demonstrated with numerical simulations for correction of measurements performed with an autocollimator-based surface slope profiler. Besides solving this major metrological problem, the results of the present work open an avenue for developing analytical and computational tools for stitching, in the statistical domain, data obtained using multiple metrology instruments measuring significantly different bandwidths of spatial wavelengths.

  11. Framework based on stochastic L-Systems for modeling IP traffic with multifractal behavior

    NASA Astrophysics Data System (ADS)

    Salvador, Paulo S.; Nogueira, Antonio; Valadas, Rui

    2003-08-01

    In previous work we introduced a multifractal traffic model based on so-called stochastic L-Systems, which were introduced by the biologist A. Lindenmayer as a method to model plant growth. L-Systems are string rewriting techniques, characterized by an alphabet, an axiom (initial string) and a set of production rules. In this paper, we propose a novel traffic model, and an associated parameter fitting procedure, which jointly describes the packet arrival and the packet size processes. The packet arrival process is modeled through an L-System, where the alphabet elements are packet arrival rates. The packet size process is modeled through a set of discrete distributions (of packet sizes), one for each arrival rate. In this way the model is able to capture correlations between arrivals and sizes. We applied the model to measured traffic data: the well-known pOct Bellcore trace, a trace of aggregate WAN traffic and two traces of specific applications (Kazaa and Operation Flashing Point). We assess the multifractality of these traces using Linear Multiscale Diagrams. The suitability of the traffic model is evaluated by comparing the empirical and fitted probability mass and autocovariance functions; we also compare the packet loss ratio and average packet delay obtained with the measured traces and with traces generated from the fitted model. Our results show that our L-System based traffic model can achieve very good fitting performance in terms of first and second order statistics and queuing behavior.
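
A stochastic L-System of the kind described can be sketched in a few lines: each symbol (here an abstract arrival-rate level) is rewritten by a production chosen at random. The alphabet and rules below are invented for illustration and are not the fitted model of the paper.

```python
import random

def grow(axiom, rules, steps, rng):
    """Iterate a stochastic L-System: every symbol is rewritten by one of its
    productions, chosen with the listed probability."""
    s = list(axiom)
    for _ in range(steps):
        out = []
        for sym in s:
            u, acc = rng.random(), 0.0
            for prob, rhs in rules[sym]:
                acc += prob
                if u < acc:
                    out.extend(rhs)
                    break
        s = out
    return s

# Invented alphabet of arrival-rate levels and production rules; each symbol
# expands into two, so the string doubles at every step (a cascade).
rules = {
    "L": [(0.5, ["L", "M"]), (0.5, ["M", "L"])],   # low rate
    "M": [(0.5, ["L", "H"]), (0.5, ["H", "M"])],   # medium rate
    "H": [(1.0, ["H", "M"])],                      # high rate
}
rng = random.Random(2)
seq = grow(["M"], rules, 8, rng)   # 2**8 = 256 rate symbols
```

The multiplicative cascade structure of such rewritings is what makes the generated rate sequences a natural candidate for multifractal traffic.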

  12. The response analysis of fractional-order stochastic system via generalized cell mapping method.

    PubMed

    Wang, Liang; Xue, Lili; Sun, Chunyan; Yue, Xiaole; Xu, Wei

    2018-01-01

    This paper is concerned with the response of a fractional-order stochastic system. The short memory principle is introduced to ensure that the response of the system is a Markov process. The generalized cell mapping method is applied to display the global dynamics of the noise-free system, such as attractors, basins of attraction, basin boundaries, saddles, and invariant manifolds. The stochastic generalized cell mapping method is employed to obtain the evolutionary process of the probability density functions of the response. The fractional-order ϕ6 oscillator and the fractional-order smooth and discontinuous oscillator are taken as examples to give the implementations of our strategies. Studies have shown that the evolutionary direction of the probability density function of the fractional-order stochastic system is consistent with the unstable manifold. The effectiveness of the method is confirmed using Monte Carlo results.

  13. A DG approach to the numerical solution of the Stein-Stein stochastic volatility option pricing model

    NASA Astrophysics Data System (ADS)

    Hozman, J.; Tichý, T.

    2017-12-01

    Stochastic volatility models enable one to capture the real-world features of options better than the classical Black-Scholes treatment. Here we focus on pricing of European-style options under the Stein-Stein stochastic volatility model, where the option value depends on the time, on the price of the underlying asset and on the volatility as a function of a mean-reverting Ornstein-Uhlenbeck process. A standard mathematical approach to this model leads to a non-stationary second-order degenerate partial differential equation in two spatial variables completed by a system of boundary and terminal conditions. In order to improve the numerical valuation process for such a pricing equation, we propose a numerical technique based on the discontinuous Galerkin method and the Crank-Nicolson scheme. Finally, reference numerical experiments on real market data illustrate comprehensive empirical findings on options with stochastic volatility.
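
In the Stein-Stein model the volatility itself follows an Ornstein-Uhlenbeck process, which has a Gaussian transition law and can therefore be simulated exactly rather than by Euler stepping. A sketch with assumed, uncalibrated parameters:

```python
import math
import random

def ou_step(y, kappa, theta, beta, dt, rng):
    """One exact transition of the Ornstein-Uhlenbeck process
    dY = kappa*(theta - Y) dt + beta dB, using its Gaussian transition law."""
    mean = theta + (y - theta) * math.exp(-kappa * dt)
    var = beta ** 2 * (1.0 - math.exp(-2.0 * kappa * dt)) / (2.0 * kappa)
    return rng.gauss(mean, math.sqrt(var))

# Assumed (uncalibrated) parameters for a volatility path, daily steps:
kappa, theta, beta, dt = 4.0, 0.2, 0.1, 1.0 / 252.0
rng = random.Random(9)
y, path = 0.3, [0.3]
for _ in range(2520):              # ten years of daily steps
    y = ou_step(y, kappa, theta, beta, dt, rng)
    path.append(y)
mean_vol = sum(path) / len(path)   # relaxes toward theta = 0.2
```

Note that, unlike a variance process, the OU volatility can go negative; the Stein-Stein pricing PDE accounts for this through the squared-volatility dependence.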

  14. A large deviations principle for stochastic flows of viscous fluids

    NASA Astrophysics Data System (ADS)

    Cipriano, Fernanda; Costa, Tiago

    2018-04-01

    We study the well-posedness of a stochastic differential equation on the two-dimensional torus T2, driven by an infinite-dimensional Wiener process with drift in the Sobolev space L2(0,T; H1(T2)). The solution corresponds to a stochastic Lagrangian flow in the sense of DiPerna-Lions. By taking into account that the motion of a viscous incompressible fluid on the torus can be described through a suitable stochastic differential equation of the previous type, we study the inviscid limit. By establishing a large deviations principle, we show that, as the viscosity goes to zero, the Lagrangian stochastic Navier-Stokes flow approaches the deterministic Euler Lagrangian flow with an exponential rate function.

  15. The cardiorespiratory interaction: a nonlinear stochastic model and its synchronization properties

    NASA Astrophysics Data System (ADS)

    Bahraminasab, A.; Kenwright, D.; Stefanovska, A.; McClintock, P. V. E.

    2007-06-01

    We address the problem of interactions between the phases of the cardiac and respiratory oscillatory components. The coupling between these two quantities is investigated experimentally using the theory of stochastic Markovian processes. The so-called Markov analysis allows us to derive nonlinear stochastic equations for the reconstruction of the cardiorespiratory signals. The properties of these equations provide interesting new insights into the strength and direction of coupling, which enable us to divide the coupling into two parts: deterministic and stochastic. It is shown that the synchronization behaviors of the reconstructed signals are statistically identical to those of the original ones.
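
The Markov analysis referred to estimates drift and diffusion terms from data via conditional moments (Kramers-Moyal coefficients). The one-dimensional sketch below recovers the drift of a simulated Ornstein-Uhlenbeck process; the paper's bivariate phase analysis is more involved, but the estimator is the same idea.

```python
import math
import random

def km_drift(x, dt, bins, lo, hi):
    """First Kramers-Moyal coefficient D1(x) ~ E[X(t+dt) - X(t) | X(t)=x] / dt,
    estimated by binning the state axis (1-D sketch of the Markov analysis)."""
    width = (hi - lo) / bins
    num, cnt = [0.0] * bins, [0] * bins
    for a, b in zip(x[:-1], x[1:]):
        i = math.floor((a - lo) / width)
        if 0 <= i < bins:
            num[i] += b - a
            cnt[i] += 1
    centers = [lo + (i + 0.5) * width for i in range(bins)]
    drift = [n / (c * dt) if c > 0 else None for n, c in zip(num, cnt)]
    return centers, drift

# Synthetic data: Euler simulation of dX = -gamma*X dt + sqrt(2 D) dB, so the
# recovered drift should be approximately -gamma * x.
rng = random.Random(4)
gamma, D, dt = 1.0, 0.5, 0.01
x, xs = 0.0, []
for _ in range(200000):
    xs.append(x)
    x += -gamma * x * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
centers, drift = km_drift(xs, dt, 10, -1.5, 1.5)
```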

  16. A non-stochastic iterative computational method to model light propagation in turbid media

    NASA Astrophysics Data System (ADS)

    McIntyre, Thomas J.; Zemp, Roger J.

    2015-03-01

    Monte Carlo models are widely used to model light transport in turbid media; however, their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems, where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were performed and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error of less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
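The stochastic variations the authors avoid are inherent to Monte Carlo transport estimates. A toy 1-D photon-packet walk illustrates the baseline being compared against (this is not the NSPIRE method; optical coefficients, the weight cutoff, and the isotropic-scattering assumption are all illustrative):

```python
import numpy as np

def mc_fluence_1d(mu_a=0.1, mu_s=10.0, n_photons=2000, n_bins=50,
                  z_max=5.0, seed=1):
    """Toy 1-D Monte Carlo: photon packets take exponentially distributed
    steps, deposit the absorbed fraction of their weight at each
    interaction, and rescatter isotropically. Returns absorbed energy
    density per depth bin; repeated runs with different seeds would show
    the stochastic noise inherent to such estimates."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    dz = z_max / n_bins
    absorbed = np.zeros(n_bins)
    for _ in range(n_photons):
        z, cos_t, w = 0.0, 1.0, 1.0   # pencil beam at normal incidence
        while w > 1e-4:               # crude weight cutoff, no roulette
            z += cos_t * rng.exponential(1.0 / mu_t)
            if z < 0.0 or z >= z_max:
                break                 # photon leaves the slab
            absorbed[int(z / dz)] += w * mu_a / mu_t
            w *= mu_s / mu_t          # surviving (scattered) weight
            cos_t = rng.uniform(-1.0, 1.0)  # isotropic rescattering
    return absorbed / (n_photons * dz)
```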

  17. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using an SROM - a low-dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
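The core SROM idea of a low-dimensional discrete surrogate for a random quantity can be illustrated with a deliberately crude quantile construction. A real SROM optimizes atoms and weights jointly against CDF and moment errors; everything below is a simplification for illustration:

```python
import numpy as np

def simple_srom(samples, m=10):
    """Crude m-point discrete approximation, in the spirit of a stochastic
    reduced order model, of a scalar random variable: atoms placed at
    equiprobable quantiles, uniform weights."""
    qs = (np.arange(m) + 0.5) / m
    atoms = np.quantile(samples, qs)
    probs = np.full(m, 1.0 / m)
    return atoms, probs

def srom_moment(atoms, probs, k=1):
    """k-th raw moment of the discrete surrogate: sum_i p_i * a_i^k."""
    return float(np.sum(probs * atoms ** k))
```

Once the random input is replaced by such a finite set of (atom, probability) pairs, any downstream stochastic computation reduces to a small number of deterministic solver calls, which is the non-intrusive property the abstract emphasizes.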

  18. Stochastic Models for Precipitable Water in Convection

    NASA Astrophysics Data System (ADS)

    Leung, Kimberly

    Atmospheric precipitable water vapor (PWV) is the amount of water vapor in the atmosphere within a vertical column of unit cross-sectional area and is a critically important parameter of precipitation processes. However, accurate high-frequency and long-term observations of PWV were impossible until the availability of modern instruments such as radar. The United States Department of Energy (DOE)'s Atmospheric Radiation Measurement (ARM) Program facility has made the first systematic, high-resolution observations of PWV at Darwin, Australia, since 2002. At a resolution of 20 seconds, this time series allowed us to examine the volatility of PWV, including fractal behavior with dimension equal to 1.9, higher than the Brownian motion dimension of 1.5. Such strong fractal behavior calls for stochastic differential equation modeling in an attempt to address some of the difficulties of convective parameterization in various kinds of climate models, ranging from general circulation models (GCM) to Weather Research and Forecasting (WRF) models. These high-resolution observations capture the fractal behavior of PWV and enable stochastic exploration into the next generation of climate models, which consider scales from micrometers to thousands of kilometers. As a first step, this thesis explores a simple stochastic differential equation model of water mass balance for PWV and assesses the accuracy, robustness, and sensitivity of the stochastic model. A 1000-day simulation allows for the determination of the best-fitting 25-day period as compared to data from the TWP-ICE field campaign conducted out of Darwin, Australia in early 2006. The observed data and this portion of the simulation had a correlation coefficient of 0.6513 and followed similar statistics and low-resolution temporal trends. 
Building on the point model foundation, a similar algorithm was applied to the National Center for Atmospheric Research (NCAR)'s existing single-column model as a test-of-concept for eventual inclusion in a general circulation model. The stochastic scheme was designed to be coupled with the deterministic single-column simulation by modifying results of the existing convective scheme (Zhang-McFarlane) and was able to produce a 20-second resolution time series that effectively simulated observed PWV, as measured by correlation coefficient (0.5510), fractal dimension (1.9), statistics, and visual examination of temporal trends.
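A minimal sketch of such a stochastic water-balance point model, integrated by Euler-Maruyama at 20-second resolution: the relaxation toward a mean (moisture supply minus precipitation sink) plus white-noise forcing. The relaxation time and noise amplitude below are invented for illustration, not fitted to the Darwin data:

```python
import numpy as np

def simulate_pwv(days=25.0, dt_seconds=20.0, mean_pwv=50.0,
                 relax_hours=6.0, sigma=0.05, seed=2):
    """Euler-Maruyama integration of a toy mean-reverting SDE for PWV (mm):
    d(PWV) = (mean_pwv - PWV)/tau dt + sigma dW,
    sampled every 20 s as in the ARM observations."""
    rng = np.random.default_rng(seed)
    n = int(days * 86400 / dt_seconds)
    tau = relax_hours * 3600.0
    pwv = np.empty(n)
    pwv[0] = mean_pwv
    sqdt = np.sqrt(dt_seconds)
    for i in range(1, n):
        drift = (mean_pwv - pwv[i - 1]) / tau
        pwv[i] = pwv[i - 1] + drift * dt_seconds + sigma * sqdt * rng.normal()
    return pwv
```

A simulated series like this can then be compared against the observed one via correlation coefficients and fractal-dimension estimates, as described above.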

  19. Lumping of degree-based mean-field and pair-approximation equations for multistate contact processes

    NASA Astrophysics Data System (ADS)

    Kyriakopoulos, Charalampos; Grossmann, Gerrit; Wolf, Verena; Bortolussi, Luca

    2018-01-01

    Contact processes form a large and highly interesting class of dynamic processes on networks, including epidemic and information-spreading networks. While devising stochastic models of such processes is relatively easy, analyzing them is very challenging from a computational point of view, particularly for large networks appearing in real applications. One strategy to reduce the complexity of their analysis is to rely on approximations, often in terms of a set of differential equations capturing the evolution of a random node, distinguishing nodes with different topological contexts (i.e., different degrees of different neighborhoods), such as degree-based mean-field (DBMF), approximate-master-equation (AME), or pair-approximation (PA) approaches. The number of differential equations so obtained is typically proportional to the maximum degree kmax of the network, which is much smaller than the size of the master equation of the underlying stochastic model, yet numerically solving these equations can still be problematic for large kmax. In this paper, we consider AME and PA, extended to cope with multiple local states, and we provide an aggregation procedure that clusters together nodes having similar degrees, treating those in the same cluster as indistinguishable, thus reducing the number of equations while preserving an accurate description of global observables of interest. We also provide an automatic way to build such equations and to identify a small number of degree clusters that give accurate results. The method is tested on several case studies, where it shows a high level of compression and a reduction of computational time of several orders of magnitude for large networks, with minimal loss in accuracy.

  20. A Stochastic Fractional Dynamics Model of Space-time Variability of Rain

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Travis, James E.

    2013-01-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate that allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second-moment statistics of the radar data. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well without any further adjustment.

  1. Bond-based linear indices of the non-stochastic and stochastic edge-adjacency matrix. 1. Theory and modeling of ChemPhys properties of organic molecules.

    PubMed

    Marrero-Ponce, Yovani; Martínez-Albelo, Eugenio R; Casañola-Martín, Gerardo M; Castillo-Garit, Juan A; Echevería-Díaz, Yunaimy; Zaldivar, Vicente Romero; Tygat, Jan; Borges, José E Rodriguez; García-Domenech, Ramón; Torrens, Francisco; Pérez-Giménez, Facundo

    2010-11-01

    Novel bond-level molecular descriptors are proposed, based on linear maps similar to the ones defined in algebra theory. The kth edge-adjacency matrix (E(k)) denotes the matrix of bond linear indices (non-stochastic) with regard to the canonical basis set. The kth stochastic edge-adjacency matrix, ES(k), is here proposed as a new molecular representation easily calculated from E(k). Then, the kth stochastic bond linear indices are calculated using ES(k) as operators of linear transformations. In both cases, the bond-type formalism is developed. The kth non-stochastic and stochastic total linear indices are calculated by adding the kth non-stochastic and stochastic bond linear indices, respectively, of all bonds in the molecule. First, the new bond-based molecular descriptors (MDs) are tested for suitability for QSPR studies by analyzing regressions of the novel indices against selected physicochemical properties of octane isomers (first round). The general performance of the new descriptors in these QSPR studies is evaluated with regard to the well-known sets of 2D/3D MDs. From the analysis, we can conclude that the non-stochastic and stochastic bond-based linear indices have an overall good modeling capability, proving their usefulness in QSPR studies. Later, the novel bond-level MDs are also used for the description and prediction of the boiling point of 28 alkyl-alcohols (second round), and for the modeling of the specific rate constant (log k), the partition coefficient (log P), and the antibacterial activity of 34 derivatives of 2-furylethylenes (third round). The comparison with other approaches (edge- and vertex-based connectivity indices, total and local spectral moments, and quantum chemical descriptors, as well as E-state/biomolecular encounter parameters) shows the good behavior of our method in these QSPR studies. 
Finally, the approach described in this study appears to be a very promising structural invariant, useful not only for QSPR studies but also for similarity/diversity analysis and drug discovery protocols.
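The stochastic edge-adjacency construction described above amounts to row-normalizing the edge-adjacency matrix E and taking its powers. A sketch for a 3-bond chain (e.g. the C-C skeleton of n-butane, with bonds as nodes and 1 where two bonds share an atom); the bond-type and total-index bookkeeping of the paper is omitted:

```python
import numpy as np

def stochastic_powers(e, k_max=3):
    """Form the row-stochastic matrix ES by dividing each row of the
    edge-adjacency matrix E by its row sum, and return
    [ES^0, ES^1, ..., ES^k_max]. Zero rows are left unnormalized."""
    row_sums = e.sum(axis=1, keepdims=True)
    es = e / np.where(row_sums == 0, 1, row_sums)
    out = [np.eye(len(e))]
    for _ in range(k_max):
        out.append(out[-1] @ es)
    return out
```

Each power remains row-stochastic, which is what makes the resulting "stochastic" linear indices scale differently from their non-stochastic counterparts.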

  2. Stochastic IMT (Insulator-Metal-Transition) Neurons: An Interplay of Thermal and Threshold Noise at Bifurcation

    PubMed Central

    Parihar, Abhinav; Jerry, Matthew; Datta, Suman; Raychowdhury, Arijit

    2018-01-01

    Artificial neural networks can harness stochasticity in multiple ways to enable a vast class of computationally powerful models. Boltzmann machines and other stochastic neural networks have been shown to outperform their deterministic counterparts by allowing dynamical systems to escape local energy minima. Electronic implementation of such stochastic networks is currently limited to the addition of algorithmic noise to digital machines, which is inherently inefficient, although recent efforts to harness physical noise in devices for stochasticity have shown promise. To succeed in fabricating electronic neuromorphic networks, we need experimental evidence of devices with measurable and controllable stochasticity, complemented by the development of reliable statistical models of such observed stochasticity. The current research literature has sparse evidence of the former and a complete lack of the latter. This motivates the current article, where we demonstrate a stochastic neuron using an insulator-metal-transition (IMT) device, based on an electrically induced phase transition, in series with a tunable resistance. We show that an IMT neuron has dynamics similar to a piecewise linear FitzHugh-Nagumo (FHN) neuron and incorporates all characteristics of a spiking neuron in the device phenomena. We experimentally demonstrate spontaneous stochastic spiking along with electrically controllable firing probabilities using Vanadium Dioxide (VO2) based IMT neurons which show a sigmoid-like transfer function. The stochastic spiking is explained by two noise sources - thermal noise and threshold fluctuations, which act as precursors of bifurcation. As such, the IMT neuron is modeled as an Ornstein-Uhlenbeck (OU) process with a fluctuating boundary, resulting in transfer curves that closely match experiments. 
The moments of interspike intervals are calculated analytically by extending first-passage-time (FPT) models for the Ornstein-Uhlenbeck (OU) process to include a fluctuating boundary. We find that the coefficient of variation of interspike intervals depends on the relative proportion of thermal and threshold noise, where threshold noise is the dominant source in the current experimental demonstrations. As one of the first comprehensive studies of stochastic neuron hardware and its statistical properties, this article should enable efficient implementation of a large class of neuro-mimetic networks and algorithms. PMID:29670508
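A toy version of the OU-with-fluctuating-boundary picture can be simulated directly: an OU "membrane" variable drifts toward its mean, a spike is emitted (and the state reset) when it crosses a threshold that itself fluctuates. All rates and noise levels below are illustrative, not fitted VO2 device parameters:

```python
import numpy as np

def imt_like_spikes(theta=1.0, mu=0.95, sigma=0.15, thr_mean=1.0,
                    thr_sigma=0.05, dt=1e-3, n_steps=150000, seed=3):
    """Euler-Maruyama simulation of an OU process with a fluctuating
    threshold ('threshold noise'); returns the interspike intervals."""
    rng = np.random.default_rng(seed)
    x, thr = 0.0, thr_mean
    t_last, isis = 0.0, []
    sqdt = np.sqrt(dt)
    for i in range(n_steps):
        # OU membrane variable driven by thermal noise
        x += theta * (mu - x) * dt + sigma * sqdt * rng.normal()
        # slowly mean-reverting, noisy firing threshold
        thr += 0.5 * (thr_mean - thr) * dt + thr_sigma * sqdt * rng.normal()
        if x >= thr:
            t = (i + 1) * dt
            isis.append(t - t_last)
            t_last, x = t, 0.0  # spike and reset
    return np.array(isis)
```

The coefficient of variation of the resulting interspike intervals is the statistic whose dependence on the thermal/threshold noise mix the paper analyzes via first-passage-time theory.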

  4. On convergence of the unscented Kalman-Bucy filter using contraction theory

    NASA Astrophysics Data System (ADS)

    Maree, J. P.; Imsland, L.; Jouffroy, J.

    2016-06-01

    Contraction theory entails a theoretical framework in which convergence of a nonlinear system can be analysed differentially in an appropriate contraction metric. This paper is concerned with utilising stochastic contraction theory to conclude on exponential convergence of the unscented Kalman-Bucy filter. The underlying process and measurement models of interest are Itô-type stochastic differential equations. In particular, statistical linearisation techniques are employed in a virtual-actual systems framework to establish deterministic contraction of the estimated expected mean of process values. Under mild conditions of bounded process noise, we extend the results on deterministic contraction to stochastic contraction of the estimated expected mean of the process state. It follows that for the regions of contraction, a result on convergence, and thereby incremental stability, is concluded for the unscented Kalman-Bucy filter. The theoretical concepts are illustrated in two case studies.

  5. From stochastic processes to numerical methods: A new scheme for solving reaction subdiffusion fractional partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angstmann, C.N.; Donnelly, I.C.; Henry, B.I., E-mail: B.Henry@unsw.edu.au

    We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction–diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.

  6. Random-order fractional bistable system and its stochastic resonance

    NASA Astrophysics Data System (ADS)

    Gao, Shilong; Zhang, Li; Liu, Hui; Kan, Bixia

    2017-01-01

    In this paper, the diffusion of Brownian particles in a viscous liquid subject to stochastic fluctuations of the external environment is modeled by a random-order fractional bistable equation, and the stochastic resonance phenomena in this system are investigated as a typical nonlinear dynamic behavior. First, the derivation of the random-order fractional bistable system is given. In particular, the random-power-law memory is discussed in depth to obtain a physical interpretation of the random-order fractional derivative. Second, the stochastic resonance evoked by the random order and an external periodic force is studied by numerical simulation. In particular, frequency-shifting phenomena of the periodic output are observed in the stochastic resonance induced by the excitation of the random order. Finally, the stochastic resonance of the system under the double stochastic excitations of the random order and internal color noise is also investigated.
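The classic setting behind such studies, an overdamped double-well with weak periodic forcing, can be simulated directly. The sketch below is integer-order with fixed parameters, for illustration only (the paper's random-order fractional derivative is not implemented here):

```python
import numpy as np

def bistable_response(d, a=0.3, omega=2 * np.pi / 100, dt=0.01,
                      t_total=2000.0, seed=4):
    """Euler-Maruyama trajectory of the forced double-well Langevin
    equation dx = (x - x^3 + a*cos(omega*t)) dt + sqrt(2 d) dW,
    the standard system in which stochastic resonance appears as the
    noise intensity d is tuned."""
    rng = np.random.default_rng(seed)
    n = int(t_total / dt)
    x = np.empty(n)
    x[0] = 1.0  # start in the right-hand well
    amp = np.sqrt(2.0 * d * dt)
    for i in range(1, n):
        t = i * dt
        x[i] = (x[i - 1]
                + (x[i - 1] - x[i - 1] ** 3 + a * np.cos(omega * t)) * dt
                + amp * rng.normal())
    return x
```

At very low noise the particle stays trapped in one well; at sufficiently strong noise, interwell hopping sets in and can synchronize with the weak forcing, which is the resonance mechanism the abstract refers to.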

  7. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
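The reformulation in terms of independent standardized Poisson processes can be made concrete with the random-time-change (Anderson-style next-reaction) simulation of a birth-death network; the sketch below simulates the dynamics only, omitting the paper's Sobol-Hoeffding variance-decomposition bookkeeping:

```python
import numpy as np

def birth_death_rtc(b=1.0, d=0.1, x0=0, t_end=200.0, seed=5):
    """Exact simulation of X -> X+1 (rate b), X -> X-1 (rate d*X) in the
    random-time-change form X(t) = X(0) + Y1(b t) - Y2(d Int_0^t X ds),
    with Y1, Y2 independent unit-rate Poisson processes. Returns the
    final state."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    tk = np.zeros(2)              # internal times of Y1 (births), Y2 (deaths)
    pk = rng.exponential(1.0, 2)  # next internal firing times of Y1, Y2
    while True:
        rates = np.array([b, d * x], dtype=float)
        # physical time until each channel's Poisson clock fires next
        dts = np.where(rates > 0.0, (pk - tk) / np.maximum(rates, 1e-300),
                       np.inf)
        k = int(np.argmin(dts))
        if t + dts[k] > t_end:
            break
        t += dts[k]
        tk += rates * dts[k]      # advance all internal clocks
        x += 1 if k == 0 else -1
        pk[k] += rng.exponential(1.0)
    return x
```

Because each channel keeps its own Poisson stream, individual realizations per reaction channel are identifiable, which is exactly the property the variance decomposition exploits.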

  8. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  10. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models

    PubMed Central

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm both in computing time and in the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348

  12. Stochastic density waves of granular flows: strong-intermittent dissipation fields with self-organization

    NASA Astrophysics Data System (ADS)

    Bershadskii, A.

    1994-10-01

    The quantitative (scaling) results of a recent lattice-gas simulation of granular flows [1] are interpreted in terms of the Kolmogorov-Obukhov approach revised for strongly space-intermittent systems. The renormalised power spectrum with exponent '-4/3' seems to be a universal spectrum of scalar fluctuations convected by stochastic velocity fields in dissipative systems with inverse energy transfer. Some other laboratory and geophysical turbulent flows with this power spectrum, as well as an analogy between this phenomenon and turbulent percolation on an elastic backbone, are pointed out.
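A standard way to check such spectral-exponent claims on a time series is a log-log periodogram fit. The sketch below restricts the fit to low frequencies, where discretization distortion near the Nyquist frequency is small; the cutoff choice is an illustrative assumption:

```python
import numpy as np

def spectral_exponent(x, dt=1.0, f_max=None):
    """Least-squares fit of log S(f) versus log f for the periodogram of
    x, restricted to f <= f_max (default: a tenth of Nyquist). Returns
    the fitted power-law exponent beta in S(f) ~ f^beta."""
    f = np.fft.rfftfreq(len(x), dt)[1:]        # drop the zero frequency
    s = np.abs(np.fft.rfft(x))[1:] ** 2        # raw periodogram
    if f_max is None:
        f_max = 0.05 / dt                      # well below Nyquist 0.5/dt
    m = f <= f_max
    return float(np.polyfit(np.log(f[m]), np.log(s[m]), 1)[0])
```

For ordinary Brownian motion the estimator recovers an exponent near -2; a '-4/3' spectrum would appear as a correspondingly shallower slope.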

  13. Existence, uniqueness, and stability of stochastic neutral functional differential equations of Sobolev-type

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xuetao; Zhu, Quanxin, E-mail: zqx22@126.com

    2015-12-15

    In this paper, we are mainly concerned with a class of stochastic neutral functional differential equations of Sobolev type with Poisson jumps. Under two different sets of conditions, we establish the existence of the mild solution by applying the Leray-Schauder alternative theory and Sadovskii's fixed point theorem, respectively. Furthermore, we use Bihari's inequality to prove Osgood-type uniqueness. Also, the mean square exponential stability is investigated by applying the Gronwall inequality. Finally, two examples are given to illustrate the theoretical results.

  14. Noise kernels of stochastic gravity in conformally-flat spacetimes

    NASA Astrophysics Data System (ADS)

    Cho, H. T.; Hu, B. L.

    2015-03-01

    The central object in the theory of semiclassical stochastic gravity is the noise kernel, which is the symmetric two-point correlation function of the stress-energy tensor. Using the corresponding Wightman functions in Minkowski, Einstein and open Einstein spaces, we construct the noise kernels of a conformally coupled scalar field in these spacetimes. From them we show that the noise kernels in conformally-flat spacetimes, including the Friedmann-Robertson-Walker universes, can be obtained in closed analytic forms by using a combination of conformal and coordinate transformations.

  15. Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Seun; Lin, Guang; Sun, Xin

    2013-01-01

    Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation to create high-resolution images over large domains. Stochastic methods have been used widely in image reconstruction from two-point correlation functions. The main challenge is to increase the efficiency of reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation, and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
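A bare-bones version of this kind of reconstruction is Yeong-Torquato-style pixel-swap annealing against the two-point correlation function; here a simple geometric cooling schedule stands in for the paper's adaptive, parallel multi-chain variant, and all schedule parameters are illustrative:

```python
import numpy as np

def s2_map(img):
    """Two-point correlation (periodic autocorrelation) via FFT."""
    f = np.fft.fft2(img)
    return np.real(np.fft.ifft2(f * np.conj(f))) / img.size

def anneal_reconstruct(target, n_iter=5000, t0=1e-3, cool=0.999, seed=6):
    """Reconstruct a binary image matching target's two-point correlation:
    start from a random image with the same volume fraction, then anneal
    pixel-pair swaps with Metropolis acceptance. Returns the image plus
    the initial and final correlation errors."""
    rng = np.random.default_rng(seed)
    flat = target.flatten().astype(float).copy()
    rng.shuffle(flat)                          # random start, same fraction
    img = flat.reshape(target.shape)
    s2_t = s2_map(target.astype(float))
    err0 = float(np.sum((s2_map(img) - s2_t) ** 2))
    err, temp = err0, t0
    n = target.shape[0]                        # assumes a square image
    for _ in range(n_iter):
        i = tuple(rng.integers(0, n, 2))
        j = tuple(rng.integers(0, n, 2))
        if img[i] == img[j]:
            continue                           # swap would change nothing
        img[i], img[j] = img[j], img[i]
        new_err = float(np.sum((s2_map(img) - s2_t) ** 2))
        if new_err <= err or rng.random() < np.exp((err - new_err) / temp):
            err = new_err                      # accept the swap
        else:
            img[i], img[j] = img[j], img[i]    # revert the swap
        temp *= cool                           # geometric cooling
    return img, err0, err
```

Swapping pixel pairs preserves the volume fraction exactly, so only the spatial arrangement is optimized, which is what makes the two-point correlation an adequate objective.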

  16. Impact of AGN and nebular emission on the estimation of stellar properties of galaxies

    NASA Astrophysics Data System (ADS)

    Cardoso, Leandro Saul Machado

    The aim of this PhD thesis is to apply tools from stochastic modeling to wind power, speed, and direction data, in order to reproduce their empirically observed statistical features. In particular, the wind energy conversion process is modeled as a Langevin process, which allows one to describe its dynamics with only two coefficients, namely the drift and the diffusion coefficients. Both coefficients can be derived directly from collected time series, and this so-called Langevin method has proved successful in several cases. However, the application to empirical data subject to measurement noise in general, and the case of wind turbines in particular, poses several challenges, and this thesis proposes methods to tackle them. To apply the Langevin method it is necessary to have data that is both stationary and Markovian, which is typically not the case. Moreover, the available time series are often short and have missing data points, which affects the estimation of the coefficients. This thesis proposes a new methodology to overcome these issues by modeling the original data with a Markov chain prior to the Langevin analysis. The latter is then performed on data synthesized from the Markov chain model of the wind data. Moreover, it is shown that the Langevin method can be applied to low-sample-rate wind data, namely 10-minute average data. The method is then extended in two different directions. First, to tackle non-stationary data sets. Wind data often exhibit daily patterns due to the solar cycle, and this thesis proposes a method to account for these daily patterns in the analysis of the time series. For that, a cyclic Markov model is developed for the data-synthesis step, and subsequently, for each time of the day, a separate Langevin analysis of the wind energy conversion system is performed. Second, to resolve the dynamical stochastic process in the case where it is spoiled by measurement noise. 
When working with measurement data a challenge can be posed by the quality of the data in itself. Often measurement devices add noise to the time-series that is different from the intrinsic noise of the underlying stochastic process and can even be time-correlated. This spoiled data, analyzed with the Langevin method leads to distorted drift and diffusion coefficients. This thesis proposes a direct, parameter-free way to extract the Langevin coefficients as well as the parameters of the measurement noise from spoiled data. Put in a more general context, the method allows to disentangle two superposed independent stochastic processes. Finally, since a characteristic of wind energy that motivates this stochastic modeling framework is the fluctuating nature of wind itself, several issues raise when it comes to reserve commitment or bidding on the liberalized energy market. This thesis proposes a measure to quantify the risk-returnratio that is associated to wind power production conditioned to a wind park state. The proposed state of the wind park takes into account data from all wind turbines constituting the park and also their correlations at different time lags. None
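    The drift/diffusion extraction at the heart of the Langevin method can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the thesis's code: it generates a synthetic Ornstein-Uhlenbeck series and recovers the drift D1(x) and diffusion D2(x) from binned conditional moments of the increments (all parameter values and names are invented for illustration).

```python
import math
import random

# Illustrative Langevin (Kramers-Moyal) analysis on a synthetic
# Ornstein-Uhlenbeck series dx = -theta*x dt + sigma dW, whose exact
# coefficients are D1(x) = -theta*x and D2(x) = sigma^2 / 2.
random.seed(1)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000
x, xs = 0.0, []
for _ in range(n):
    xs.append(x)
    x += -theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)

def kramers_moyal(xs, dt, nbins=20, min_count=500):
    """Bin the state space and estimate drift/diffusion per bin."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / nbins
    s1, s2, cnt = [0.0] * nbins, [0.0] * nbins, [0] * nbins
    for a, b in zip(xs, xs[1:]):
        i = min(int((a - lo) / width), nbins - 1)
        d = b - a
        s1[i] += d
        s2[i] += d * d
        cnt[i] += 1
    result = []
    for i in range(nbins):
        if cnt[i] >= min_count:                 # skip poorly populated bins
            centre = lo + (i + 0.5) * width
            d1 = s1[i] / cnt[i] / dt            # conditional mean increment
            d2 = s2[i] / cnt[i] / (2 * dt)      # conditional mean square
            result.append((centre, d1, d2))
    return result

for centre, d1, d2 in kramers_moyal(xs, dt):
    print(f"x={centre:+.2f}  D1={d1:+.3f} (exact {-theta * centre:+.3f})  "
          f"D2={d2:.3f} (exact {sigma ** 2 / 2:.3f})")
```

    The same two conditional moments, computed on measured wind data instead of a synthetic series, are what the thesis's stationarity, sampling-rate and measurement-noise refinements are designed to make reliable.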

  17. Stochastic driven systems far from equilibrium

    NASA Astrophysics Data System (ADS)

    Kim, Kyung Hyuk

    We study the dynamics and steady states of two systems far from equilibrium: a 1-D driven lattice gas and a driven Brownian particle with inertia. (1) We investigate the dynamical scaling behavior of a 1-D driven lattice gas model with two species of particles hopping in opposite directions. We confirm numerically that the dynamic exponent is equal to z = 1.5. We show analytically that a quasi-particle representation relates all phase points to a special phase line directly related to the single-species asymmetric simple exclusion process. Quasi-particle two-point correlations decay exponentially, and in such a manner that quasi-particles of opposite charge dynamically screen each other with a special balance. This balance holds throughout the phase space. These results indicate that the model belongs to the Kardar-Parisi-Zhang (KPZ) universality class. (2) We investigate the non-equilibrium thermodynamics of a Brownian particle with inertia under feedback control of its inertia. We find that such open systems can act as molecular refrigerators due to an entropy pumping mechanism. We extend the fluctuation theorems to the refrigerator. The entropy pumping modifies both the Jarzynski equality and the fluctuation theorems. We discover that the entropy pumping has a dual role of work and heat. We also investigate the thermodynamics of the particle under a hydrodynamic interaction described by a Langevin equation with multiplicative noise. The Stratonovich stochastic integration prescription involved in the definition of heat is shown to be the unique physical choice.

  18. Parallel, stochastic measurement of molecular surface area.

    PubMed

    Juba, Derek; Varshney, Amitabh

    2008-08-01

    Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy.
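    The core of such a stochastic surface-area estimator fits in a few lines. The sketch below is a toy serial reconstruction, not the authors' GPU code: atoms are modelled as spheres, points are sampled uniformly on each sphere, and the exposed (unburied) fraction scales that sphere's area. The coordinates and radii are invented for illustration.

```python
import math
import random

# Monte Carlo estimate of the surface area of a "molecule" modelled as a
# union of spheres (a toy stand-in for atoms; geometry is made up).
atoms = [((0.0, 0.0, 0.0), 1.7), ((1.5, 0.0, 0.0), 1.2), ((0.0, 1.4, 0.0), 1.5)]

def surface_area(atoms, samples_per_atom=20_000, rng=random.Random(42)):
    total = 0.0
    for i, (c, r) in enumerate(atoms):
        exposed = 0
        for _ in range(samples_per_atom):
            # Uniform point on the sphere via a normalized Gaussian vector.
            g = [rng.gauss(0, 1) for _ in range(3)]
            norm = math.sqrt(sum(v * v for v in g))
            p = [c[k] + r * g[k] / norm for k in range(3)]
            buried = any(
                j != i and sum((p[k] - c2[k]) ** 2 for k in range(3)) < r2 * r2
                for j, (c2, r2) in enumerate(atoms)
            )
            if not buried:
                exposed += 1
        total += 4 * math.pi * r * r * exposed / samples_per_atom
    return total

print(f"estimated surface area: {surface_area(atoms):.2f}")
```

    The estimator is naturally progressive (accuracy grows with the sample count) and embarrassingly parallel (each sample is independent), which is the property the paper exploits on multi-core hardware; the accepted sample points also lie on the molecular surface and can feed point-based rendering.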

  19. Lack of Critical Slowing Down Suggests that Financial Meltdowns Are Not Critical Transitions, yet Rising Variability Could Signal Systemic Risk.

    PubMed

    Guttal, Vishwesha; Raghavendra, Srinivas; Goel, Nikunj; Hoarau, Quentin

    2016-01-01

    Analysis inspired by complex systems theory suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies on climatic and ecological dynamical systems have shown that the approach to a tipping point is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. Therefore, it has been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US indices (Dow Jones, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time series variance and the spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions occurring in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions, which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused the abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real-world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms.
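    The two indicators contrasted in this abstract, lag-1 autocorrelation (critical slowing down) versus variance (rising variability), are both rolling-window statistics. The toy sketch below uses synthetic data with invented parameters, not the market series analyzed in the paper: the noise amplitude ramps up over time, so the rolling variance trends upward while the lag-1 autocorrelation stays roughly flat, mimicking the pattern the authors report.

```python
import random
import statistics

# Synthetic return-like series: weak fixed autocorrelation, but a noise
# amplitude that grows linearly over time (no critical slowing down).
random.seed(7)
n = 2000
series, x = [], 0.0
for t in range(n):
    sigma = 0.5 + 1.5 * t / n                   # rising perturbation strength
    x = 0.3 * x + sigma * random.gauss(0, 1)    # AR(1) with fixed memory
    series.append(x)

def rolling(series, window, stat):
    return [stat(series[i - window:i]) for i in range(window, len(series) + 1)]

def lag1_autocorr(xs):
    m = statistics.fmean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

var_track = rolling(series, 400, statistics.pvariance)
ac_track = rolling(series, 400, lag1_autocorr)
print(f"variance: start {var_track[0]:.2f} -> end {var_track[-1]:.2f}")
print(f"lag-1 AC: start {ac_track[0]:.2f} -> end {ac_track[-1]:.2f}")
```

    A genuine approach to a tipping point would instead push the lag-1 autocorrelation toward 1; here it is the variance alone that rises, which is the signature of stochastically driven transitions.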

  1. Stochastic analysis of uncertain thermal parameters for random thermal regime of frozen soil around a single freezing pipe

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei

    2018-03-01

    The artificial ground freezing method (AGF) is widely used in civil and mining engineering, and the thermal regime of frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of soil properties, which leads to randomness in the thermal regime of the frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Treating the uncertain thermal parameters of frozen soil as random variables, stochastic processes and random fields, the corresponding stochastic thermal regimes of frozen soil around a single freezing pipe are obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of frozen soil around the single freezing pipe under the three analogous methods are the same, while the standard deviations are different. The distributions of the standard deviation differ greatly at different radial coordinates, and the larger standard deviations occur mainly in the phase change area. The data computed with the random variable and stochastic process methods differ considerably from the measured data, while the data computed with the random field method agree well with it. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.

  2. Ignition probability of polymer-bonded explosives accounting for multiple sources of material stochasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu

    2014-05-07

    Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t_c) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.

  3. Conference on Stochastic Processes and their Applications (16th) Held in Stanford, California on 16-21 August 1987.

    DTIC Science & Technology

    1987-08-21

    property. … "α-CHAOS" by Ron C. Blei, University of Connecticut, Storrs, CT. ABSTRACT: Although presented from two different vantage… either an abort or a restart fashion. … ON CRITERIA OF OPTIMALITY IN ESTIMATION FOR STOCHASTIC PROCESSES by C. C. Heyde, Australian

  4. Mathematical Sciences Division 1992 Programs

    DTIC Science & Technology

    1992-10-01

    statistical theory that underlies modern signal analysis. There is a strong emphasis on stochastic processes and time series, particularly those which… include optimal resource planning and real-time scheduling of stochastic shop-floor processes. Scheduling systems will be developed that can adapt to… make forecasts for the length-of-service time series. Protocol analysis of these sessions will be used to identify relevant contextual features and to

  5. Modeling spiking behavior of neurons with time-dependent Poisson processes.

    PubMed

    Shinomoto, S; Tsubo, Y

    2001-10-01

    Three kinds of interval statistics, as represented by the coefficient of variation, the skewness coefficient, and the correlation coefficient of consecutive intervals, are evaluated for three kinds of time-dependent Poisson processes: pulse regulated, sinusoidally regulated, and doubly stochastic. Among these three processes, the sinusoidally regulated and doubly stochastic Poisson processes, in the case when the spike rate varies slowly compared with the mean interval between spikes, are found to be consistent with the three statistical coefficients exhibited by data recorded from neurons in the prefrontal cortex of monkeys.
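    A sinusoidally regulated Poisson train of the kind evaluated here is straightforward to simulate by Lewis-Shedler thinning. The sketch below uses illustrative rate parameters, not fits to the monkey recordings: it generates a sinusoidally rate-modulated train and computes the first of the three interval statistics, the coefficient of variation, which exceeds the homogeneous-Poisson value of 1 when the rate varies slowly compared with the mean inter-spike interval.

```python
import math
import random
import statistics

# Lewis-Shedler thinning: draw candidates from a homogeneous Poisson process
# at the peak rate, then accept each with probability rate(t)/peak_rate.
def sinusoidal_poisson(t_max, r0=20.0, amp=10.0, period=1.0,
                       rng=random.Random(3)):
    lam_max = r0 + amp
    spikes, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)          # next candidate event
        if t > t_max:
            return spikes
        rate = r0 + amp * math.sin(2 * math.pi * t / period)
        if rng.random() < rate / lam_max:      # thinning step
            spikes.append(t)

spikes = sinusoidal_poisson(200.0)
intervals = [b - a for a, b in zip(spikes, spikes[1:])]
cv = statistics.pstdev(intervals) / statistics.fmean(intervals)
print(f"{len(spikes)} spikes, CV of intervals = {cv:.2f}")
```

    With a modulation period (1.0) much longer than the mean interval (0.05), intervals are drawn from a mixture of local exponentials, so the CV lands above 1, the overdispersion that also characterizes the doubly stochastic case.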

  6. Power Laws in Stochastic Processes for Social Phenomena: An Introductory Review

    NASA Astrophysics Data System (ADS)

    Kumamoto, Shin-Ichiro; Kamihigashi, Takashi

    2018-03-01

    Many phenomena with power laws have been observed in various fields of the natural and social sciences, and these power laws are often interpreted as the macro behaviors of systems that consist of micro units. In this paper, we review some basic mathematical mechanisms that are known to generate power laws. In particular, we focus on stochastic processes including the Yule process and the Simon process as well as some recent models. The main purpose of this paper is to explain the mathematical details of their mechanisms in a self-contained manner.
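    The Simon process reviewed here can be stated in a few lines: at each step a brand-new item appears with probability alpha, otherwise an existing occurrence is copied uniformly at random, which is preferential attachment in disguise (copying hits an item in proportion to its current count). The sketch below is a minimal illustration with invented parameters; theory predicts a power-law frequency distribution with exponent 1 + 1/(1 - alpha).

```python
import random
from collections import Counter

def simon_process(steps, alpha=0.1, rng=random.Random(11)):
    occurrences = [0]           # one entry per occurrence; values are item ids
    next_id = 1
    for _ in range(steps):
        if rng.random() < alpha:
            occurrences.append(next_id)     # innovation: a new item
            next_id += 1
        else:
            # Copying a uniform occurrence favours already-frequent items.
            occurrences.append(rng.choice(occurrences))
    return Counter(occurrences)

counts = simon_process(50_000)
sizes = sorted(counts.values(), reverse=True)
print(f"{len(sizes)} distinct items; top-5 counts: {sizes[:5]}")
```

    The run exhibits the hallmark of the power law: a handful of items accumulate counts in the hundreds or thousands while the median item occurs only once or twice.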

  7. Codifference as a practical tool to measure interdependence

    NASA Astrophysics Data System (ADS)

    Wyłomańska, Agnieszka; Chechkin, Aleksei; Gajda, Janusz; Sokolov, Igor M.

    2015-03-01

    Correlation and spectral analysis represent the standard tools to study interdependence in statistical data. However, for stochastic processes with heavy-tailed distributions such that the variance diverges, these tools are inadequate. Heavy-tailed processes are ubiquitous in nature and finance. We here discuss the codifference as a convenient measure of statistical interdependence, and we aim to give a short introductory review of its properties. Taking different known stochastic processes as generic examples, we present explicit formulas for their codifferences. We show that for Gaussian processes the codifference is equivalent to the covariance. For processes with finite variance these two measures behave similarly with time. For processes with infinite variance the covariance does not exist, but the codifference remains relevant. We demonstrate the practical importance of the codifference by extracting this function from simulated data as well as from real data taken from the turbulent plasma of a fusion device and from a financial market. We conclude that the codifference serves as a convenient practical tool to study interdependence for stochastic processes with both finite and infinite variance.
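    An empirical codifference is easy to extract from data via characteristic functions. The sketch below is illustrative and uses the common form tau(k) = ln E[e^{i(X_{t+k} - X_t)}] - ln E[e^{i X_{t+k}}] - ln E[e^{-i X_t}]; for a zero-mean, unit-variance Gaussian process this reduces exactly to the autocovariance, which the code checks on a synthetic AR(1) series (parameters invented).

```python
import cmath
import math
import random
import statistics

def codifference(xs, k):
    """Empirical codifference at lag k from characteristic functions."""
    pairs = list(zip(xs, xs[k:]))
    e_diff = sum(cmath.exp(1j * (b - a)) for a, b in pairs) / len(pairs)
    e_plus = sum(cmath.exp(1j * b) for _, b in pairs) / len(pairs)
    e_minus = sum(cmath.exp(-1j * a) for a, _ in pairs) / len(pairs)
    return (cmath.log(e_diff) - cmath.log(e_plus) - cmath.log(e_minus)).real

def autocov(xs, k):
    m = statistics.fmean(xs)
    return statistics.fmean((a - m) * (b - m) for a, b in zip(xs, xs[k:]))

# Unit-variance Gaussian AR(1) with lag-1 correlation 0.6.
random.seed(5)
xs, x = [], 0.0
for _ in range(100_000):
    x = 0.6 * x + math.sqrt(1 - 0.6 ** 2) * random.gauss(0, 1)
    xs.append(x)

for k in (1, 2, 5):
    print(f"lag {k}: codifference {codifference(xs, k):+.3f}  "
          f"autocovariance {autocov(xs, k):+.3f}")
```

    Unlike the covariance, the characteristic-function estimator above stays well defined when the data are heavy-tailed with infinite variance, which is the regime where the codifference earns its keep.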

  8. Estimating continuous floodplain and major river bed topography mixing ordinal contour lines and topographic points

    NASA Astrophysics Data System (ADS)

    Bailly, J. S.; Dartevelle, M.; Delenne, C.; Rousseau, A.

    2017-12-01

    Floodplain and major river bed topography govern many river biophysical processes during floods. Despite the growth of direct topographic measurement from LiDAR on riverine systems, there is still room to develop methods for large (e.g. deltas) or very local (e.g. ponds) riverine systems that take advantage of information coming from simple SAR or optical image processing on the floodplain, resulting from waterbody delineation during flood rise or recession, which produces ordered contour lines. The next challenge is to exploit such data in order to estimate continuous topography on the floodplain by combining heterogeneous data: a topographic point dataset and an ordered contour-line dataset whose locations are known but whose elevations are not. This article compares two methods designed to estimate continuous floodplain topography by mixing ordinal contour lines and topographic points. For both methods, a first estimation step assigns an elevation to each contour line, and a second step estimates the continuous field from both the topographic points and the valued contour lines. The first proposed method is a stochastic method based on multi-Gaussian random fields and conditional simulation. The second is a deterministic method based on thin-plate radial spline functions, used for approximate bivariate surface construction. Results are first shown and discussed for a set of synthetic case studies with various topographic point densities and degrees of topographic smoothness. Next, results are shown and discussed for an actual case study in the Montagua laguna, located north of Valparaiso, Chile.

  10. Laws of Large Numbers and Langevin Approximations for Stochastic Neural Field Equations

    PubMed Central

    2013-01-01

    In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson–Cowan equation can be obtained as the limit, in uniform convergence on compacts in probability, for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces and are thereby able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20. PMID:23343328

  11. Nonequilibrium magnetic properties in a two-dimensional kinetic mixed Ising system within the effective-field theory and Glauber-type stochastic dynamics approach.

    PubMed

    Ertaş, Mehmet; Deviren, Bayram; Keskin, Mustafa

    2012-11-01

    Nonequilibrium magnetic properties in a two-dimensional kinetic mixed spin-2 and spin-5/2 Ising system in the presence of a time-varying (sinusoidal) magnetic field are studied within the effective-field theory (EFT) with correlations. The time evolution of the system is described by using Glauber-type stochastic dynamics. The dynamic EFT equations are derived by employing the Glauber transition rates for two interpenetrating square lattices. We investigate the time dependence of the magnetizations for different interaction parameter values in order to find the phases in the system. We also study the thermal behavior of the dynamic magnetizations, the hysteresis loop area, and dynamic correlation. The dynamic phase diagrams are presented in the reduced magnetic field amplitude and reduced temperature plane and we observe that the system exhibits dynamic tricritical and reentrant behaviors. Moreover, the system also displays a double critical end point (B), a zero-temperature critical point (Z), a critical end point (E), and a triple point (TP). We also performed a comparison with the mean-field prediction in order to point out the effects of correlations and found that some of the dynamic first-order phase lines, which are artifacts of the mean-field approach, disappeared.
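    Glauber-type stochastic dynamics in an oscillating field can be illustrated on a deliberately simplified system. The sketch below uses a plain spin-1/2 Ising square lattice with single-spin-flip Glauber rates, a stand-in for the paper's mixed spin-2/spin-5/2 EFT treatment, not a reproduction of it; lattice size, temperature and field amplitude are invented.

```python
import math
import random

def glauber_sweep(spins, L, J, h, T, rng):
    """One Monte Carlo sweep of single-spin-flip Glauber dynamics."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * (J * nn + h)           # cost of flipping
        if rng.random() < 1.0 / (1.0 + math.exp(dE / T)):   # Glauber rate
            spins[i][j] *= -1

rng = random.Random(9)
L, J, T = 16, 1.0, 1.5          # T below the 2D critical point ~2.269 J
spins = [[1] * L for _ in range(L)]
mags = []
for sweep in range(200):
    h = 0.3 * math.sin(2 * math.pi * sweep / 50)      # oscillating field
    glauber_sweep(spins, L, J, h, T, rng)
    mags.append(sum(sum(row) for row in spins) / L ** 2)
print(f"mean |m| over the run: {sum(abs(m) for m in mags) / len(mags):.2f}")
```

    In this ordered regime the dynamic magnetization barely follows the weak field, the analogue of the "ferromagnetic" dynamic phase; raising T or the field amplitude pushes the system toward the dynamically disordered phase, which is the kind of boundary the paper's dynamic phase diagrams map out.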

  12. On time-dependent diffusion coefficients arising from stochastic processes with memory

    NASA Astrophysics Data System (ADS)

    Carpio-Bernido, M. Victoria; Barredo, Wilson I.; Bernido, Christopher C.

    2017-08-01

    Time-dependent diffusion coefficients arise from anomalous diffusion encountered in many physical systems such as protein transport in cells. We compare these coefficients with those arising from analysis of stochastic processes with memory that go beyond fractional Brownian motion. Facilitated by the Hida white noise functional integral approach, diffusion propagators or probability density functions (pdf) are obtained and shown to be solutions of modified diffusion equations with time-dependent diffusion coefficients. This should be useful in the study of complex transport processes.

  13. Stochastic Analysis and Applied Probability(3.3.1): Topics in the Theory and Applications of Stochastic Analysis

    DTIC Science & Technology

    2015-08-13

    is due to Reiman [36], who considered the case where the arrivals and services are mutually independent renewal processes with square-integrable summands… to a reflected diffusion process with drift and diffusion coefficients that depend on the state of the process. In models considered in works of Reiman… the infinity Laplacian. Jour. AMS, to appear. [36] M. I. Reiman. Open queueing networks in heavy traffic. Mathematics of Operations Research, 9(3): 441

  14. Simulation of anaerobic digestion processes using stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Palani, Sundarambal

    2014-01-01

    Anaerobic Digestion (AD) processes involve numerous complex biological and chemical reactions occurring simultaneously, so appropriate and efficient models must be developed for simulating anaerobic digestion systems. Although several models have been developed, most suffer from a lack of knowledge of constants, from complexity, and from weak generalization. The basis of the deterministic approach for modelling the physico- and bio-chemical reactions occurring in the AD system is the law of mass action, which gives a simple relationship between reaction rates and species concentrations. The assumptions made in deterministic models do not hold true for reactions involving chemical species at low concentrations. The stochastic behaviour of the physicochemical processes can be modelled at the mesoscopic level by applying stochastic algorithms. In this paper a stochastic algorithm (the Gillespie tau-leap method), implemented in MATLAB, was applied to predict the concentrations of glucose, acids and methane at different time intervals; in this way the performance of the digester system can be controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of τ (the time step), the computational time required to reach the steady state is greater, since the number of chosen reactions is smaller. When the simulation time step is reduced, the results approach those of an ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes. The accuracy of the results depends on the optimal selection of the tau value.
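    The tau-leap idea can be sketched on a toy two-step chain standing in for glucose → acids → methane: over each interval of length tau, the number of firings of each reaction is drawn from a Poisson distribution with mean rate × tau. The rate constants, counts and the crude negativity guard below are illustrative choices, not the ADM1 kinetics or the authors' MATLAB implementation.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the modest rates used here."""
    if lam <= 0:
        return 0
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap(g, a, m, k1=0.02, k2=0.05, tau=0.5, t_end=200.0,
             rng=random.Random(2)):
    t, history = 0.0, [(0.0, g, a, m)]
    while t < t_end:
        n1 = poisson(k1 * g * tau, rng)   # G -> A firings in [t, t + tau)
        n2 = poisson(k2 * a * tau, rng)   # A -> M firings
        n1, n2 = min(n1, g), min(n2, a)   # crude guard against negatives
        g, a, m = g - n1, a + n1 - n2, m + n2
        t += tau
        history.append((t, g, a, m))
    return history

hist = tau_leap(g=1000, a=0, m=0)
t, g, a, m = hist[-1]
print(f"t={t:.0f}: glucose {g}, acids {a}, methane {m}")
```

    Larger tau means fewer leaps but cruder Poisson approximations of the propensities; as tau shrinks, the trajectory converges toward the exact SSA and, in the mean, toward the deterministic ODE solution, mirroring the paper's comparison.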

  15. Stochastic model for fatigue crack size and cost effective design decisions. [for aerospace structures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1975-01-01

    This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.

  16. Asymptotic behavior of distributions of mRNA and protein levels in a model of stochastic gene expression

    NASA Astrophysics Data System (ADS)

    Bobrowski, Adam; Lipniacki, Tomasz; Pichór, Katarzyna; Rudnicki, Ryszard

    2007-09-01

    The paper is devoted to a stochastic process introduced in the recent paper by Lipniacki et al. [T. Lipniacki, P. Paszek, A. Marciniak-Czochra, A.R. Brasier, M. Kimmel, Transcriptional stochasticity in gene expression, J. Theor. Biol. 238 (2006) 348-367] for modelling gene expression in eukaryotes. Starting from the full generator of the process, we show that its distributions satisfy a (Fokker-Planck-type) system of partial differential equations. Then, we construct a C0 Markov semigroup in an L1 space corresponding to this system. The main result of the paper is the asymptotic stability of the involved semigroup in the set of densities.

  17. Constraints on Fluctuations in Sparsely Characterized Biological Systems.

    PubMed

    Hilfinger, Andreas; Norman, Thomas M; Vinnicombe, Glenn; Paulsson, Johan

    2016-02-05

    Biochemical processes are inherently stochastic, creating molecular fluctuations in otherwise identical cells. Such "noise" is widespread but has proven difficult to analyze because most systems are sparsely characterized at the single cell level and because nonlinear stochastic models are analytically intractable. Here, we exactly relate average abundances, lifetimes, step sizes, and covariances for any pair of components in complex stochastic reaction systems even when the dynamics of other components are left unspecified. Using basic mathematical inequalities, we then establish bounds for whole classes of systems. These bounds highlight fundamental trade-offs that show how efficient assembly processes must invariably exhibit large fluctuations in subunit levels and how eliminating fluctuations in one cellular component requires creating heterogeneity in another.
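    The flavor of these fluctuation relations can be checked on the simplest case they cover: a one-component birth-death process (constant production, first-order degradation), whose stationary noise sits exactly at the Poisson level CV² = 1/⟨x⟩. The exact-simulation sketch below uses invented rates and illustrates only that baseline, not the paper's general multi-component bounds.

```python
import random

# Exact (Gillespie-style) simulation of birth-death: production at constant
# rate lam, degradation at rate gamma * x. Stationary distribution is
# Poisson(lam / gamma), so CV^2 should match 1 / mean.
def birth_death(lam=50.0, gamma=1.0, t_end=2000.0, rng=random.Random(8)):
    x, t = 0, 0.0
    acc_t = acc_x = acc_x2 = 0.0
    while t < t_end:
        total = lam + gamma * x
        dt = rng.expovariate(total)            # holding time in state x
        if t > 100.0:                          # discard the initial transient
            acc_t += dt
            acc_x += x * dt                    # time-weighted moments
            acc_x2 += x * x * dt
        t += dt
        x += 1 if rng.random() < lam / total else -1
    mean = acc_x / acc_t
    cv2 = (acc_x2 / acc_t - mean * mean) / mean ** 2
    return mean, cv2

mean, cv2 = birth_death()
print(f"mean {mean:.1f}, CV^2 {cv2:.4f}, 1/mean {1 / mean:.4f}")
```

    The paper's contribution is to relate such moments exactly even when most of the network around a component is left unspecified; this toy run only verifies the canonical 1/⟨x⟩ noise floor that those bounds generalize.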

  18. Stochastic phase segregation on surfaces

    PubMed Central

    Gera, Prerna

    2017-01-01

    Phase separation and coarsening are phenomena commonly seen in binary physical and chemical systems occurring in nature. Often, thermal fluctuations, modelled as stochastic noise, are present in the system, and the phase segregation process occurs on a surface. In this work, the segregation process is modelled via the Cahn–Hilliard–Cook model, which is a fourth-order parabolic stochastic system. Coarsening is analysed on two sample surfaces: a unit sphere and a dumbbell. On both surfaces, a statistical analysis of the growth rate is performed, and the influence of the noise level and mobility is also investigated. For the spherical interface, it is also shown that a lognormal distribution fits the growth rate well. PMID:28878994

  20. Stochastic sensitivity analysis of the variability of dynamics and transition to chaos in the business cycles model

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana

    2018-01-01

    A problem of mathematical modeling of complex stochastic processes in macroeconomics is discussed. For the description of the dynamics of income and capital stock, the well-known Kaldor model of business cycles is used as a basic example. The aim of the paper is to give an overview of the variety of stochastic phenomena which occur in the Kaldor model forced by additive and parametric random noise. We study the generation of small- and large-amplitude stochastic oscillations, and their mixed-mode intermittency. To analyze these phenomena, we suggest a constructive approach combining the study of the peculiarities of the deterministic phase portrait with the stochastic sensitivity of attractors. We show how parametric noise can stabilize the unstable equilibrium and transform the dynamics of the Kaldor system from order to chaos.
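    A noise-forced Kaldor-type system can be integrated with a plain Euler-Maruyama scheme. The sketch below is a generic illustration with an invented sigmoid investment function and made-up parameter values, not the calibration or the sensitivity analysis of the paper; with these choices the deterministic equilibrium is stable, so additive noise produces small-amplitude stochastic oscillations around it.

```python
import math
import random

# Kaldor-type dynamics: dy = alpha*(I(y,k) - s*y) dt + eps dW (income),
# dk = (I(y,k) - delta*k) dt (capital), with a sigmoid investment function.
def simulate(alpha=3.0, delta=0.2, s=0.25, eps=0.05, dt=0.01, steps=60_000,
             rng=random.Random(4)):
    y, k = 0.5, 0.5
    ys = []
    for _ in range(steps):
        inv = 1.0 / (1.0 + math.exp(-4.0 * (y - 0.5))) - 0.1 * k
        dy = alpha * (inv - s * y) * dt + eps * math.sqrt(dt) * rng.gauss(0, 1)
        dk = (inv - delta * k) * dt
        y, k = y + dy, k + dk
        ys.append(y)
    return ys

ys = simulate()
print(f"income range over the run: [{min(ys):.2f}, {max(ys):.2f}]")
```

    Pushing the parameters toward the Hopf bifurcation, or making the noise parametric rather than additive, is where the paper's richer phenomenology (large-amplitude cycles, mixed-mode intermittency, noise-induced order-to-chaos transitions) appears; this sketch only sets up the simulation machinery.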

  1. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information caused by ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators of the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data.
Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible, however, due to the intense computation and high memory requirements needed to solve a large linear system. Motivated by the existence of geometrically regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we study the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.

  2. 3D aquifer characterization using stochastic streamline calibration

    NASA Astrophysics Data System (ADS)

    Jang, Minchul

    2007-03-01

    In this study, a new inverse approach, stochastic streamline calibration, is proposed. Using both a streamline concept and a stochastic technique, stochastic streamline calibration optimizes an identified field to fit given observation data in an exceptionally fast and stable fashion. In stochastic streamline calibration, streamlines are adopted as the basic elements not only for describing fluid flow but also for identifying the permeability distribution. Based on the streamline-based inversion of Agarwal et al. [Agarwal B, Blunt MJ. Streamline-based method with full-physics forward simulation for history matching performance data of a North Sea field. SPE J 2003;8(2):171-80] and Wang and Kovscek [Wang Y, Kovscek AR. Streamline approach for history matching production data. SPE J 2000;5(4):353-62], permeability is modified along streamlines rather than at individual gridblocks. Permeabilities in the gridblocks through which a streamline passes are adjusted by a multiplicative factor chosen to match the flow and transport properties of that streamline. This enables the inverse process to achieve fast convergence. In addition, equipped with a stochastic module, the proposed technique calibrates the identified field in a stochastic manner while incorporating spatial information into the field. This prevents the inverse process from being stuck in local minima and helps the search for a globally optimized solution. Simulation results indicate that stochastic streamline calibration identifies an unknown permeability field exceptionally quickly. More notably, the identified permeability distribution reflects realistic geological features, which had not been achieved in the original work by Agarwal et al., limited as it was by large modifications along streamlines aimed at matching production data only. The model constructed by stochastic streamline calibration forecast plume transport similar to that of a reference model. These results suggest that the proposed approach can be applied to the construction of aquifer models and the forecasting of aquifer performance.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cameron, Maria K., E-mail: cameron@math.umd.edu

    We develop computational tools for spectral analysis of stochastic networks representing energy landscapes of atomic and molecular clusters. Physical meaning and some properties of eigenvalues, left and right eigenvectors, and eigencurrents are discussed. We propose an approach to compute a collection of eigenpairs and corresponding eigencurrents describing the most important relaxation processes taking place in the system on its way to the equilibrium. It is suitable for large and complex stochastic networks where pairwise transition rates, given by the Arrhenius law, vary by orders of magnitude. The proposed methodology is applied to the network representing the Lennard-Jones-38 cluster created by Wales's group. Its energy landscape has a double funnel structure with a deep and narrow face-centered cubic funnel and a shallower and wider icosahedral funnel. However, the complete spectrum of the generator matrix of the Lennard-Jones-38 network has no appreciable spectral gap separating the eigenvalue corresponding to the escape from the icosahedral funnel from the rest of the spectrum. We provide a detailed description of the escape process from the icosahedral funnel using the eigencurrent and demonstrate a superexponential growth of the corresponding eigenvalue. The proposed spectral approach is compared to the methodology of the Transition Path Theory. Finally, we discuss whether the Lennard-Jones-38 cluster is metastable from the points of view of a mathematician and a chemical physicist, and make a connection with experimental works.
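
    The spectral quantities discussed above (eigenvalues of the generator matrix and the equilibrium distribution) can be illustrated on a toy network. The sketch below assembles a 4-state generator with Arrhenius-type rates and recovers the stationary distribution from the zero eigenvalue; the energies and barriers are invented for illustration and have nothing to do with the Lennard-Jones-38 landscape.

```python
import numpy as np

# Toy 4-state stochastic network with Arrhenius-type rates
#   k_ij = exp(-(B_ij - E_i) / T),
# assembled into a generator matrix L whose rows sum to zero.
E = np.array([0.0, 0.5, 0.3, 1.0])          # state energies
B = np.array([[0.0, 1.2, 1.5, 2.0],
              [1.2, 0.0, 1.1, 1.8],
              [1.5, 1.1, 0.0, 1.6],
              [2.0, 1.8, 1.6, 0.0]])        # symmetric barrier heights
T = 0.5                                     # temperature

L = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        if i != j:
            L[i, j] = np.exp(-(B[i, j] - E[i]) / T)   # rate i -> j
    L[i, i] = -L[i].sum()                   # diagonal closes each row

evals, evecs = np.linalg.eig(L.T)           # left spectrum of the generator
order = np.argsort(-evals.real)             # slowest modes first
evals = evals[order].real                   # reversible network: real spectrum
pi = evecs[:, order[0]].real
pi /= pi.sum()                              # stationary (equilibrium) distribution
```

    Because the barrier matrix is symmetric, detailed balance holds and `pi` coincides with the Gibbs distribution exp(-E/T) up to normalization; the remaining eigenvalues are negative, and the slowest among them govern the dominant relaxation processes.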

  4. Non-linear dynamic characteristics and optimal control of giant magnetostrictive film subjected to in-plane stochastic excitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Z. W., E-mail: zhuzhiwen@tju.edu.cn; Tianjin Key Laboratory of Non-linear Dynamics and Chaos Control, 300072, Tianjin; Zhang, W. D., E-mail: zhangwenditju@126.com

    2014-03-15

    The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential items were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and that stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases as the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were confirmed by experiments. The results are helpful for engineering applications of GMF.

  5. A Functional Central Limit Theorem for the Becker-Döring Model

    NASA Astrophysics Data System (ADS)

    Sun, Wen

    2018-04-01

    We investigate the fluctuations of the stochastic Becker-Döring model of polymerization when the initial size of the system converges to infinity. A functional central limit problem is proved for the vector of the number of polymers of a given size. It is shown that the stochastic process associated to fluctuations is converging to the strong solution of an infinite dimensional stochastic differential equation (SDE) in a Hilbert space. We also prove that, at equilibrium, the solution of this SDE is a Gaussian process. The proofs are based on a specific representation of the evolution equations, the introduction of a convenient Hilbert space and several technical estimates to control the fluctuations, especially of the first coordinate which interacts with all components of the infinite dimensional vector representing the state of the process.

  6. Predicting the process of extinction in experimental microcosms and accounting for interspecific interactions in single-species time series

    PubMed Central

    Ferguson, Jake M; Ponciano, José M

    2014-01-01

    Predicting population extinction risk is a fundamental application of ecological theory to the practice of conservation biology. Here, we compared the prediction performance of a wide array of stochastic, population dynamics models against direct observations of the extinction process from an extensive experimental data set. By varying a series of biological and statistical assumptions in the proposed models, we were able to identify the assumptions that affected predictions about population extinction. We also show how certain autocorrelation structures can emerge due to interspecific interactions, and that accounting for the stochastic effect of these interactions can improve predictions of the extinction process. We conclude that it is possible to account for the stochastic effects of community interactions on extinction when using single-species time series. PMID:24304946
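
    As a minimal illustration of predicting the extinction process from a stochastic population model, the sketch below runs a Gillespie simulation of a slightly subcritical linear birth-death process and records extinction times; the rates and population size are illustrative assumptions, not the experimental microcosm parameters, and no interspecific interactions are modeled.

```python
import numpy as np

def extinction_times(b=0.5, d=0.55, n0=10, t_max=200.0, reps=500, seed=1):
    """Gillespie simulation of a linear birth-death process (birth rate b*n,
    death rate d*n), a minimal stand-in for the stochastic population models
    compared in the study. Returns the extinction time of each replicate,
    or inf if the population survived past t_max."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(reps):
        n, t = n0, 0.0
        while n > 0 and t < t_max:
            rate = (b + d) * n                       # total event rate
            t += rng.exponential(1.0 / rate)         # time to next event
            n += 1 if rng.random() < b / (b + d) else -1
        times.append(t if n == 0 else np.inf)
    return np.array(times)

times = extinction_times()
p_ext = np.isfinite(times).mean()   # fraction of replicates extinct by t_max
```

    Comparing the empirical distribution of `times` against direct observations of extinction is, in miniature, the prediction-performance exercise the paper carries out across a wide array of candidate models.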

  7. Entropy production in mesoscopic stochastic thermodynamics: nonequilibrium kinetic cycles driven by chemical potentials, temperatures, and mechanical forces

    NASA Astrophysics Data System (ADS)

    Qian, Hong; Kjelstrup, Signe; Kolomeisky, Anatoly B.; Bedeaux, Dick

    2016-04-01

    Nonequilibrium thermodynamics (NET) investigates processes in systems out of global equilibrium. On a mesoscopic level, it provides a statistical dynamic description of various complex phenomena such as chemical reactions, ion transport, diffusion, thermochemical, thermomechanical and mechanochemical fluxes. In the present review, we introduce a mesoscopic stochastic formulation of NET by analyzing entropy production in several simple examples. The fundamental role of nonequilibrium steady-state cycle kinetics is emphasized. The statistical mechanics of Onsager’s reciprocal relations in this context is elucidated. Chemomechanical, thermomechanical, and enzyme-catalyzed thermochemical energy transduction processes are discussed. It is argued that mesoscopic stochastic NET in phase space provides a rigorous mathematical basis of fundamental concepts needed for understanding complex processes in chemistry, physics and biology. This theory is also relevant for nanoscale technological advances.
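
    The entropy production of a nonequilibrium steady-state kinetic cycle, central to the review above, can be computed directly for a toy 3-state cycle using the standard flux-affinity expression sigma = sum over pairs of (pi_i k_ij - pi_j k_ji) ln(pi_i k_ij / (pi_j k_ji)); the rate values below are illustrative.

```python
import numpy as np

# Driven 3-state kinetic cycle: forward rates 2, backward rates 1, so
# detailed balance is broken and entropy is produced at steady state.
k = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])              # k[i, j]: rate i -> j

L = k.copy()
np.fill_diagonal(L, -k.sum(axis=1))          # generator matrix, rows sum to zero
evals, evecs = np.linalg.eig(L.T)
pi = evecs[:, np.argmin(np.abs(evals))].real
pi /= pi.sum()                               # nonequilibrium steady state

sigma = 0.0
for i in range(3):
    for j in range(i + 1, 3):
        J = pi[i] * k[i, j] - pi[j] * k[j, i]        # net flux i -> j
        sigma += J * np.log((pi[i] * k[i, j]) / (pi[j] * k[j, i]))
# for these symmetric-looking rates the steady state is uniform and sigma = ln 2
```

    At equilibrium every net flux J vanishes and sigma is zero; here the uniform cycling with forward/backward rate ratio 2 yields a strictly positive entropy production rate, ln 2 per unit time.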

  8. Bi-Objective Flexible Job-Shop Scheduling Problem Considering Energy Consumption under Stochastic Processing Times.

    PubMed

    Yang, Xin; Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan

    2016-01-01

    This paper presents a novel method for the optimization of the bi-objective Flexible Job-shop Scheduling Problem (FJSP) under stochastic processing times. The robust counterpart model and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) are used to solve the bi-objective FJSP, considering both the completion time and the total energy consumption under stochastic processing times. A case study on GM Corporation verifies that the NSGA-II used in this paper is effective and outperforms HPSO and PSO+SA in solving the proposed model. The idea and method of the paper can be generalized widely in the manufacturing industry, because the approach can reduce the energy consumption of energy-intensive manufacturing enterprises with little investment when applied to existing systems.

  9. Bi-Objective Flexible Job-Shop Scheduling Problem Considering Energy Consumption under Stochastic Processing Times

    PubMed Central

    Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan

    2016-01-01

    This paper presents a novel method for the optimization of the bi-objective Flexible Job-shop Scheduling Problem (FJSP) under stochastic processing times. The robust counterpart model and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) are used to solve the bi-objective FJSP, considering both the completion time and the total energy consumption under stochastic processing times. A case study on GM Corporation verifies that the NSGA-II used in this paper is effective and outperforms HPSO and PSO+SA in solving the proposed model. The idea and method of the paper can be generalized widely in the manufacturing industry, because the approach can reduce the energy consumption of energy-intensive manufacturing enterprises with little investment when applied to existing systems. PMID:27907163

  10. On the origins of approximations for stochastic chemical kinetics.

    PubMed

    Haseltine, Eric L; Rawlings, James B

    2005-10-22

    This paper considers the derivation of approximations for stochastic chemical kinetics governed by the discrete master equation. Here, the concepts of (1) partitioning on the basis of fast and slow reactions as opposed to fast and slow species and (2) conditional probability densities are used to derive approximate, partitioned master equations, which are Markovian in nature, from the original master equation. Under different conditions dictated by relaxation time arguments, such approximations give rise to both the equilibrium and hybrid (deterministic or Langevin equations coupled with discrete stochastic simulation) approximations previously reported. In addition, the derivation points out several weaknesses in previous justifications of both the hybrid and equilibrium systems and demonstrates the connection between the original and approximate master equations. Two simple examples illustrate situations in which these two approximate methods are applicable and demonstrate the two methods' efficiencies.

  11. Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique

    NASA Astrophysics Data System (ADS)

    Mahootchi, M.; Fattahi, M.; Khakbazan, E.

    2011-11-01

    This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as a two-stage stochastic program. As the main contribution of this study, a new algorithm is applied to efficiently generate scenarios for uncertain, correlated customer demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in Model I. A risk measure using Conditional Value-at-Risk (CVaR) is embedded into optimization Model II. Here, the structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of the distribution centers. In the second stage, the decisions are the production amounts and the volumes transported between plants and customers.
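
    A sketch of generating stratified, correlated demand scenarios in the spirit of LHS: each marginal is a Latin Hypercube Sample of uniforms, and rank correlation is induced by reordering each column to follow correlated Gaussian scores (an Iman-Conover-style trick). This is only an assumption about the flavor of the algorithm, not a reproduction of the paper's scenario generation and reduction procedure.

```python
import numpy as np

def lhs_correlated(n, corr, seed=0):
    """Latin Hypercube Sample of uniforms with correlation induced by
    rank-matching each column to correlated Gaussian scores."""
    rng = np.random.default_rng(seed)
    d = corr.shape[0]
    # one stratified uniform per equal-probability bin, per dimension;
    # column j is already sorted ascending by construction
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    # correlated Gaussian scores supply the target rank ordering
    z = rng.standard_normal((n, d)) @ np.linalg.cholesky(corr).T
    out = np.empty_like(u)
    for j in range(d):
        out[np.argsort(z[:, j]), j] = u[:, j]   # smallest u to smallest z
    return out

corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
scenarios = lhs_correlated(1000, corr)
```

    Each column still places exactly one sample in every 1/n probability bin (the LHS property), while the columns carry approximately the target rank correlation; mapping the uniforms through demand quantile functions would then yield correlated demand scenarios.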

  12. Chemical event chain model of coupled genetic oscillators.

    PubMed

    Jörg, David J; Morelli, Luis G; Jülicher, Frank

    2018-03-01

    We introduce a stochastic model of coupled genetic oscillators in which chains of chemical events involved in gene regulation and expression are represented as sequences of Poisson processes. We characterize steady states by their frequency, their quality factor, and their synchrony by the oscillator cross correlation. The steady state is determined by coupling and exhibits stochastic transitions between different modes. The interplay of stochasticity and nonlinearity leads to isolated regions in parameter space in which the coupled system works best as a biological pacemaker. Key features of the stochastic oscillations can be captured by an effective model for phase oscillators that are coupled by signals with distributed delays.

  13. Stochastic growth logistic model with aftereffect for batch fermentation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah

    2014-06-19

    In this paper, a stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262 and Luedeking-Piret equations for solvent production in a batch fermentation system are introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method for non-linear least squares. We apply the Milstein scheme to solve the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results with the experimental data for microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) for the stochastic models with aftereffect indicate good fits.
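
    The Milstein scheme mentioned above is straightforward for a logistic SDE with multiplicative noise. The sketch below integrates dX = rX(1 - X/K)dt + sigma*X dW; for diffusion b(x) = sigma*x the Milstein correction term is (1/2)*sigma^2*x*(dW^2 - dt). Parameters are illustrative, not the fitted C. acetobutylicum values, and the aftereffect (delay) term is omitted.

```python
import numpy as np

def milstein_logistic(r=0.8, K=10.0, sigma=0.1, x0=0.5, T=20.0, n=4000, seed=0):
    """Milstein scheme for the stochastic logistic SDE
        dX = r X (1 - X/K) dt + sigma X dW.
    For b(x) = sigma*x, b'(x) = sigma, so the Milstein correction is
    0.5 * sigma^2 * x * (dW^2 - dt)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()
        drift = r * x[i] * (1.0 - x[i] / K)
        x[i + 1] = (x[i] + drift * dt + sigma * x[i] * dW
                    + 0.5 * sigma**2 * x[i] * (dW**2 - dt))
    return x

path = milstein_logistic()
```

    The correction term raises the strong convergence order from 0.5 (Euler-Maruyama) to 1.0, which matters precisely when the noise is multiplicative, as in growth models of this kind.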

  14. Chemical event chain model of coupled genetic oscillators

    NASA Astrophysics Data System (ADS)

    Jörg, David J.; Morelli, Luis G.; Jülicher, Frank

    2018-03-01

    We introduce a stochastic model of coupled genetic oscillators in which chains of chemical events involved in gene regulation and expression are represented as sequences of Poisson processes. We characterize steady states by their frequency, their quality factor, and their synchrony by the oscillator cross correlation. The steady state is determined by coupling and exhibits stochastic transitions between different modes. The interplay of stochasticity and nonlinearity leads to isolated regions in parameter space in which the coupled system works best as a biological pacemaker. Key features of the stochastic oscillations can be captured by an effective model for phase oscillators that are coupled by signals with distributed delays.

  15. Stochastic growth logistic model with aftereffect for batch fermentation process

    NASA Astrophysics Data System (ADS)

    Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah; Rahman, Haliza Abdul; Salleh, Madihah Md

    2014-06-01

    In this paper, a stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262 and Luedeking-Piret equations for solvent production in a batch fermentation system are introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method for non-linear least squares. We apply the Milstein scheme to solve the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results with the experimental data for microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) for the stochastic models with aftereffect indicate good fits.

  16. Analytical pricing formulas for hybrid variance swaps with regime-switching

    NASA Astrophysics Data System (ADS)

    Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun

    2017-11-01

    The problem of pricing discretely-sampled variance swaps under stochastic volatility, stochastic interest rates and regime-switching is considered in this paper. The Heston stochastic volatility model is extended by adding the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the parameters of the model are permitted to switch according to a continuous-time, observable Markov chain. This hybrid model can be used to capture certain macroeconomic conditions, for example the changing phases of the business cycle. The outcome of our regime-switching hybrid model is presented in terms of analytical pricing formulas for variance swaps.
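
    A Monte Carlo sketch of the model class (Heston variance plus CIR short rate) can serve as a numerical check on analytical variance-swap formulas of this kind. The full-truncation Euler scheme below estimates the fair strike of a discretely-sampled variance swap as the expected annualized sum of squared log-returns; regime-switching is omitted and all parameters are invented for illustration.

```python
import numpy as np

def variance_swap_mc(kappa=2.0, theta=0.04, xi=0.3, v0=0.04,
                     a=1.2, b=0.03, eta=0.1, r0=0.03,
                     T=1.0, n_obs=52, n_paths=20_000, seed=0):
    """Fair strike of a discretely-sampled variance swap under a Heston
    variance process and a CIR short rate, via full-truncation Euler.
    Independent Brownian drivers are assumed for simplicity."""
    rng = np.random.default_rng(seed)
    dt = T / n_obs
    v = np.full(n_paths, v0)                 # instantaneous variance paths
    r = np.full(n_paths, r0)                 # short-rate paths
    rv = np.zeros(n_paths)                   # accumulated squared log-returns
    for _ in range(n_obs):
        z1, z2, z3 = rng.standard_normal((3, n_paths))
        vp = np.maximum(v, 0.0)              # full truncation keeps sqrt real
        rp = np.maximum(r, 0.0)
        dlogS = (rp - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1
        rv += dlogS**2
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
        r += a * (b - rp) * dt + eta * np.sqrt(rp * dt) * z3
    return (rv / T).mean()                   # annualized fair strike

K_var = variance_swap_mc()
```

    Starting the variance at its long-run level theta makes the fair strike land close to theta itself, which gives a quick sanity check before comparing against a closed-form expression.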

  17. Slow-fast stochastic diffusion dynamics and quasi-stationarity for diploid populations with varying size.

    PubMed

    Coron, Camille

    2016-01-01

    We are interested in the long-time behavior of a diploid population with sexual reproduction and randomly varying population size, characterized by its genotype composition at one bi-allelic locus. The population is modeled by a 3-dimensional birth-and-death process with competition, weak cooperation and Mendelian reproduction. This stochastic process is indexed by a scaling parameter K that goes to infinity, following a large-population assumption. When the individual birth and natural death rates are of order K, the sequence of stochastic processes indexed by K converges toward a new slow-fast dynamics with variable population size. We indeed prove the convergence toward 0 of a fast variable giving the deviation of the population from quasi-Hardy-Weinberg equilibrium, while the sequence of slow variables giving the respective numbers of occurrences of each allele converges toward a 2-dimensional diffusion process that reaches (0,0) almost surely in finite time. The population size and the proportion of a given allele converge toward a Wright-Fisher diffusion with stochastically varying population size and diploid selection. We highlight differences between haploid and diploid populations due to the stochastic variability of the population size. Using a non-trivial change of variables, we study the absorption of this diffusion and its long-time behavior conditioned on non-extinction. In particular we prove that this diffusion, starting from any non-trivial state and conditioned on not hitting (0,0), admits a unique quasi-stationary distribution. We give numerical approximations of this quasi-stationary behavior in three biologically relevant cases: neutrality, overdominance, and separate niches.

  18. Introduction to Focus Issue: nonlinear and stochastic physics in biology.

    PubMed

    Bahar, Sonya; Neiman, Alexander B; Jung, Peter; Kurths, Jürgen; Schimansky-Geier, Lutz; Showalter, Kenneth

    2011-12-01

    Frank Moss was a leading figure in the study of nonlinear and stochastic processes in biological systems. His work, particularly in the area of stochastic resonance, has been highly influential to the interdisciplinary scientific community. This Focus Issue pays tribute to Moss with articles that describe the most recent advances in the field he helped to create. In this Introduction, we review Moss's seminal scientific contributions and introduce the articles that make up this Focus Issue.

  19. Oscillatory regulation of Hes1: Discrete stochastic delay modelling and simulation.

    PubMed

    Barrio, Manuel; Burrage, Kevin; Leier, André; Tian, Tianhai

    2006-09-08

    Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking the delays associated with transcription and translation. We then show that this process may explain the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein more faithfully than continuous deterministic models.
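
    A minimal sketch of a delayed stochastic simulation algorithm: synthesis events are drawn as in the standard SSA, but their products are scheduled to appear only after a fixed delay tau, held in a priority queue. The single-variable self-repression model and all parameters below are illustrative stand-ins, not the two-variable hes1/Hes1 model of the paper.

```python
import heapq
import numpy as np

def delayed_ssa(k_syn=40.0, k_deg=0.5, n_half=20.0, h=4, tau=5.0,
                t_max=200.0, seed=3):
    """Delayed SSA for a self-repressing species: synthesis fires at the
    Hill-repressed rate k_syn / (1 + (n/n_half)^h), but the new molecule
    appears only tau time units later; degradation (rate k_deg*n) is
    instantaneous. Redrawing the waiting time after each delayed completion
    is exact by the memorylessness of exponential waiting times."""
    rng = np.random.default_rng(seed)
    n, t = 0, 0.0
    pending = []                          # completion times of delayed syntheses
    ts, ns = [0.0], [0]
    while t < t_max:
        a_syn = k_syn / (1.0 + (n / n_half) ** h)
        a_deg = k_deg * n
        a0 = a_syn + a_deg
        t_next = t + rng.exponential(1.0 / a0)
        if pending and pending[0] <= t_next:
            t = heapq.heappop(pending)    # a delayed product finishes first
            n += 1
        else:
            t = t_next
            if rng.random() < a_syn / a0:
                heapq.heappush(pending, t + tau)   # schedule completion
            else:
                n -= 1
        ts.append(t)
        ns.append(n)
    return np.array(ts), np.array(ns)

ts, ns = delayed_ssa()
```

    With a steep Hill coefficient and a delay comparable to the degradation timescale, trajectories of `ns` show the sustained noisy oscillations that motivate delayed stochastic models of negative feedback.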

  20. CERENA: ChEmical REaction Network Analyzer--A Toolbox for the Simulation and Analysis of Stochastic Chemical Kinetics.

    PubMed

    Kazeroonian, Atefeh; Fröhlich, Fabian; Raue, Andreas; Theis, Fabian J; Hasenauer, Jan

    2016-01-01

    Gene expression, signal transduction and many other cellular processes are subject to stochastic fluctuations. The analysis of these stochastic chemical kinetics is important for understanding cell-to-cell variability and its functional implications, but it is also challenging. A multitude of exact and approximate descriptions of stochastic chemical kinetics have been developed; however, tools to automatically generate these descriptions and compare their accuracy and computational efficiency are missing. In this manuscript we introduce CERENA, a toolbox for the analysis of stochastic chemical kinetics using approximations of the chemical master equation solution statistics. CERENA implements stochastic simulation algorithms and the finite state projection for microscopic descriptions of processes, the system size expansion and moment equations for meso- and macroscopic descriptions, as well as the novel conditional moment equations for a hybrid description. This unique collection of descriptions in a single toolbox facilitates the selection of appropriate modeling approaches. Unlike other software packages, the implementation of CERENA is completely general and allows, e.g., for time-dependent propensities and non-mass-action kinetics. By providing SBML import, symbolic model generation and simulation using MEX-files, CERENA is user-friendly and computationally efficient. The availability of forward and adjoint sensitivity analyses allows for further studies such as parameter estimation and uncertainty analysis. The MATLAB code implementing CERENA is freely available from http://cerenadevelopers.github.io/CERENA/.
