Sample records for large scale stochastic

  1. Stochastic partial differential fluid equations as a diffusive limit of deterministic Lagrangian multi-time dynamics.

    PubMed

    Cotter, C J; Gottwald, G A; Holm, D D

    2017-09-01

    In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small when pulled back to the mean flow.
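
    The paper's derivation is analytical, but the kind of slow stochastic Lagrangian particle dynamics it produces can be illustrated numerically. Below is a minimal, hypothetical sketch (not the paper's actual equations): an Euler-Maruyama integration of a one-dimensional particle obeying dx = u(x) dt + σ dW, where u is a deterministic drift and σ dW models the homogenized fast fluctuations.

    ```python
    import numpy as np

    def euler_maruyama(u, sigma, x0, dt, n_steps, rng):
        """Integrate the SDE dx = u(x) dt + sigma dW with the
        Euler-Maruyama scheme, returning the full sample path."""
        x = np.empty(n_steps + 1)
        x[0] = x0
        for k in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
            x[k + 1] = x[k] + u(x[k]) * dt + sigma * dW
        return x

    # toy drift u(x) = -x (relaxation toward the mean flow)
    rng = np.random.default_rng(0)
    path = euler_maruyama(lambda x: -x, 0.5, 1.0, 1e-3, 1000, rng)
    ```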

  2. Stochastic partial differential fluid equations as a diffusive limit of deterministic Lagrangian multi-time dynamics

    PubMed Central

    Cotter, C. J.

    2017-01-01

    In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small when pulled back to the mean flow. PMID:28989316

  3. Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications

    DTIC Science & Technology

    2016-10-17

    finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems and...approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large scale stochastic systems of...laws, finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks

  4. A stochastic thermostat algorithm for coarse-grained thermomechanical modeling of large-scale soft matters: Theory and application to microfilaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tong; Gu, YuanTong, E-mail: yuantong.gu@qut.edu.au

    As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the length scale of soft matters in the modeling of mechanical behaviors. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics method would underestimate the thermodynamic behaviors of soft matters (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of thermodynamic properties of microfilaments at extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matters.
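
    The authors' thermostat is tailored to coarse-grained actin models; as a generic point of reference, a standard Langevin thermostat (an assumed textbook form, not the authors' algorithm) balances friction against Gaussian noise so that velocities equilibrate to the target temperature kT:

    ```python
    import numpy as np

    def langevin_step(v, force, dt, gamma=1.0, kT=1.0, m=1.0, rng=None):
        """One Langevin-thermostat step: deterministic force, friction
        -gamma*v, and thermal noise with magnitude sqrt(2*gamma*kT*dt/m),
        chosen so the stationary velocity variance is kT/m."""
        noise = rng.normal(0.0, 1.0, size=v.shape)
        return v + (force / m - gamma * v) * dt + np.sqrt(2 * gamma * kT * dt / m) * noise

    # free particles relax to a Maxwell distribution with variance kT/m = 1
    rng = np.random.default_rng(1)
    v = np.zeros(5000)
    for _ in range(2000):
        v = langevin_step(v, np.zeros_like(v), dt=0.01, rng=rng)
    ```

    The fluctuation-dissipation balance between the friction and noise coefficients is what a coarse-grained thermostat must preserve when degrees of freedom are removed.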

  5. The scaling of population persistence with carrying capacity does not asymptote in populations of a fish experiencing extreme climate variability.

    PubMed

    White, Richard S A; Wintle, Brendan A; McHugh, Peter A; Booker, Douglas J; McIntosh, Angus R

    2017-06-14

    Despite growing concerns regarding increasing frequency of extreme climate events and declining population sizes, the influence of environmental stochasticity on the relationship between population carrying capacity and time-to-extinction has received little empirical attention. While time-to-extinction increases exponentially with carrying capacity in constant environments, theoretical models suggest increasing environmental stochasticity causes asymptotic scaling, thus making minimum viable carrying capacity vastly uncertain in variable environments. Using empirical estimates of environmental stochasticity in fish metapopulations, we showed that increasing environmental stochasticity resulting from extreme droughts was insufficient to create asymptotic scaling of time-to-extinction with carrying capacity in local populations as predicted by theory. Local time-to-extinction increased with carrying capacity due to declining sensitivity to demographic stochasticity, and the slope of this relationship declined significantly as environmental stochasticity increased. However, recent 1-in-25-yr extreme droughts were insufficient to extirpate populations with large carrying capacity. Consequently, large populations may be more resilient to environmental stochasticity than previously thought. The lack of carrying capacity-related asymptotes in persistence under extreme climate variability reveals how small populations, affected by habitat loss or overharvesting, may be disproportionately threatened by increases in extreme climate events with global warming. © 2017 The Author(s).
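
    The carrying-capacity/persistence relationship the study tests can be explored with a toy simulation. The model below is a hypothetical illustration (not the authors' fitted metapopulation model): a Ricker map with demographic (Poisson) and environmental (lognormal) stochasticity, recording the first time the population hits zero.

    ```python
    import numpy as np

    def time_to_extinction(K, sigma_env, rng, r=1.0, max_t=1000):
        """Ricker-type population dynamics with Poisson demographic noise
        and lognormal environmental noise; returns the first passage time
        to extinction, or max_t if the population persists that long."""
        n = K
        for t in range(1, max_t + 1):
            env = np.exp(rng.normal(-0.5 * sigma_env**2, sigma_env))  # mean-1 multiplier
            n = rng.poisson(n * np.exp(r * (1.0 - n / K)) * env)
            if n == 0:
                return t
        return max_t

    rng = np.random.default_rng(2)
    small = np.mean([time_to_extinction(2, 0.5, rng) for _ in range(50)])
    large = np.mean([time_to_extinction(100, 0.5, rng) for _ in range(50)])
    # persistence time increases with carrying capacity K
    ```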

  6. On the statistical mechanics of the 2D stochastic Euler equation

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2011-12-01

    The dynamics of vortices and large scale structures is qualitatively very different in two-dimensional flows compared to their three-dimensional counterparts, due to the presence of multiple integrals of motion. These are believed to be responsible for a variety of phenomena observed in Euler flow such as the formation of large scale coherent structures, the existence of meta-stable states and random abrupt changes in the topology of the flow. In this paper we study stochastic dynamics of the finite dimensional approximation of the 2D Euler flow based on the Lie algebra su(N), which preserves all integrals of motion. In particular, we exploit the rich algebraic structure responsible for the existence of Euler's conservation laws to calculate the invariant measures and explore their properties and also study the approach to equilibrium. Unexpectedly, we find deep connections between equilibrium measures of finite dimensional su(N) truncations of the stochastic Euler equations and random matrix models. Our work can be regarded as a preparation for addressing the questions of large scale structures, meta-stability and the dynamics of random transitions between different flow topologies in stochastic 2D Euler flows.

  7. Reduced linear noise approximation for biochemical reaction networks with time-scale separation: The stochastic tQSSA+

    NASA Astrophysics Data System (ADS)

    Herath, Narmada; Del Vecchio, Domitilla

    2018-03-01

    Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.
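
    The paper's reduced-order LNA concerns multi-timescale networks; for orientation, here is the LNA for the simplest possible case, a one-species birth-death process with constant birth rate b and per-capita death rate d, where the macroscopic mean and the LNA variance obey closed ODEs (a standard textbook LNA, not the paper's reduced "tQSSA+" model):

    ```python
    import numpy as np

    def lna_birth_death(b, d, phi0, V0, dt, n_steps):
        """Forward-Euler integration of the LNA equations for a
        birth-death process: d(phi)/dt = b - d*phi for the mean and
        dV/dt = -2*d*V + b + d*phi for the fluctuation variance."""
        phi, V = phi0, V0
        for _ in range(n_steps):
            phi_new = phi + (b - d * phi) * dt
            V = V + (-2.0 * d * V + b + d * phi) * dt
            phi = phi_new
        return phi, V

    # stationary mean and variance both approach b/d (Poisson statistics)
    phi, V = lna_birth_death(b=10.0, d=1.0, phi0=0.0, V0=0.0, dt=1e-3, n_steps=20_000)
    ```

    Checking that the stationary variance equals the mean (Poisson behavior) is a common sanity test before trusting an LNA reduction on a larger network.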

  8. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  9. Fractional Stochastic Field Theory

    NASA Astrophysics Data System (ADS)

    Honkonen, Juha

    2018-02-01

    Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.

  10. A stochastic method for computing hadronic matrix elements

    DOE PAGES

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal to noise ratio suggesting that the stochastic method can be extended to large volumes providing an efficient approach to compute hadronic matrix elements and form factors.
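
    The lattice-QCD implementation is specific to baryon 3-point functions, but the underlying noise-vector idea can be shown generically. A minimal sketch (an illustrative analogue, not the authors' method): the Hutchinson stochastic estimator, which recovers a trace from averages of quadratic forms over random ±1 vectors.

    ```python
    import numpy as np

    def stochastic_trace(A, n_noise, rng):
        """Hutchinson-style estimator: Tr(A) as the average of x^T A x
        over random +-1 noise vectors x. The same noise-vector trick
        underlies stochastic evaluation of lattice correlators."""
        n = A.shape[0]
        samples = []
        for _ in range(n_noise):
            x = rng.choice([-1.0, 1.0], size=n)
            samples.append(x @ A @ x)
        return float(np.mean(samples))

    A = np.diag(np.arange(1.0, 11.0))  # Tr(A) = 55; diagonal case is noise-free
    est = stochastic_trace(A, 100, np.random.default_rng(3))
    ```

    For non-diagonal matrices the estimator carries variance from the off-diagonal terms, which is why such methods are judged by their signal-to-noise scaling with volume, as in the abstract above.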

  11. Computational singular perturbation analysis of stochastic chemical systems with stiffness

    NASA Astrophysics Data System (ADS)

    Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, Habib N.

    2017-04-01

    Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum and deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at micro or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and associated algorithm, that can be used to not only construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.

  12. A HIERARCHICAL STOCHASTIC MODEL OF LARGE SCALE ATMOSPHERIC CIRCULATION PATTERNS AND MULTIPLE STATION DAILY PRECIPITATION

    EPA Science Inventory

    A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for classification of daily weather states: k-means, fuzzy clustering, principal components, and principal components coupled with ...
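
    Of the classification algorithms listed, k-means is the simplest; a minimal NumPy sketch (illustrative, not the EPA model's implementation) of assigning days, represented as circulation-pattern feature vectors, to k weather states:

    ```python
    import numpy as np

    def kmeans(X, k, n_iter=50, seed=0):
        """Plain k-means: assign each day's feature vector to the nearest
        of k weather-state centroids, then move each centroid to the
        mean of its assigned days; repeat for n_iter rounds."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        for _ in range(n_iter):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):  # guard against empty clusters
                    centers[j] = X[labels == j].mean(axis=0)
        return labels, centers

    # two synthetic, well-separated "weather states"
    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(8, 1, (50, 3))])
    labels, centers = kmeans(X, k=2)
    ```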

  13. Multi-period natural gas market modeling Applications, stochastic extensions and solution approaches

    NASA Astrophysics Data System (ADS)

    Egging, Rudolf Gerardus

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum (www.gecforum.org), a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050, with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in shorter solution times relative to solving the extensive form. Larger problems, up to 117,481 variables, were solved in extensive form, but not when applying BD, due to numerical issues. It is discussed how BD could significantly reduce the solution time of large-scale stochastic models, but various challenges remain and more research is needed to assess the potential of Benders decomposition for solving large-scale stochastic MCP.

  14. Stochastic Reconnection for Large Magnetic Prandtl Numbers

    NASA Astrophysics Data System (ADS)

    Jafari, Amir; Vishniac, Ethan T.; Kowal, Grzegorz; Lazarian, Alex

    2018-06-01

    We consider stochastic magnetic reconnection in high-β plasmas with large magnetic Prandtl numbers, Pr_m > 1. For large Pr_m, field line stochasticity is suppressed at very small scales, impeding diffusion. In addition, viscosity suppresses very small-scale differential motions and therefore also the local reconnection. Here we consider the effect of high magnetic Prandtl numbers on the global reconnection rate in a turbulent medium and provide a diffusion equation for the magnetic field lines considering both resistive and viscous dissipation. We find that the width of the outflow region is unaffected unless Pr_m is exponentially larger than the Reynolds number Re. The ejection velocity of matter from the reconnection region is also unaffected by viscosity unless Re ∼ 1. By these criteria the reconnection rate in typical astrophysical systems is almost independent of viscosity. This remains true for reconnection in quiet environments where current sheet instabilities drive reconnection. However, if Pr_m > 1, viscosity can suppress small-scale reconnection events near and below the Kolmogorov or viscous damping scale. This will produce a threshold for the suppression of large-scale reconnection by viscosity when Pr_m > √Re. In any case, for Pr_m > 1 this leads to a flattening of the magnetic fluctuation power spectrum, so that its spectral index is ∼ -4/3 for length scales between the viscous dissipation scale and eddies larger by roughly Pr_m^(3/2). Current numerical simulations are insensitive to this effect. We suggest that the dependence of reconnection on viscosity in these simulations may be due to insufficient resolution for the turbulent inertial range rather than a guide to the large Re limit.

  15. Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5

    DOE PAGES

    Wang, Yong; Zhang, Guang J.

    2016-09-29

    In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m^2 in the standard CAM5 to -48.86 W/m^2, close to -47.16 W/m^2 in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.

  16. Global climate impacts of stochastic deep convection parameterization in the NCAR CAM5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yong; Zhang, Guang J.

    In this paper, the stochastic deep convection parameterization of Plant and Craig (PC) is implemented in the Community Atmospheric Model version 5 (CAM5) to incorporate the stochastic processes of convection into the Zhang-McFarlane (ZM) deterministic deep convective scheme. Its impacts on deep convection, shallow convection, large-scale precipitation and associated dynamic and thermodynamic fields are investigated. Results show that with the introduction of the PC stochastic parameterization, deep convection is decreased while shallow convection is enhanced. The decrease in deep convection is mainly caused by the stochastic process and the spatial averaging of input quantities for the PC scheme. More detrained liquid water associated with more shallow convection leads to significant increase in liquid water and ice water paths, which increases large-scale precipitation in tropical regions. Specific humidity, relative humidity, zonal wind in the tropics, and precipitable water are all improved. The simulation of shortwave cloud forcing (SWCF) is also improved. The PC stochastic parameterization decreases the global mean SWCF from -52.25 W/m^2 in the standard CAM5 to -48.86 W/m^2, close to -47.16 W/m^2 in observations. The improvement in SWCF over the tropics is due to decreased low cloud fraction simulated by the stochastic scheme. Sensitivity tests of tuning parameters are also performed to investigate the sensitivity of simulated climatology to uncertain parameters in the stochastic deep convection scheme.

  17. Stochastic Dynamic Mixed-Integer Programming (SD-MIP)

    DTIC Science & Technology

    2015-05-05

    stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive...developed to date. b) As part of this project, we have also developed tools for very large scale Stochastic Linear Programming (SLP). There are...several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g

  18. Stochasticity and determinism in models of hematopoiesis.

    PubMed

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.
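
    For the small-cell-number regime that the chapter assigns to stochastic models, the standard simulation tool is Gillespie's algorithm. A stdlib-only sketch for a linear birth-death cell population (an illustrative example, not a model from the chapter):

    ```python
    import random

    def gillespie_birth_death(b, d, n0, t_max, seed=0):
        """Gillespie simulation of a cell population with per-cell birth
        rate b and death rate d: draw an exponential waiting time from
        the total rate, then pick birth or death proportionally."""
        rng = random.Random(seed)
        t, n = 0.0, n0
        traj = [(0.0, n0)]
        while t < t_max and n > 0:
            rate = (b + d) * n                      # total event rate
            t += rng.expovariate(rate)              # time to next event
            n += 1 if rng.random() < b / (b + d) else -1
            traj.append((t, n))
        return traj

    # subcritical population (d > b): extinction is certain eventually
    traj = gillespie_birth_death(b=1.0, d=1.1, n0=5, t_max=50.0)
    ```

    Comparing many such trajectories against the deterministic ODE dn/dt = (b - d)n makes the chapter's point concrete: chance dominates at small n, while the ODE is adequate at large n.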

  19. STOCHASTIC OPTICS: A SCATTERING MITIGATION FRAMEWORK FOR RADIO INTERFEROMETRIC IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Michael D., E-mail: mjohnson@cfa.harvard.edu

    2016-12-10

    Just as turbulence in the Earth’s atmosphere can severely limit the angular resolution of optical telescopes, turbulence in the ionized interstellar medium fundamentally limits the resolution of radio telescopes. We present a scattering mitigation framework for radio imaging with very long baseline interferometry (VLBI) that partially overcomes this limitation. Our framework, “stochastic optics,” derives from a simplification of strong interstellar scattering to separate small-scale (“diffractive”) effects from large-scale (“refractive”) effects, thereby separating deterministic and random contributions to the scattering. Stochastic optics extends traditional synthesis imaging by simultaneously reconstructing an unscattered image and its refractive perturbations. Its advantages over direct imaging come from utilizing the many deterministic properties of the scattering—such as the time-averaged “blurring,” polarization independence, and the deterministic evolution in frequency and time—while still accounting for the stochastic image distortions on large scales. These distortions are identified in the image reconstructions through regularization by their time-averaged power spectrum. Using synthetic data, we show that this framework effectively removes the blurring from diffractive scattering while reducing the spurious image features from refractive scattering. Stochastic optics can provide significant improvements over existing scattering mitigation strategies and is especially promising for imaging the Galactic Center supermassive black hole, Sagittarius A*, with the Global mm-VLBI Array and with the Event Horizon Telescope.

  20. Computational singular perturbation analysis of stochastic chemical systems with stiffness

    DOE PAGES

    Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; ...

    2017-01-25

    Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum and deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at micro or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and associated algorithm, that can be used to not only construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also analyze the dynamics of the reduced stochastic reaction systems. Furthermore, the algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.

  1. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME.

    PubMed

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2016-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.

  2. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE PAGES

    Dupuis, Paul; Johnson, Dane

    2017-11-17

    Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.
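
    The moderate-deviations subsolution construction in the paper generalizes the classical exponential change of measure. As background, a minimal sketch of that classical scheme (an illustrative textbook example, not the paper's method): estimating the rare-event probability P(mean of n standard normals > a) by sampling from the tilted law N(a, 1) and reweighting with the likelihood ratio exp(-a*S + n*a^2/2).

    ```python
    import math
    import random

    def is_estimate(n=100, a=0.3, n_samples=5000, seed=0):
        """Importance-sampling estimate of P(S_n / n > a) for i.i.d.
        N(0,1) increments, using the exponentially tilted sampling
        density N(a, 1); theta = a is the optimal Gaussian tilt."""
        rng = random.Random(seed)
        theta = a
        acc = 0.0
        for _ in range(n_samples):
            s = sum(rng.gauss(theta, 1.0) for _ in range(n))
            if s / n > a:
                acc += math.exp(-theta * s + n * theta * theta / 2)
        return acc / n_samples

    est = is_estimate()  # true value is P(Z > 3), about 1.35e-3
    ```

    Naive Monte Carlo would need millions of samples to resolve a probability this small; the tilted estimator resolves it with a few thousand, which is the efficiency gain both large- and moderate-deviations schemes formalize.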

  3. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Paul; Johnson, Dane

    Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.

  4. Decentralized adaptive neural control for high-order interconnected stochastic nonlinear time-delay systems with unknown system dynamics.

    PubMed

    Si, Wenjie; Dong, Xunde; Yang, Feifei

    2018-03-01

    This paper is concerned with the problem of decentralized adaptive backstepping state-feedback control for uncertain high-order large-scale stochastic nonlinear time-delay systems. For the control design of high-order large-scale nonlinear systems, only one adaptive parameter is constructed to overcome the over-parameterization, and neural networks are employed to cope with the difficulties raised by completely unknown system dynamics and stochastic disturbances. The appropriate Lyapunov-Krasovskii functional and the property of hyperbolic tangent functions are then used to deal with the unknown unmatched time-delay interactions of high-order large-scale systems for the first time. Finally, on the basis of Lyapunov stability theory, a decentralized adaptive neural controller is developed that decreases the number of learning parameters. The actual controller can be designed so as to ensure that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB) and that the tracking error converges to a small neighborhood of zero. A simulation example is used to further show the validity of the design method. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Adaptive Fuzzy Output-Constrained Fault-Tolerant Control of Nonlinear Stochastic Large-Scale Systems With Actuator Faults.

    PubMed

    Li, Yongming; Ma, Zhiyao; Tong, Shaocheng

    2017-09-01

    The problem of adaptive fuzzy output-constrained tracking fault-tolerant control (FTC) is investigated for large-scale stochastic nonlinear systems of pure-feedback form. The nonlinear systems considered in this paper possess unstructured uncertainties, unknown interconnected terms and unknown nonaffine nonlinear faults. Fuzzy logic systems are employed to identify the unknown lumped nonlinear functions so that the problem of unstructured uncertainties can be solved. An adaptive fuzzy state observer is designed to solve the nonmeasurable state problem. By combining barrier Lyapunov function theory with adaptive decentralized and stochastic control principles, a novel fuzzy adaptive output-constrained FTC approach is constructed. All the signals in the closed-loop system are proved to be bounded in probability and the system outputs are constrained in a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.

  6. Nonlocal and collective relaxation in stellar systems

    NASA Technical Reports Server (NTRS)

    Weinberg, Martin D.

    1993-01-01

    The modal response of stellar systems to fluctuations at large scales is presently investigated by means of analytic theory and n-body simulation; the stochastic excitation of these modes is shown to increase the relaxation rate even for a system which is moderately far from instability. The n-body simulations, when designed to suppress relaxation at small scales, clearly show the effects of large-scale fluctuations. It is predicted that large-scale fluctuations will be largest for such marginally bound systems as forming star clusters and associations.

  7. Particle Acceleration in Mildly Relativistic Shearing Flows: The Interplay of Systematic and Stochastic Effects, and the Origin of the Extended High-energy Emission in AGN Jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ruo-Yu; Rieger, F. M.; Aharonian, F. A., E-mail: ruoyu@mpi-hd.mpg.de, E-mail: frank.rieger@mpi-hd.mpg.de, E-mail: aharon@mpi-hd.mpg.de

    The origin of the extended X-ray emission in the large-scale jets of active galactic nuclei (AGNs) poses challenges to conventional models of acceleration and emission. Although electron synchrotron radiation is considered the most feasible radiation mechanism, the formation of the continuous large-scale X-ray structure remains an open issue. As astrophysical jets are expected to exhibit some turbulence and shearing motion, we here investigate the potential of shearing flows to facilitate an extended acceleration of particles and evaluate its impact on the resultant particle distribution. Our treatment incorporates systematic shear and stochastic second-order Fermi effects. We show that for typical parameters applicable to large-scale AGN jets, stochastic second-order Fermi acceleration, which always accompanies shear particle acceleration, can play an important role in facilitating the whole process of particle energization. We study the time-dependent evolution of the resultant particle distribution in the presence of second-order Fermi acceleration, shear acceleration, and synchrotron losses using a simple Fokker–Planck approach and provide illustrations for the possible emergence of a complex (multicomponent) particle energy distribution with different spectral branches. We present examples for typical parameters applicable to large-scale AGN jets, indicating the relevance of the underlying processes for understanding the extended X-ray emission and the origin of ultrahigh-energy cosmic rays.

  8. Statistical Compression of Wind Speed Data

    NASA Astrophysics Data System (ADS)

    Tagle, F.; Castruccio, S.; Crippa, P.; Genton, M.

    2017-12-01

    In this work we introduce a lossy compression approach that utilizes a stochastic wind generator based on a non-Gaussian distribution to reproduce the internal climate variability of daily wind speed as represented by the CESM Large Ensemble over Saudi Arabia. Stochastic wind generators, and stochastic weather generators more generally, are statistical models that aim to match certain statistical properties of the data on which they are trained. They have been used extensively in applications ranging from agricultural models to climate impact studies. In this novel context, the parameters of the fitted model can be interpreted as encoding the information contained in the original uncompressed data. The statistical model is fit to only 3 of the 30 ensemble members, and it adequately captures the variability of the ensemble in terms of the seasonal and interannual variability of daily wind speed. To deal with such a large spatial domain, it is partitioned into 9 regions, and the model is fit independently to each of these. We further discuss a recent refinement of the model that relaxes this assumption of regional independence by introducing a large-scale component that interacts with the fine-scale regional effects.
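
    The core idea of "compression by fitted parameters" can be sketched in a few lines. This is a deliberately minimal stand-in, not the paper's generator: it fits a gamma distribution (an illustrative non-Gaussian choice) by the method of moments, so the whole series is "stored" as two numbers, and new synthetic series are drawn from the fit.

    ```python
    import numpy as np

    def compress(series):
        """'Compress' a series by storing only two fitted gamma parameters.
        Method-of-moments fit: shape = mean^2/var, scale = var/mean.
        Purely an illustrative stand-in for the paper's wind generator."""
        m, v = series.mean(), series.var()
        return m * m / v, v / m

    def decompress(shape, scale, n, rng):
        """Regenerate a synthetic series from the fitted distribution."""
        return rng.gamma(shape, scale, size=n)

    rng = np.random.default_rng(0)
    wind = rng.weibull(2.0, size=50_000) * 8.0    # synthetic "daily wind speed"
    shape, scale = compress(wind)
    synth = decompress(shape, scale, len(wind), rng)
    ```

    The synthetic series reproduces the first two moments of the original by construction; the paper's actual model additionally encodes seasonality and spatial structure.
    
    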

  9. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. 
This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
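
    A bit-flip fault model of the kind described above can be emulated directly on IEEE-754 doubles. The sketch below is an assumption-laden toy, not the authors' emulator: with probability `fault_rate` per operation it flips one randomly chosen low-order mantissa bit, which confines the induced error to the least significant scales.

    ```python
    import random
    import struct

    def bit_flip(x, fault_rate, n_low_bits=20, rng=random):
        """Emulate a stochastic-processor fault: with probability fault_rate,
        flip one randomly chosen low-order mantissa bit of a float64.
        Restricting flips to the low n_low_bits is an illustrative choice
        that keeps the relative error below ~2**(n_low_bits - 52)."""
        if rng.random() >= fault_rate:
            return x
        bits = struct.unpack("<Q", struct.pack("<d", x))[0]
        bits ^= 1 << rng.randrange(n_low_bits)   # flip one low mantissa bit
        return struct.unpack("<d", struct.pack("<Q", bits))[0]

    rng = random.Random(1)
    x = 3.141592653589793
    faulty = [bit_flip(x, fault_rate=1.0, rng=rng) for _ in range(1000)]
    ```

    With `fault_rate=1.0` every call perturbs the value, yet all perturbations stay far below single-precision rounding error, illustrating why such faults can be tolerated at the small scales.
    
    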

  10. Effect of the heterogeneous neuron and information transmission delay on stochastic resonance of neuronal networks

    NASA Astrophysics Data System (ADS)

    Wang, Qingyun; Zhang, Honghui; Chen, Guanrong

    2012-12-01

    We study the effect of a heterogeneous neuron and information transmission delay on stochastic resonance of scale-free neuronal networks. For this purpose, we introduce the heterogeneity to the specified neuron with the highest degree. It is shown that in the absence of delay, an intermediate noise level can optimally assist spike firings of collective neurons so as to achieve stochastic resonance on scale-free neuronal networks for small and intermediate values of the heterogeneity parameter αh. Maxima of the stochastic resonance measure are enhanced as αh increases, which implies that heterogeneity can improve stochastic resonance. However, when αh exceeds a certain large value, no obvious stochastic resonance can be observed. If information transmission delay is introduced to the neuronal networks, stochastic resonance is dramatically affected. In particular, a suitably tuned information transmission delay can induce multiple stochastic resonance, manifested as well-expressed maxima in the stochastic resonance measure appearing at every multiple of one half of the subthreshold stimulus period. Furthermore, stochastic resonance at odd multiples of one half of the subthreshold stimulus period is subharmonic, as opposed to the case of even multiples. More interestingly, multiple stochastic resonance can also be improved by a suitable heterogeneous neuron. These results provide insight into the effects of neuron heterogeneity and information transmission delay in realistic neuronal networks.
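
    The basic stochastic-resonance effect invoked here can be demonstrated with a single threshold unit rather than the paper's scale-free network. In this toy sketch (all parameters are illustrative assumptions) a subthreshold sinusoid plus Gaussian noise drives spikes, and coherence of the spike train with the drive peaks at intermediate noise:

    ```python
    import cmath
    import math
    import random

    def spike_coherence(noise_sigma, amp=0.9, theta=1.0, omega=0.05,
                        n_steps=20_000, seed=0):
        """Threshold unit driven by a subthreshold sinusoid plus Gaussian
        noise; returns the magnitude of the spike train's Fourier component
        at the drive frequency -- a simple stochastic-resonance measure."""
        rng = random.Random(seed)
        acc = 0.0 + 0.0j
        for t in range(n_steps):
            drive = amp * math.sin(omega * t)
            if drive + rng.gauss(0.0, noise_sigma) > theta:   # spike
                acc += cmath.exp(-1j * omega * t)
        return abs(acc) / n_steps

    q_low = spike_coherence(0.01)   # noise too weak: threshold never crossed
    q_mid = spike_coherence(0.20)   # intermediate noise: resonance
    q_high = spike_coherence(10.0)  # noise swamps the signal
    ```

    Too little noise yields no spikes at all, too much noise yields nearly uniform spiking; only intermediate noise produces spikes phase-locked to the subthreshold stimulus.
    
    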

  11. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME

    PubMed Central

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2017-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948

  12. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen–Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
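
    For contrast with the scalable samplers the abstract proposes, the classical dense approach can be sketched in a few lines: build the full covariance matrix on a small 1-D mesh and factor it. The exponential covariance and mesh below are illustrative assumptions; the O(n³) factorization is exactly what becomes infeasible at large scale.

    ```python
    import numpy as np

    def sample_gaussian_field(x, corr_len, n_samples, rng):
        """Classical dense sampler: form the exponential covariance
        C_ij = exp(-|x_i - x_j| / corr_len), Cholesky-factor it, and
        color white noise. Cost is O(n^3) in the number of mesh points,
        motivating KL-free, SPDE-based alternatives at large scale."""
        C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))  # jitter for stability
        return L @ rng.standard_normal((len(x), n_samples))

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)
    fields = sample_gaussian_field(x, corr_len=0.2, n_samples=2000, rng=rng)
    ```

    Each column is one realization with unit marginal variance and correlation decaying over `corr_len`; SPDE-based methods produce statistically comparable fields without ever forming `C`.
    
    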

  13. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen–Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.

  14. On the decentralized control of large-scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chong, C.

    1973-01-01

    The decentralized control of stochastic large-scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case, in which each decision variable depends on different information and the constraint is only required to be satisfied on the average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled, and certain constraints are then required to be satisfied, in either an off-line or on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained, in which the lower-level problems are all uncoupled. For on-line coordination, a distinction is made between open-loop feedback optimal coordination and closed-loop optimal coordination.

  15. Stochastic Ocean Eddy Perturbations in a Coupled General Circulation Model.

    NASA Astrophysics Data System (ADS)

    Howe, N.; Williams, P. D.; Gregory, J. M.; Smith, R. S.

    2014-12-01

    High-resolution ocean models, which are eddy permitting and resolving, require large computing resources to produce centuries' worth of data. Also, some previous studies have suggested that increasing resolution does not necessarily solve the problem of unresolved scales, because it simply introduces a new set of unresolved scales. Applying stochastic parameterisations to ocean models is one solution that is expected to improve the representation of small-scale (eddy) effects without increasing run-time. Stochastic parameterisation has been shown to have an impact in atmosphere-only models and idealised ocean models, but has not previously been studied in ocean general circulation models. Here we apply simple stochastic perturbations to the ocean temperature and salinity tendencies in the low-resolution coupled climate model FAMOUS. The stochastic perturbations are implemented according to T(t) = T(t-1) + (ΔT(t) + ξ(t)), where T is temperature or salinity, ΔT is the corresponding deterministic increment in one time step, and ξ(t) is Gaussian noise. We use high-resolution HiGEM data coarse-grained to the FAMOUS grid to provide information about the magnitude and spatio-temporal correlation structure of the noise to be added to the lower-resolution model. Here we present results of adding white and red noise, showing the impacts of an additive stochastic perturbation on the mean climate state and variability in an AOGCM.
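
    The perturbation scheme T(t) = T(t-1) + (ΔT(t) + ξ(t)) can be illustrated on a scalar toy. In this sketch the deterministic increment is a fixed constant (an illustrative stand-in for a model tendency), and ξ is AR(1) noise, so one parameter switches between the white- and red-noise cases the abstract compares:

    ```python
    import math
    import random

    def integrate(n_steps, sigma, alpha, seed=0):
        """Toy scalar version of T(t) = T(t-1) + dT(t) + xi(t).
        xi is AR(1): xi_t = alpha*xi_{t-1} + sqrt(1-alpha^2)*sigma*w_t,
        which keeps the marginal std at sigma; alpha = 0 is white noise,
        alpha near 1 is red noise. dT is a fixed illustrative tendency."""
        rng = random.Random(seed)
        T, xi, dT = 15.0, 0.0, 0.001
        out = []
        for _ in range(n_steps):
            xi = alpha * xi + math.sqrt(1.0 - alpha * alpha) * rng.gauss(0.0, sigma)
            T += dT + xi
            out.append(T)
        return out

    white = integrate(5000, sigma=0.01, alpha=0.0)
    red = integrate(5000, sigma=0.01, alpha=0.9)
    ```

    Both runs drift with the deterministic tendency, but the red-noise run wanders further from it because temporally correlated increments accumulate more slowly-decaying excursions, one reason the choice of noise colour matters for the mean state and variability.
    
    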

  16. Stochastic models for regulatory networks of the genetic toggle switch.

    PubMed

    Tian, Tianhai; Burrage, Kevin

    2006-05-30

    Bistability arises within a wide range of biological systems from the lambda phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions indicates that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability properties under various system conditions, these models cannot realize cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for developing quantitative stochastic models for large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully realized experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks.
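
    The paper's technique of introducing Poisson random variables into a deterministic ODE model can be sketched on a textbook toggle switch. All rates below are illustrative assumptions, not the paper's fitted models: over each step of length τ, every deterministic rate r is replaced by a Poisson(r·τ) increment.

    ```python
    import numpy as np

    def toggle_tau_leap(n_steps=20_000, tau=0.01, seed=0):
        """Poisson-update version of a deterministic toggle-switch ODE
        (illustrative rates):
            dx/dt = a/(1 + y**2) - x,   dy/dt = a/(1 + x**2) - y.
        Each deterministic rate r becomes a Poisson(r * tau) random
        increment, giving intrinsic noise on top of the ODE skeleton."""
        rng = np.random.default_rng(seed)
        a = 20.0
        x, y = 1.0, 18.0                  # start near the y-high state
        xs, ys = [], []
        for _ in range(n_steps):
            x_new = x + rng.poisson(a / (1 + y ** 2) * tau) - rng.poisson(x * tau)
            y_new = y + rng.poisson(a / (1 + x ** 2) * tau) - rng.poisson(y * tau)
            x, y = max(x_new, 0), max(y_new, 0)
            xs.append(x)
            ys.append(y)
        return xs, ys

    xs, ys = toggle_tau_leap()
    ```

    The deterministic model would sit in one basin forever; the Poisson noise is what allows the occasional basin switching that produces bimodal population distributions over an ensemble of cells.
    
    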

  17. Stochastic models for regulatory networks of the genetic toggle switch

    PubMed Central

    Tian, Tianhai; Burrage, Kevin

    2006-01-01

    Bistability arises within a wide range of biological systems from the λ phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions indicates that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability properties under various system conditions, these models cannot realize cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for developing quantitative stochastic models for large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully realized experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks. PMID:16714385

  18. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  19. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  20. Synergy of Stochastic and Systematic Energization of Plasmas during Turbulent Reconnection

    NASA Astrophysics Data System (ADS)

    Pisokas, Theophilos; Vlahos, Loukas; Isliker, Heinz

    2018-01-01

    The important characteristic of turbulent reconnection is that it combines large-scale magnetic disturbances (δ B/B∼ 1) with randomly distributed unstable current sheets (UCSs). Many well-known nonlinear MHD structures (strong turbulence, current sheet(s), shock(s)) lead asymptotically to the state of turbulent reconnection. We analyze in this article, for the first time, the energization of electrons and ions in a large-scale environment that combines large-amplitude disturbances propagating with sub-Alfvénic speed with UCSs. The magnetic disturbances interact stochastically (second-order Fermi) with the charged particles and play a crucial role in the heating of the particles, while the UCSs interact systematically (first-order Fermi) and play a crucial role in the formation of the high-energy tail. The synergy of stochastic and systematic acceleration provided by the mixture of magnetic disturbances and UCSs influences the energetics of the thermal and nonthermal particles, the power-law index, and the length of time the particles remain inside the energy release volume. We show that this synergy can explain the observed very fast and impulsive particle acceleration and the slightly delayed formation of a superhot particle population.

  1. Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.

    PubMed

    Salis, Howard; Kaznessis, Yiannis

    2005-02-01

    The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
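
    The exact stochastic simulation algorithm the hybrid method builds on can be sketched for a minimal birth-death system. The rates below are illustrative; the point is that each loop iteration simulates exactly one reaction event, which is why "fast" reactions dominate the cost and motivate hybrid partitioning.

    ```python
    import math
    import random

    def gillespie_ssa(t_end, seed=0):
        """Direct-method SSA for a birth-death process:
            0 -> X at rate k,   X -> 0 at rate g*X   (illustrative rates).
        Draws an exponential waiting time from the total propensity, then
        picks which reaction fired in proportion to its propensity."""
        rng = random.Random(seed)
        k, g = 50.0, 1.0
        x, t, n_events = 0, 0.0, 0
        while True:
            a1, a2 = k, g * x                   # propensities
            a0 = a1 + a2
            t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
            if t > t_end:
                return x, n_events
            x += 1 if rng.random() * a0 < a1 else -1
            n_events += 1

    x_final, n_events = gillespie_ssa(t_end=50.0)
    ```

    At stationarity the copy number is Poisson with mean k/g = 50, and the event count grows in proportion to the total reaction flux: making either reaction "fast" multiplies the number of iterations, which is the cost the hybrid Langevin/SSA split avoids.
    
    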

  2. Stochastic dynamics of genetic broadcasting networks

    NASA Astrophysics Data System (ADS)

    Potoyan, Davit; Wolynes, Peter

    The complex genetic programs of eukaryotic cells are often regulated by key transcription factors occupying or clearing out of a large number of genomic locations. Orchestrating the residence times of these factors is therefore important for the well organized functioning of a large network. The classic models of genetic switches sidestep this timing issue by assuming the binding of transcription factors to be governed entirely by thermodynamic protein-DNA affinities. Here we show that relying on passive thermodynamics and random release times can lead to a ''time-scale crisis'' of master genes that broadcast their signals to large number of binding sites. We demonstrate that this ''time-scale crisis'' can be resolved by actively regulating residence times through molecular stripping. We illustrate these ideas by studying the stochastic dynamics of the genetic network of the central eukaryotic master regulator NFκB which broadcasts its signals to many downstream genes that regulate immune response, apoptosis etc.

  3. A scalable moment-closure approximation for large-scale biochemical reaction networks

    PubMed Central

    Kazeroonian, Atefeh; Theis, Fabian J.; Hasenauer, Jan

    2017-01-01

    Motivation: Stochastic molecular processes are a leading cause of cell-to-cell variability. Their dynamics are often described by continuous-time discrete-state Markov chains and simulated using stochastic simulation algorithms. As these stochastic simulations are computationally demanding, ordinary differential equation models for the dynamics of the statistical moments have been developed. The number of state variables of these approximating models, however, grows at least quadratically with the number of biochemical species. This limits their application to small- and medium-sized processes. Results: In this article, we present a scalable moment-closure approximation (sMA) for the simulation of statistical moments of large-scale stochastic processes. The sMA exploits the structure of the biochemical reaction network to reduce the covariance matrix. We prove that sMA yields approximating models whose number of state variables depends predominantly on local properties, i.e. the average node degree of the reaction network, instead of the overall network size. The resulting complexity reduction is assessed by studying a range of medium- and large-scale biochemical reaction networks. To evaluate the approximation accuracy and the improvement in computational efficiency, we study models for JAK2/STAT5 signalling and NFκB signalling. Our method is applicable to generic biochemical reaction networks and we provide an implementation, including an SBML interface, which renders the sMA easily accessible. Availability and implementation: The sMA is implemented in the open-source MATLAB toolbox CERENA and is available from https://github.com/CERENADevelopers/CERENA. Contact: jan.hasenauer@helmholtz-muenchen.de or atefeh.kazeroonian@tum.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881983
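
    The moment-equation idea that the sMA scales up can be seen in the one case where the hierarchy closes exactly: a linear birth-death process. The sketch below integrates the exact mean/variance ODEs (illustrative rates, not the paper's models); for nonlinear propensities these equations would couple to ever-higher moments, which is what closure schemes truncate.

    ```python
    def moment_odes(t_end, dt=0.001, k=50.0, g=1.0):
        """Exact moment equations for the birth-death process
        (0 -> X at rate k, X -> 0 at rate g*X), derived from the
        chemical master equation:
            dm/dt = k - g*m
            dv/dt = k + g*m - 2*g*v
        Integrated with forward Euler; the stationary state is the
        Poisson result m = v = k/g."""
        m, v = 0.0, 0.0
        for _ in range(int(t_end / dt)):
            dm = k - g * m
            dv = k + g * m - 2.0 * g * v
            m += dt * dm
            v += dt * dv
        return m, v

    m, v = moment_odes(t_end=20.0)
    ```

    Two ODEs replace an entire ensemble of stochastic simulations here; the sMA's contribution is keeping the number of such moment equations near-linear in network size rather than quadratic.
    
    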

  4. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  5. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Application of the stochastic parallel gradient descent algorithm for numerical simulation and analysis of the coherent summation of radiation from fibre amplifiers

    NASA Astrophysics Data System (ADS)

    Zhou, Pu; Wang, Xiaolin; Li, Xiao; Chen, Zilum; Xu, Xiaojun; Liu, Zejin

    2009-10-01

    Coherent summation of fibre laser beams, which can be scaled to a relatively large number of elements, is simulated by using the stochastic parallel gradient descent (SPGD) algorithm. The applicability of this algorithm for coherent summation is analysed, and its optimisation parameters and bandwidth limitations are studied.
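
    The SPGD iteration itself is compact enough to sketch for piston-phase locking of a few beams. Everything below (channel count, metric, gains) is an illustrative assumption, not the cited simulation setup: all channels are perturbed simultaneously by random ±δ, and the two-sided metric difference estimates the gradient.

    ```python
    import cmath
    import random

    def combined_power(controls, phase_errors):
        """On-axis power of N coherently summed unit-amplitude beams."""
        field = sum(cmath.exp(1j * (u + e)) for u, e in zip(controls, phase_errors))
        return abs(field) ** 2

    def spgd(phase_errors, n_iters=3000, delta=0.1, gain=0.05, seed=0):
        """Stochastic parallel gradient descent (illustrative parameters):
        perturb every channel at once by a random +/-delta, measure the
        metric on both perturbed states, and step each control in
        proportion to (J+ - J-) times its own perturbation."""
        rng = random.Random(seed)
        u = [0.0] * len(phase_errors)
        for _ in range(n_iters):
            d = [delta if rng.random() < 0.5 else -delta for _ in u]
            j_plus = combined_power([ui + di for ui, di in zip(u, d)], phase_errors)
            j_minus = combined_power([ui - di for ui, di in zip(u, d)], phase_errors)
            u = [ui + gain * (j_plus - j_minus) * di for ui, di in zip(u, d)]
        return u

    rng = random.Random(42)
    errors = [rng.uniform(-3.14, 3.14) for _ in range(8)]  # unknown piston phases
    controls = spgd(errors)
    final = combined_power(controls, errors)
    ```

    For 8 beams the ideal phased power is N² = 64; the algorithm needs only the scalar metric, not per-channel phase measurements, which is what makes it attractive for scaling to many elements.
    
    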

  6. Stochastic four-way coupling of gas-solid flows for Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Curran, Thomas; Denner, Fabian; van Wachem, Berend

    2017-11-01

    The interaction of solid particles with turbulence has for long been a topic of interest for predicting the behavior of industrially relevant flows. For the turbulent fluid phase, Large Eddy Simulation (LES) methods are widely used for their low computational cost, leaving only the sub-grid scales (SGS) of turbulence to be modelled. Although LES has seen great success in predicting the behavior of turbulent single-phase flows, the development of LES for turbulent gas-solid flows is still in its infancy. This contribution aims at constructing a model to describe the four-way coupling of particles in an LES framework, by considering the role particles play in the transport of turbulent kinetic energy across the scales. Firstly, a stochastic model reconstructing the sub-grid velocities for the particle tracking is presented. Secondly, to solve particle-particle interaction, most models involve a deterministic treatment of the collisions. We finally introduce a stochastic model for estimating the collision probability. All results are validated against fully resolved DNS-DPS simulations. The final goal of this contribution is to propose a global stochastic method adapted to two-phase LES simulation where the number of particles considered can be significantly increased. Financial support from PetroBras is gratefully acknowledged.

  7. Ensemble Kalman filters for dynamical systems with unresolved turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.

    Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called ‘representation’ or ‘representativeness’ error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgrid-scale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small scale turbulence: a shallow energy spectrum proportional to k^(−5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics.
Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.« less
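    The role of representation error in the analysis step can be illustrated with a minimal perturbed-observation ensemble Kalman filter. The sketch below (plain NumPy; the dimensions, covariances, and identity observation operator are illustrative assumptions, not the paper's setup) inflates the observation-error covariance by a representation-error term before computing the gain:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """Stochastic (perturbed-observation) EnKF analysis step.

    X : (n, N) forecast ensemble, y : (p,) observation,
    H : (p, n) observation operator, R : (p, p) observation-error covariance.
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    P_HT = A @ (H @ A).T / (N - 1)               # P H^T
    S = H @ P_HT + R                             # innovation covariance
    K = P_HT @ np.linalg.inv(S)                  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

# Truth = resolved large scale; observations also see unresolved scales.
n, N = 3, 200
x_large = rng.normal(size=n)
H = np.eye(n)
R_instr = 0.01 * np.eye(n)   # raw measurement error
R_repr = 0.5 * np.eye(n)     # (assumed) representation-error estimate
y = x_large + rng.normal(0, np.sqrt(0.5), n) + rng.normal(0, 0.1, n)

X = x_large[:, None] + rng.normal(size=(n, N))  # forecast ensemble
Xa = enkf_update(X, y, H, R_instr + R_repr)     # R inflated by representation error
```

    In the paper's setting the representation-error estimate evolves with the large-scale state via superparameterization; here it is a fixed matrix purely for illustration.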

  8. Stochastic dynamics of genetic broadcasting networks

    NASA Astrophysics Data System (ADS)

    Potoyan, Davit A.; Wolynes, Peter G.

    2017-11-01

    The complex genetic programs of eukaryotic cells are often regulated by key transcription factors occupying or clearing out of a large number of genomic locations. Orchestrating the residence times of these factors is therefore important for the well-organized functioning of a large network. The classic models of genetic switches sidestep this timing issue by assuming the binding of transcription factors to be governed entirely by thermodynamic protein-DNA affinities. Here we show that relying on passive thermodynamics and random release times can lead to a "time-scale crisis" for master genes that broadcast their signals to a large number of binding sites. We demonstrate that this time-scale crisis for clearance in a large broadcasting network can be resolved by actively regulating residence times through molecular stripping. We illustrate these ideas by studying a model of the stochastic dynamics of the genetic network of the central eukaryotic master regulator NF-κB, which broadcasts its signals to many downstream genes that regulate immune response, apoptosis, etc.
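    The "time-scale crisis" can be made concrete with a toy Monte Carlo estimate: if each of N occupied sites releases its factor independently at rate k_off, the expected time until all sites are clear grows like the harmonic number H_N ≈ ln N, whereas any mechanism that boosts the effective release rate (here a crude stand-in for molecular stripping) caps it. All rates below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def clearance_time(n_sites, k_off, n_trials=2000):
    """Mean time until ALL sites are vacated when each site releases
    independently at rate k_off (passive, thermodynamic release)."""
    t = rng.exponential(1.0 / k_off, size=(n_trials, n_sites))
    return t.max(axis=1).mean()

k_off = 1.0
t10 = clearance_time(10, k_off)
t1000 = clearance_time(1000, k_off)
# Passive clearance of N sites scales like H_N ~ ln N, so broadcasting to
# many sites slows global clearance -- the "time-scale crisis".  Boosting
# every site's release rate (a crude stand-in for active stripping)
# caps the clearance time:
t1000_strip = clearance_time(1000, k_off + 9.0)
```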

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Fuke, E-mail: wufuke@mail.hust.edu.cn; Tian, Tianhai, E-mail: tianhai.tian@sci.monash.edu.au; Rawlings, James B., E-mail: james.rawlings@wisc.edu

    A frequently used reduction technique for stochastic chemical kinetics with two time scales is based on the chemical master equation and yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions, so the SSA remains computationally expensive. Because the chemical Langevin equations (CLEs) can effectively handle a large number of molecular species and reactions, this paper develops a reduction method based on the CLE, using the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because, in stochastic chemical kinetics, the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. This demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
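    For reference, the unreduced SSA mentioned above is Gillespie's algorithm; a minimal sketch for the simplest birth-death network 0 → X → 0 (rates chosen arbitrarily) is:

```python
import numpy as np

rng = np.random.default_rng(2)

def ssa_birth_death(k_prod, k_deg, x0, t_end):
    """Gillespie SSA for the birth-death process 0 ->(k_prod) X ->(k_deg*X) 0."""
    t, x = 0.0, x0
    while True:
        a1, a2 = k_prod, k_deg * x          # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)      # waiting time to next reaction
        if t > t_end:
            return x
        x += 1 if rng.random() < a1 / a0 else -1

# Stationary mean of X is k_prod / k_deg; check by sample average.
samples = [ssa_birth_death(20.0, 1.0, 0, 30.0) for _ in range(300)]
mean_x = float(np.mean(samples))
```

    Each SSA step simulates one reaction event, which is exactly why, for networks with many fast reactions, the CLE-based averaging above pays off.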

  10. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth.

    PubMed

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. This method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity.
We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of the front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth.
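    The equilibrium-age-sampling idea can be illustrated on the simplest age-structured model: cells dividing at a constant rate b, for which the stable age distribution is exponential with rate 2b (survival e^(-ba) discounted by the exponential growth). The crude fixed-step simulation below (all parameters hypothetical, unrelated to the tumour model) checks this; the same closed-form law is what a hybrid scheme could sample when transferring cells into the stochastic region:

```python
import numpy as np

rng = np.random.default_rng(11)

b = 1.0                       # division rate (hypothetical)
dt, t_end = 0.001, 6.0
ages = np.zeros(200)          # founding cohort, all of age 0
for _ in range(int(t_end / dt)):
    ages += dt
    divide = rng.random(ages.size) < b * dt
    n_div = int(divide.sum())
    # each dividing mother is replaced by two newborns of age 0
    ages = np.concatenate([ages[~divide], np.zeros(2 * n_div)])

mean_age = float(ages.mean())   # stable law Exp(2b) has mean 1/(2b) = 0.5
```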

  11. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    NASA Astrophysics Data System (ADS)

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. This method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity.
We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of the front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth.

  12. The role of the airline transportation network in the prediction and predictability of global epidemics.

    PubMed

    Colizza, Vittoria; Barrat, Alain; Barthélemy, Marc; Vespignani, Alessandro

    2006-02-14

    The systematic study of large-scale networks has unveiled the ubiquitous presence of connectivity patterns characterized by large-scale heterogeneities and unbounded statistical fluctuations. These features affect dramatically the behavior of the diffusion processes occurring on networks, determining the ensuing statistical properties of their evolution pattern and dynamics. In this article, we present a stochastic computational framework for the forecast of global epidemics that considers the complete worldwide air travel infrastructure complemented with census population data. We address two basic issues in global epidemic modeling: (i) we study the role of the large scale properties of the airline transportation network in determining the global diffusion pattern of emerging diseases; and (ii) we evaluate the reliability of forecasts and outbreak scenarios with respect to the intrinsic stochasticity of disease transmission and traffic flows. To address these issues we define a set of quantitative measures able to characterize the level of heterogeneity and predictability of the epidemic pattern. These measures may be used for the analysis of containment policies and epidemic risk assessment.
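    The intrinsic stochasticity of disease transmission in (ii) can be demonstrated even without the network: a discrete-time binomial-chain SIR model (a simplified stand-in for the paper's stochastic framework, with made-up parameters) produces a bimodal outcome distribution in which some introductions fizzle while others become large outbreaks:

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_sir(n, i0, beta, gamma, dt=0.1, t_end=100.0):
    """Discrete-time binomial-chain SIR: infection and recovery counts are
    drawn binomially each step, capturing demographic stochasticity."""
    s, i, r = n - i0, i0, 0
    t = 0.0
    while t < t_end and i > 0:
        p_inf = 1 - np.exp(-beta * i / n * dt)   # per-susceptible infection prob.
        p_rec = 1 - np.exp(-gamma * dt)          # per-infective recovery prob.
        new_inf = rng.binomial(s, p_inf)
        new_rec = rng.binomial(i, p_rec)
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        t += dt
    return r  # final epidemic size

# With R0 = beta/gamma = 3, some runs take off and some fizzle: the outcome
# distribution is bimodal, a signature of intrinsic stochasticity.
sizes = np.array([stochastic_sir(1000, 1, 0.6, 0.2) for _ in range(200)])
frac_large = float((sizes > 500).mean())
```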

  13. Trend assessment: applications for hydrology and climate research

    NASA Astrophysics Data System (ADS)

    Kallache, M.; Rust, H. W.; Kropp, J.

    2005-02-01

    The assessment of trends in climatology and hydrology is still a matter of debate. Capturing typical properties of time series, such as trends, is highly relevant for the discussion of potential impacts of global warming or flood occurrences. It provides indicators for the separation of anthropogenic signals and natural forcing factors by distinguishing between deterministic trends and stochastic variability. In this contribution, river run-off data from gauges in Southern Germany are analysed regarding their trend behaviour by combining a deterministic trend component and a stochastic model part in a semi-parametric approach. In this way, the trade-off between trend and autocorrelation structure can be considered explicitly. A test for a significant trend is introduced via three steps: first, a stochastic fractional ARIMA model, which is able to reproduce short-term as well as long-term correlations, is fitted to the empirical data. In a second step, wavelet analysis is used to separate the variability of small and large time-scales, assuming that the trend component is part of the latter. Finally, a comparison of the overall variability to that restricted to small scales results in a test for a trend. The extraction of the large-scale behaviour by wavelet analysis provides a clue concerning the shape of the trend.
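    The logic of the three-step test can be sketched without a full wavelet decomposition: below, a moving average stands in for the large-scale (trend-carrying) component, and the test statistic compares total variance with small-scale variance. This is a deliberately crude NumPy-only approximation of the wavelet separation, on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(4)

def scale_split(x, window):
    """Crude scale separation: a moving average plays the role of the
    large-scale (trend-carrying) component; the residual holds the
    small-scale variability.  (A stand-in for the wavelet separation.)"""
    kernel = np.ones(window) / window
    large = np.convolve(x, kernel, mode="same")
    return large, x - large

n = 2000
noise = rng.normal(size=n)              # stationary "no-trend" series
trend = 0.002 * np.arange(n)            # deterministic trend
large0, small0 = scale_split(noise, 101)
large1, small1 = scale_split(noise + trend, 101)

# A trend inflates the total variance relative to small-scale variance only,
# so the ratio acts as a (toy) trend test statistic.
ratio0 = noise.var() / small0.var()
ratio1 = (noise + trend).var() / small1.var()
```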

  14. Dynamically Consistent Parameterization of Mesoscale Eddies

    NASA Astrophysics Data System (ADS)

    Berloff, P. S.

    2016-12-01

    This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical, wind-driven double-gyre model, which is solved both with explicitly resolved vigorous eddy field and in the non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces eastward jet extension of the western boundary currents and its adjacent recirculation zones. The parameterization locally approximates transient eddy flux divergence by spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced eddy forcing exerted on the large-scale flow. We find that spatial pattern and amplitude of each footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. Thus, the assumed ensemble of plunger solutions can be viewed as a simple model for the cumulative effect of the stochastic eddy forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.

  15. Hill functions for stochastic gene regulatory networks from master equations with split nodes and time-scale separation

    NASA Astrophysics Data System (ADS)

    Lipan, Ovidiu; Ferwerda, Cameron

    2018-02-01

    The deterministic Hill function depends only on the average values of molecule numbers. To account for fluctuations in the molecule numbers, the argument of the Hill function needs to contain the means, the standard deviations, and the correlations. Here we present a method that allows stochastic Hill functions to be constructed from the dynamical evolution of stochastic biocircuits with specific topologies. These stochastic Hill functions are presented in a closed analytical form so that they can be easily incorporated in models for large genetic regulatory networks. Using a repressive biocircuit as an example, we show by Monte Carlo simulations that the traditional deterministic Hill function mispredicts the time of repression by two orders of magnitude. However, the stochastic Hill function was able to capture the fluctuations and thus accurately predicted the time of repression.
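    The underlying point, that a Hill function evaluated at the mean differs from the mean of the Hill function over fluctuating molecule numbers (Jensen's inequality), is easy to verify by Monte Carlo. The repressor distribution and parameters below are invented for illustration and are not the biocircuit of the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def hill_repress(x, K, h):
    """Deterministic repressive Hill function."""
    return 1.0 / (1.0 + (x / K) ** h)

# For a fluctuating repressor X, E[H(X)] != H(E[X]) in general; the
# mismatch grows with the variance of X.
K, h = 10.0, 4.0
x_mean = 10.0
x = rng.gamma(shape=4.0, scale=x_mean / 4.0, size=100_000)  # noisy copy numbers

h_of_mean = hill_repress(x_mean, K, h)       # deterministic prediction
mean_of_h = hill_repress(x, K, h).mean()     # fluctuation-aware value
```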

  16. Charge and energy migration in molecular clusters: A stochastic Schrödinger equation approach.

    PubMed

    Plehn, Thomas; May, Volkhard

    2017-01-21

    The performance of stochastic Schrödinger equations for simulating dynamic phenomena in large scale open quantum systems is studied. Going beyond small system sizes, commonly used master equation approaches become inadequate. In this regime, wave function based methods profit from their inherent scaling benefit and present a promising tool to study, for example, exciton and charge carrier dynamics in huge and complex molecular structures. In the first part of this work, a strict analytic derivation is presented. It starts with the finite temperature reduced density operator expanded in coherent reservoir states and ends up with two linear stochastic Schrödinger equations. Both equations are valid in the weak and intermediate coupling limit and can be properly related to two existing approaches in literature. In the second part, we focus on the numerical solution of these equations. The main issue is the missing norm conservation of the wave function propagation which may lead to numerical discrepancies. To illustrate this, we simulate the exciton dynamics in the Fenna-Matthews-Olson complex in direct comparison with the data from literature. Subsequently a strategy for the proper computational handling of the linear stochastic Schrödinger equation is exposed particularly with regard to large systems. Here, we study charge carrier transfer kinetics in realistic hybrid organic/inorganic para-sexiphenyl/ZnO systems of different extension.
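    The norm-conservation issue can be seen in a toy two-level system: integrating a linear (unnormalized) stochastic Schrödinger equation by Euler-Maruyama, single trajectories wander away from unit norm even though the ensemble mean of ||ψ||² is preserved. The Hamiltonian and coupling operator below are arbitrary stand-ins, not the molecular-cluster models of the paper:

```python
import numpy as np

rng = np.random.default_rng(10)

# Linear (unnormalized) stochastic Schrodinger equation
#   d|psi> = (-iH - L^dag L / 2)|psi> dt + L|psi> dW,
# integrated by Euler-Maruyama for all trajectories at once.
H = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)            # toy Hamiltonian
L = np.sqrt(0.3) * np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)  # toy coupling

def final_norms(n_traj=500, dt=0.001, n_steps=2000):
    psi = np.ones((2, n_traj), dtype=complex) / np.sqrt(2)
    drift = -1j * H - 0.5 * L.conj().T @ L
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_traj)
        psi = psi + dt * (drift @ psi) + dW * (L @ psi)
    return (psi.conj() * psi).real.sum(axis=0)   # ||psi||^2 per trajectory

norms = final_norms()
# Individual norms spread out (no trajectory-wise conservation), while the
# ensemble average of ||psi||^2 stays close to 1.
```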

  17. Charge and energy migration in molecular clusters: A stochastic Schrödinger equation approach

    NASA Astrophysics Data System (ADS)

    Plehn, Thomas; May, Volkhard

    2017-01-01

    The performance of stochastic Schrödinger equations for simulating dynamic phenomena in large scale open quantum systems is studied. Going beyond small system sizes, commonly used master equation approaches become inadequate. In this regime, wave function based methods profit from their inherent scaling benefit and present a promising tool to study, for example, exciton and charge carrier dynamics in huge and complex molecular structures. In the first part of this work, a strict analytic derivation is presented. It starts with the finite temperature reduced density operator expanded in coherent reservoir states and ends up with two linear stochastic Schrödinger equations. Both equations are valid in the weak and intermediate coupling limit and can be properly related to two existing approaches in literature. In the second part, we focus on the numerical solution of these equations. The main issue is the missing norm conservation of the wave function propagation which may lead to numerical discrepancies. To illustrate this, we simulate the exciton dynamics in the Fenna-Matthews-Olson complex in direct comparison with the data from literature. Subsequently a strategy for the proper computational handling of the linear stochastic Schrödinger equation is exposed particularly with regard to large systems. Here, we study charge carrier transfer kinetics in realistic hybrid organic/inorganic para-sexiphenyl/ZnO systems of different extension.

  18. WKB theory of large deviations in stochastic populations

    NASA Astrophysics Data System (ADS)

    Assaf, Michael; Meerson, Baruch

    2017-06-01

    Stochasticity can play an important role in the dynamics of biologically relevant populations. These span a broad range of scales: from intra-cellular populations of molecules to populations of cells and then to groups of plants, animals and people. Large deviations in stochastic population dynamics—such as those determining population extinction, fixation or switching between different states—are presently a focus of attention of statistical physicists. We review recent progress in applying different variants of dissipative WKB approximation (after Wentzel, Kramers and Brillouin) to this class of problems. The WKB approximation allows one to evaluate the mean time and/or probability of population extinction, fixation and switches resulting from either intrinsic (demographic) noise, or a combination of the demographic noise and environmental variations, deterministic or random. We mostly cover well-mixed populations, single and multiple, but also briefly consider populations on heterogeneous networks and spatial populations. The spatial setting also allows one to study large fluctuations of the speed of biological invasions. Finally, we briefly discuss possible directions of future work.
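    As a concrete instance of the WKB machinery, the mean extinction time of the stochastic SIS (logistic) model scales as exp(S), with an action obtained from the log-ratio of birth and death rates. This is a standard textbook case, sketched here numerically against its known closed form:

```python
import numpy as np

def wkb_extinction_action(R0, N, n_grid=100_000):
    """WKB 'barrier' for the stochastic SIS model with birth rate
    lambda_n = R0*n*(1 - n/N) and death rate mu_n = n.  The mean
    extinction time scales as exp(S), with
        S = integral_0^{n*} ln(lambda_n / mu_n) dn,   n* = N(1 - 1/R0)."""
    n_star = N * (1.0 - 1.0 / R0)
    n = np.linspace(0.0, n_star, n_grid)
    integrand = np.log(R0 * (1.0 - n / N))
    # version-safe trapezoidal rule (np.trapz was removed in NumPy 2.0)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(n)))

N, R0 = 1000, 2.0
S_numeric = wkb_extinction_action(R0, N)
S_exact = N * (np.log(R0) - 1.0 + 1.0 / R0)   # known closed form for SIS
```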

  19. Data-driven Climate Modeling and Prediction

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.

    2016-12-01

    Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has abundantly proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing, and estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulation. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast-small scales ranked by layers, which interact with the macroscopic (observed) variables of large-slow scales to model the dynamics of the latter, and thus convey memory effects. New MSM climate applications focus on development of computationally efficient low-order models by using data-adaptive decomposition methods that convey memory effects by time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016].
In particular, new results by DAH-MSM modeling and prediction of Arctic Sea Ice, as well as decadal predictions of near-surface Earth temperatures will be presented.
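    The core of such data-driven inverse modeling, estimating a dynamical operator and noise from time series, can be sketched at its simplest: generate data from a known linear SDE and recover the operator by least squares on the one-step increments. This single-level toy omits the hidden layers and memory effects that MSM adds:

```python
import numpy as np

rng = np.random.default_rng(6)

# "True" linear SDE  dx = A x dt + sigma dW, simulated by Euler-Maruyama.
A_true = np.array([[-0.5, 1.0], [-1.0, -0.5]])
dt, n_steps = 0.01, 100_000
x = np.zeros((n_steps, 2))
for k in range(n_steps - 1):
    x[k + 1] = x[k] + dt * (A_true @ x[k]) + np.sqrt(dt) * rng.normal(0.0, 0.3, 2)

# Inverse step: regress increments on the state to estimate the operator.
dx = (x[1:] - x[:-1]) / dt
B, *_ = np.linalg.lstsq(x[:-1], dx, rcond=None)
A_fit = B.T    # estimated dynamical operator
```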

  20. Walking the Filament of Feasibility: Global Optimization of Highly-Constrained, Multi-Modal Interplanetary Trajectories Using a Novel Stochastic Search Technique

    NASA Technical Reports Server (NTRS)

    Englander, Arnold C.; Englander, Jacob A.

    2017-01-01

    Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the global optimal solution.

  1. A stochastic two-scale model for pressure-driven flow between rough surfaces

    PubMed Central

    Larsson, Roland; Lundström, Staffan; Wall, Peter; Almqvist, Andreas

    2016-01-01

    Seal surface topography typically consists of global-scale geometric features as well as local-scale roughness details, and homogenization-based approaches are, therefore, readily applied. These provide for resolving the global scale (large domain) with a relatively coarse mesh, while resolving the local scale (small domain) in high detail. As the total flow decreases, however, the flow pattern becomes tortuous, and this requires a larger local-scale domain to obtain a converged solution. Therefore, a classical homogenization-based approach might not be feasible for simulation of very small flows. In order to study small flows, a model allowing feasibly sized local domains for very small flow rates is developed. Realization was made possible by coupling the two scales with a stochastic element. Results from numerical experiments show that the present model is in better agreement with the direct deterministic one than the conventional homogenization type of model, both quantitatively in terms of flow rate and qualitatively in reflecting the flow pattern. PMID:27436975

  2. A Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land surface heterogeneity in the atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Parsakhoo, Zahra; Shao, Yaping

    2017-04-01

    Near-surface turbulent mixing has a considerable effect on surface fluxes, cloud formation and convection in the atmospheric boundary layer (ABL). Its quantification is, however, a modeling and computational challenge, since the small eddies are not fully resolved in Eulerian models directly. We have developed a Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land surface heterogeneity in the atmospheric boundary layer, based on the Ito Stochastic Differential Equation (SDE) for air parcels (particles). Due to the complexity of the mixing in the ABL, we find that a linear Ito SDE cannot represent convection properly. Three strategies have been tested to solve the problem: 1) to make the deterministic term in the Ito equation non-linear; 2) to make the random term in the Ito equation fractional; and 3) to modify the Ito equation by including Levy flights. We focus on the third strategy and interpret mixing as interaction between at least two stochastic processes with different Lagrangian time scales. Work is in progress to include collisions among particles with different characteristics and to apply the 3D model to real cases. One application of the model is emphasized: some land surface patterns are generated and then coupled with Large Eddy Simulation (LES).
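    Before any of the three modifications, the baseline linear Ito SDE for parcel velocity is the classical Langevin model; an Euler-Maruyama sketch (arbitrary parameters) shows it relaxing to the expected Gaussian velocity statistics, the behaviour found too restrictive for convection:

```python
import numpy as np

rng = np.random.default_rng(7)

def langevin_parcels(n, u_mean, sigma, T_L, dt, n_steps):
    """Euler-Maruyama integration of the linear Ito SDE for parcel velocity:
        du = -(u - u_mean)/T_L dt + sqrt(2 sigma^2 / T_L) dW,
    the classical Lagrangian stochastic (Langevin) model."""
    u = np.full(n, u_mean)
    x = np.zeros(n)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n)
        u += -(u - u_mean) / T_L * dt + np.sqrt(2 * sigma**2 / T_L) * dW
        x += u * dt
    return x, u

x, u = langevin_parcels(5000, 1.0, 0.5, 2.0, 0.01, 2000)
# After many Lagrangian time scales the velocity pdf relaxes to
# N(u_mean, sigma^2) and the parcel cloud spreads diffusively.
```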

  3. A stochastic multi-scale method for turbulent premixed combustion

    NASA Astrophysics Data System (ADS)

    Cha, Chong M.

    2002-11-01

    The stochastic chemistry algorithm of Bunker et al. and Gillespie is used to perform the chemical reactions in a transported probability density function (PDF) modeling approach to turbulent combustion. Recently, Kraft & Wagner have demonstrated a 100-fold gain in computational speed (for a 100 species mechanism) using the stochastic approach over the conventional, direct integration method of solving for the chemistry. Here, the stochastic chemistry algorithm is applied to develop a new transported PDF model of turbulent premixed combustion. The methodology relies on representing the relevant spatially dependent physical processes as queuing events. The canonical problem of a one-dimensional premixed flame is used for validation. For the laminar case, molecular diffusion is described by a random walk. For the turbulent case, one of two different material transport submodels can provide the necessary closure: Taylor dispersion or Kerstein's one-dimensional turbulence approach. The former exploits "eddy diffusivity" and hence would be much more computationally tractable for practical applications. Various validation studies are performed. Results from the Monte Carlo simulations compare well to asymptotic solutions of laminar premixed flames, both with and without high activation temperatures. The correct scaling of the turbulent burning velocity is predicted in both Damköhler's small- and large-scale turbulence limits. The effect of applying the eddy diffusivity concept in the various regimes is discussed.

  4. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks.

    PubMed

    Adalsteinsson, David; McMillen, David; Elston, Timothy C

    2004-03-08

    Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS can also be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
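    Chemical Langevin equations of the kind BioNetS solves for continuous species have a simple structure: each reaction channel contributes a drift a_j and an independent noise of size sqrt(a_j). A hand-written sketch for a single birth-death species (not BioNetS output; rates invented) is:

```python
import numpy as np

rng = np.random.default_rng(8)

def cle_birth_death(k_prod, k_deg, x0, dt, n_steps):
    """Chemical Langevin equation for 0 ->(k_prod) X ->(k_deg*X) 0:
    each reaction channel j contributes a drift a_j*dt and a noise
    sqrt(a_j*dt) (valid when propensities are large)."""
    x = float(x0)
    for _ in range(n_steps):
        a1, a2 = k_prod, k_deg * max(x, 0.0)     # channel propensities
        x += (a1 - a2) * dt
        x += np.sqrt(a1 * dt) * rng.normal() - np.sqrt(a2 * dt) * rng.normal()
    return x

samples = [cle_birth_death(400.0, 1.0, 400.0, 0.01, 2000) for _ in range(200)]
mean_x, var_x = float(np.mean(samples)), float(np.var(samples))
# Stationary law is approximately Poisson(400): mean ~ var ~ 400.
```

    Contrast with the Gillespie route: the CLE takes fixed time steps regardless of how many reaction events occur, which is why it scales to species with large copy numbers.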

  5. Capturing the Large Scale Behavior of Many Particle Systems Through Coarse-Graining

    NASA Astrophysics Data System (ADS)

    Punshon-Smith, Samuel

    This dissertation is concerned with two areas of investigation: the first is understanding the mathematical structures behind the emergence of macroscopic laws and the effects of small scales fluctuations, the second involves the rigorous mathematical study of such laws and related questions of well-posedness. To address these areas of investigation the dissertation involves two parts: Part I concerns the theory of coarse-graining of many particle systems. We first investigate the mathematical structure behind the Mori-Zwanzig (projection operator) formalism by introducing two perturbative approaches to coarse-graining of systems that have an explicit scale separation. One concerns systems with little dissipation, while the other concerns systems with strong dissipation. In both settings we obtain an asymptotic series of `corrections' to the limiting description which are small with respect to the scaling parameter, these corrections represent the effects of small scales. We determine that only certain approximations give rise to dissipative effects in the resulting evolution. Next we apply this framework to the problem of coarse-graining the locally conserved quantities of a classical Hamiltonian system. By lumping conserved quantities into a collection of mesoscopic cells, we obtain, through a series of approximations, a stochastic particle system that resembles a discretization of the non-linear equations of fluctuating hydrodynamics. We study this system in the case that the transport coefficients are constant and prove well-posedness of the stochastic dynamics. Part II concerns the mathematical description of models where the underlying characteristics are stochastic. Such equations can model, for instance, the dynamics of a passive scalar in a random (turbulent) velocity field or the statistical behavior of a collection of particles subject to random environmental forces. 
First, we study general well-posedness properties of stochastic transport equation with rough diffusion coefficients. Our main result is strong existence and uniqueness under certain regularity conditions on the coefficients, and uses the theory of renormalized solutions of transport equations adapted to the stochastic setting. Next, in a work undertaken with collaborator Scott-Smith we study the Boltzmann equation with a stochastic forcing. The noise describing the forcing is white in time and colored in space and describes the effects of random environmental forces on a rarefied gas undergoing instantaneous, binary collisions. Under a cut-off assumption on the collision kernel and a coloring hypothesis for the noise coefficients, we prove the global existence of renormalized (DiPerna/Lions) martingale solutions to the Boltzmann equation for large initial data with finite mass, energy, and entropy. Our analysis includes a detailed study of weak martingale solutions to a class of linear stochastic kinetic equations. Tightness of the appropriate quantities is proved by an extension of the Skorohod theorem to non-metric spaces.

  6. State-dependent anisotropy: Comparison of quasi-analytical solutions with stochastic results for steady gravity drainage

    USGS Publications Warehouse

    Green, Timothy R.; Freyberg, David L.

    1995-01-01

    Anisotropy in large-scale unsaturated hydraulic conductivity of layered soils changes with the moisture state. Here, state-dependent anisotropy is computed under conditions of large-scale gravity drainage. Soils represented by Gardner's exponential function are perfectly stratified, periodic, and inclined. Analytical integration of Darcy’s law across each layer results in a system of nonlinear equations that is solved iteratively for capillary suction at layer interfaces and for the Darcy flux normal to layering. Computed fluxes and suction profiles are used to determine both upscaled hydraulic conductivity in the principal directions and the corresponding “state-dependent” anisotropy ratio as functions of the mean suction. Three groups of layered soils are analyzed and compared with independent predictions from the stochastic results of Yeh et al. (1985b). The small-perturbation approach predicts appropriate behaviors for anisotropy under nonarid conditions. However, the stochastic results are limited to moderate values of mean suction; this limitation is linked to a Taylor series approximation in terms of a group of statistical and geometric parameters. Two alternative forms of the Taylor series provide upper and lower bounds for the state-dependent anisotropy of relatively dry soils.
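The directional upscaling behind the state-dependent anisotropy ratio can be sketched numerically. This is a deliberately simplified illustration, not the paper's iterative interface-suction solve: it assumes a uniform mean suction across all layers and uses arithmetic/harmonic averaging for flow parallel/normal to layering; all soil parameters below are hypothetical.

```python
import numpy as np

def gardner_K(Ks, alpha, psi):
    """Gardner's exponential unsaturated conductivity K(psi) = Ks*exp(-alpha*psi)."""
    return np.asarray(Ks, dtype=float) * np.exp(-np.asarray(alpha, dtype=float) * psi)

def anisotropy_ratio(Ks, alpha, f, psi):
    """Ratio of parallel (arithmetic-mean) to normal (harmonic-mean)
    upscaled conductivity for layers with thickness fractions f,
    evaluated at a uniform mean suction psi."""
    f = np.asarray(f, dtype=float)
    K = gardner_K(Ks, alpha, psi)
    K_par = np.sum(f * K)          # flow along the layering
    K_nor = 1.0 / np.sum(f / K)    # flow across the layering
    return K_par / K_nor

# Two alternating layers: a coarse soil (high Ks, large alpha) and a fine soil.
Ks, alpha, f = [1.0, 0.01], [5.0, 1.0], [0.5, 0.5]
for psi in [0.0, 0.5, 1.0, 2.0]:
    print(psi, anisotropy_ratio(Ks, alpha, f, psi))
```

Because the layer conductivities respond differently to suction, the ratio changes with the moisture state, which is the "state-dependent anisotropy" the abstract refers to.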

  7. Minimizing the stochasticity of halos in large-scale structure surveys

    NASA Astrophysics Data System (ADS)

    Hamaus, Nico; Seljak, Uroš; Desjacques, Vincent; Smith, Robert E.; Baldauf, Tobias

    2010-08-01

    In recent work (Seljak, Hamaus, and Desjacques 2009) it was found that weighting central halo galaxies by halo mass can significantly suppress their stochasticity relative to the dark matter, well below the Poisson model expectation. This is useful for constraining relations between galaxies and the dark matter, such as the galaxy bias, especially in situations where sampling variance errors can be eliminated. In this paper we extend this study with the goal of finding the optimal mass-dependent halo weighting. We use N-body simulations to perform a general analysis of halo stochasticity and its dependence on halo mass. We investigate the stochasticity matrix, defined as Cij≡⟨(δi-biδm)(δj-bjδm)⟩, where δm is the dark matter overdensity in Fourier space, δi the halo overdensity of the i-th halo mass bin, and bi the corresponding halo bias. In contrast to the Poisson model predictions we detect nonvanishing correlations between different mass bins. We also find the diagonal terms to be sub-Poissonian for the highest-mass halos. The diagonalization of this matrix results in one large and one low eigenvalue, with the remaining eigenvalues close to the Poisson prediction 1/n¯, where n¯ is the mean halo number density. The eigenmode with the lowest eigenvalue contains most of the information and the corresponding eigenvector provides an optimal weighting function to minimize the stochasticity between halos and dark matter. We find this optimal weighting function to match linear mass weighting at high masses, while at the low-mass end the weights approach a constant whose value depends on the low-mass cut in the halo mass function. This weighting further suppresses the stochasticity as compared to the previously explored mass weighting. Finally, we employ the halo model to derive the stochasticity matrix and the scale-dependent bias from an analytical perspective. 
It is remarkably successful in reproducing our numerical results and predicts that the stochasticity between halos and the dark matter can be reduced further when going to halo masses lower than we can resolve in current simulations.
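The stochasticity-matrix construction can be illustrated with synthetic Fourier-mode data; this is a toy stand-in for the N-body measurements, with made-up biases and noise amplitudes, showing only the linear-algebra step (build C_ij from residuals, then take the eigenvector of the lowest eigenvalue as the optimal weighting):

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_bins = 20000, 4
b = np.array([1.0, 1.5, 2.0, 3.0])      # assumed biases per halo mass bin
sigma = np.array([0.5, 0.6, 0.8, 1.2])  # assumed per-bin noise amplitudes

delta_m = rng.normal(size=n_modes)       # dark matter overdensity modes
common = rng.normal(size=n_modes)        # noise component shared across bins
noise = 0.3 * common + sigma[:, None] * rng.normal(size=(n_bins, n_modes))
delta_h = b[:, None] * delta_m + noise   # halo overdensities per mass bin

resid = delta_h - b[:, None] * delta_m   # delta_i - b_i * delta_m
C = resid @ resid.T / n_modes            # C_ij = <(d_i - b_i d_m)(d_j - b_j d_m)>

eigval, eigvec = np.linalg.eigh(C)       # C is symmetric -> eigh (ascending)
w = eigvec[:, 0]                         # eigenvector of the lowest eigenvalue
# w is the bin weighting that minimizes the stochasticity w.r.t. delta_m.
```

With correlated noise across bins, the spectrum splits into distinct eigenvalues, and the lowest-eigenvalue eigenmode plays the role of the optimal weighting function discussed in the abstract.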

  8. The stochastic predictability limits of GCM internal variability and the Stochastic Seasonal to Interannual Prediction System (StocSIPS)

    NASA Astrophysics Data System (ADS)

    Del Rio Amador, Lenin; Lovejoy, Shaun

    2017-04-01

    Over the past ten years, a key advance in our understanding of atmospheric variability has been the discovery that between the weather and climate regimes lies an intermediate "macroweather" regime, spanning the range of scales from ≈10 days to ≈30 years. Macroweather statistics are characterized by two fundamental symmetries: scaling and the factorization of the joint space-time statistics. In the time domain, the scaling has low intermittency, with the additional property that successive fluctuations tend to cancel. In space, on the contrary, the scaling has high (multifractal) intermittency, corresponding to the existence of different climate zones. These properties have fundamental implications for macroweather forecasting: a) the temporal scaling implies that the system has a long-range memory that can be exploited for forecasting; b) the low temporal intermittency implies that mathematically well-established (Gaussian) forecasting techniques can be used; and c) the statistical factorization property implies that although spatial correlations (including teleconnections) may be large, if long enough time series are available they are not necessarily useful in improving forecasts. Theoretically, these conditions imply the existence of stochastic predictability limits; in our talk, we show that these limits apply to GCMs. Based on these statistical implications, we developed the Stochastic Seasonal and Interannual Prediction System (StocSIPS) for the prediction of temperature from regional to global scales and over horizons from one month to many years. One of the main components of StocSIPS is the separation and prediction of both the internal and the externally forced variability. In order to test the theoretical assumptions and their consequences for predictability and predictions, we use 41 different CMIP5 model outputs from preindustrial control runs that have fixed external forcings, so that their variability is purely internally generated.
We first show that these statistical assumptions hold with relatively good accuracy, and then perform hindcasts at global and regional scales, from monthly to annual time resolutions, using StocSIPS. We obtain excellent agreement between the hindcast Mean Square Skill Score (MSSS) and the theoretical stochastic limits. We also show the application of StocSIPS to the prediction of average global temperature and compare our results with those obtained using multi-model ensemble approaches. StocSIPS has numerous advantages, including a) higher MSSS for large time horizons, b) convergence to the real (not model) climate, c) much higher computational speed, d) no need for data assimilation, e) no ad hoc post-processing, and f) no need for downscaling.

  9. Stochastic layer scaling in the two-wire model for divertor tokamaks

    NASA Astrophysics Data System (ADS)

    Ali, Halima; Punjabi, Alkesh; Boozer, Allen

    2009-06-01

    The question of magnetic field structure in the vicinity of the separatrix in divertor tokamaks is studied. The authors have investigated this problem earlier in a series of papers, using various mathematical techniques. In the present paper, the two-wire model (TWM) [Reiman, A. 1996 Phys. Plasmas 3, 906] is considered. It is noted that, in the TWM, it is useful to consider an extra equation expressing magnetic flux conservation. This equation does not add any more information to the TWM, since the equation is derived from the TWM. This equation is useful for controlling the step size in the numerical integration of the TWM equations. The TWM with the extra equation is called the flux-preserving TWM. Nevertheless, the technique is apparently still plagued by numerical inaccuracies when the perturbation level is low, resulting in an incorrect scaling of the stochastic layer width. The stochastic broadening of the separatrix in the flux-preserving TWM is compared with that in the low mn (poloidal mode number m and toroidal mode number n) map (LMN) [Ali, H., Punjabi, A., Boozer, A. and Evans, T. 2004 Phys. Plasmas 11, 1908]. The flux-preserving TWM and LMN both give Boozer-Rechester 0.5 power scaling of the stochastic layer width with the amplitude of magnetic perturbation when the perturbation is sufficiently large [Boozer, A. and Rechester, A. 1978, Phys. Fluids 21, 682]. The flux-preserving TWM gives a larger stochastic layer width when the perturbation is low, while the LMN gives correct scaling in the low perturbation region. Area-preserving maps such as the LMN respect the Hamiltonian structure of field line trajectories, and have the added advantage of computational efficiency. Also, for a 1½ degree-of-freedom Hamiltonian system such as field lines, maps do not give Arnold diffusion.

  10. Randomized central limit theorems: A unified theory.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: all ensemble components are scaled by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic: the ensemble components are scaled by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the stretched-exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  11. Current fluctuations in periodically driven systems

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Chetrite, Raphael

    2018-05-01

    Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
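The Floquet construction can be sketched numerically for a two-state process with hypothetical time-periodic rates. As a minimal example, the counted observable here is the total number of jumps (the dynamical activity, a simple time-integrated observable) rather than any specific current from the paper; other observables only change which entries of the generator are tilted:

```python
import numpy as np
from scipy.linalg import expm

def scgf(s, T=1.0, n_steps=400):
    """Scaled cumulant generating function of the jump count, from the
    maximal Floquet exponent of the tilted (biased) generator."""
    dt = T / n_steps
    U = np.eye(2)
    for i in range(n_steps):
        t = (i + 0.5) * dt
        kp = 1.0 + 0.5 * np.sin(2 * np.pi * t / T)   # rate 1 -> 2 (hypothetical)
        km = 1.0 + 0.5 * np.cos(2 * np.pi * t / T)   # rate 2 -> 1 (hypothetical)
        # tilted generator: every jump is weighted by exp(s)
        L = np.array([[-kp,            km * np.exp(s)],
                      [kp * np.exp(s), -km           ]])
        U = expm(L * dt) @ U   # time-ordered propagator over one period
    # maximal Floquet exponent = log of the dominant multiplier over the period
    return np.log(max(np.abs(np.linalg.eigvals(U)))) / T

print(scgf(0.0))   # ~0 by normalization (probability conservation)
print(scgf(0.5))   # positive: generating function of a growing jump count
```

At s = 0 the tilted propagator reduces to the ordinary (stochastic) one, so its dominant Floquet multiplier is 1 and the SCGF vanishes, which is a useful sanity check on the discretization.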

  12. A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory.

    PubMed

    Chicca, E; Badoni, D; Dante, V; D'Andreagiovanni, M; Salina, G; Carota, L; Fusi, S; Del Giudice, P

    2003-01-01

    Electronic neuromorphic devices with on-chip, on-line learning should be able to quickly modify the synaptic couplings to acquire information about new patterns to be stored (synaptic plasticity) and, at the same time, preserve this information on very long time scales (synaptic stability). Here, we illustrate the electronic implementation of a simple solution to this stability-plasticity problem, recently proposed and studied in various contexts. It is based on the observation that reducing the analog depth of the synapses to the extreme (bistable synapses) does not necessarily disrupt the performance of the device as an associative memory, provided that 1) the number of neurons is large enough; 2) the transitions between stable synaptic states are stochastic; and 3) learning is slow. The drastic reduction of the analog depth of the synaptic variable also makes this solution appealing from the point of view of electronic implementation and offers a simple methodological alternative to the technological solution based on floating gates. We describe the full custom analog very large-scale integration (VLSI) realization of a small network of integrate-and-fire neurons connected by bistable deterministic plastic synapses which can implement the idea of stochastic learning. In the absence of stimuli, the memory is preserved indefinitely. During stimulation, the synapse undergoes quick temporary changes driven by the activities of the pre- and postsynaptic neurons; these changes stochastically result in a long-term modification of the synaptic efficacy. The intentionally disordered pattern of connectivity allows the system to generate a randomness suited to drive the stochastic selection mechanism. We check by a suitable stimulation protocol that the stochastic synaptic plasticity produces the expected pattern of potentiation and depression in the electronic network.
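The stochastic-learning idea (slow, probabilistic transitions of binary synapses) can be caricatured in a few lines of software. This is an illustrative sketch with invented numbers, not the VLSI circuit: each stimulus flips a synapse toward the desired state only with small probability q, so older memories are overwritten gradually (palimpsest behaviour):

```python
import numpy as np

rng = np.random.default_rng(1)
N, q, n_patterns = 10000, 0.08, 60   # synapses, transition prob., stimuli

w = rng.integers(0, 2, size=N)                   # bistable synaptic states
patterns = rng.integers(0, 2, size=(n_patterns, N))

signal = []
for p in patterns:
    flip = rng.random(N) < q                     # stochastic selection
    w = np.where(flip, p, w)                     # transition to desired state
    # overlap of the *first* stored pattern with the current weights
    signal.append(np.mean(w == patterns[0]))

# Memory of pattern 0 starts above chance (0.5) and decays slowly as
# later patterns are stored, roughly like 0.5 + (q/2)*(1-q)**t.
```

Because only a fraction q of synapses changes per stimulus, new patterns are acquired while old ones fade gracefully rather than being erased at once, which is the stability-plasticity trade-off the abstract describes.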

  13. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.

  14. Climate SPHINX: evaluating the impact of resolution and stochastic physics parameterisations in the EC-Earth global climate model

    NASA Astrophysics Data System (ADS)

    Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Christensen, Hannah M.; Juricke, Stephan; Subramanian, Aneesh; Watson, Peter A. G.; Weisheimer, Antje; Palmer, Tim N.

    2017-03-01

    The Climate SPHINX (Stochastic Physics HIgh resolutioN eXperiments) project is a comprehensive set of ensemble simulations aimed at evaluating the sensitivity of present and future climate to model resolution and stochastic parameterisation. The EC-Earth Earth system model is used to explore the impact of stochastic physics in a large ensemble of 30-year climate integrations at five different atmospheric horizontal resolutions (from 125 up to 16 km). The project includes more than 120 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), together with coupled transient runs (1850-2100). A total of 20.4 million core hours have been used, made available from a single year grant from PRACE (the Partnership for Advanced Computing in Europe), and close to 1.5 PB of output data have been produced on SuperMUC IBM Petascale System at the Leibniz Supercomputing Centre (LRZ) in Garching, Germany. About 140 TB of post-processed data are stored on the CINECA supercomputing centre archives and are freely accessible to the community thanks to an EUDAT data pilot project. This paper presents the technical and scientific set-up of the experiments, including the details on the forcing used for the simulations performed, defining the SPHINX v1.0 protocol. In addition, an overview of preliminary results is given. An improvement in the simulation of Euro-Atlantic atmospheric blocking following resolution increase is observed. It is also shown that including stochastic parameterisation in the low-resolution runs helps to improve some aspects of the tropical climate - specifically the Madden-Julian Oscillation and the tropical rainfall variability. These findings show the importance of representing the impact of small-scale processes on the large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).

  15. Simulation-optimization of large agro-hydrosystems using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Schuetze, Niels; Grundmann, Jens

    2014-05-01

    In this contribution, a stochastic simulation-optimization framework for decision support for optimal planning and operation of the water supply of large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the use of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at the small scale (farm level), and (ii) the utilization of the optimization results at the small scale for solving water resources management problems at the regional scale. As a secondary result of several simulation-optimization runs at the smaller scale, stochastic crop-water production functions (SCWPFs) for different crops are derived, which can be used as a basic tool for assessing the impact of climate variability on the risk for potential yield. In addition, microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application to a real-world case study for the South Al-Batinah region in the Sultanate of Oman, where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.

  16. Focused-based multifractal analysis of the wake in a wind turbine array utilizing proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Kadum, Hawwa; Ali, Naseem; Cal, Raúl

    2016-11-01

    Hot-wire anemometry measurements have been performed on a 3 x 3 wind turbine array to study the multifractality of the turbulent kinetic energy dissipation. A multifractal spectrum and Hurst exponents are determined at nine locations downstream of the hub height and the bottom and top tips. Higher multifractality is found at 0.5D and 1D downstream of the bottom tip and hub height. The second-order Hurst exponent and the combination factor show an ability to predict the flow state in terms of its development. Snapshot proper orthogonal decomposition (POD) is used to identify the coherent and incoherent structures and to reconstruct the stochastic velocity using a specific number of the POD eigenfunctions. The accumulation of turbulent kinetic energy at the top tip location exhibits fast convergence compared to the bottom tip and hub height locations. The dissipation of the large and small scales is determined using the reconstructed stochastic velocities. Higher multifractality is seen in the large-scale dissipation than in the small-scale dissipation, consistent with the behavior of the original signals.

  17. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  18. Climate SPHINX: High-resolution present-day and future climate simulations with an improved representation of small-scale variability

    NASA Astrophysics Data System (ADS)

    Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim

    2016-04-01

    The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km up to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will have been used by the end of the project (March 2016), and about 150 TB of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic sector following the resolution increase. More specifically, the well-known negative atmospheric blocking bias over Europe is definitely resolved. High-resolution runs also show improved fidelity in the representation of tropical variability, such as the MJO and its propagation, over the low-resolution simulations. It is shown that including stochastic parameterization in the low-resolution runs helps to improve some aspects of the MJO propagation further. These findings show the importance of representing the impact of small-scale processes on the large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).

  19. Research on trading patterns of large users' direct power purchase considering consumption of clean energy

    NASA Astrophysics Data System (ADS)

    Guojun, He; Lin, Guo; Zhicheng, Yu; Xiaojun, Zhu; Lei, Wang; Zhiqiang, Zhao

    2017-03-01

    In order to reduce the stochastic volatility of supply and demand, and to maintain the stability of the electric power system after large-scale stochastic renewable energy sources are connected to the grid, the development and consumption of such energy should be promoted by market means. The bilateral contract transaction model of direct power purchase by large users conforms to the actual situation of our country. This paper analyzes the trading patterns of large users' direct power purchase, summarizes the characteristics of each type of power generation, and focuses on the centralized matching mode. Through the establishment of a priority evaluation index system for power generation enterprises, and an analysis of their priority based on fuzzy clustering, a method for ranking power generation enterprises' priority within the trading patterns of large users' direct power purchase is put forward. This method offers suggestions for the trading mechanism of large users' direct power purchase, which helps to further promote direct power purchase by large users.

  20. A Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    NASA Astrophysics Data System (ADS)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for the operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high-dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to the Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year causes a large proliferation of mosquitoes, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, but they do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives.
The performance of the resulting policies was evaluated by simulating the online operating process for historical inflow scenarios and synthetic inflow forecasts. The simulations are based on a combined mid- and short-term planning model in which the value function derived in the mid-term planning phase provides the value of the policy at the end of the short-term operating horizon. While a purely deterministic linear analysis provided rather optimistic results, the stochastic model allowed for a more accurate evaluation of trade-offs and limitations of alternative operating strategies for the Dalälven reservoir network.
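For intuition, the off-line learning step can be caricatured with a single discretized reservoir and sampled stochastic inflows; this is a one-dimensional stand-in for the 13-reservoir Dalälven system (plain tabular value iteration rather than the paper's function approximators), and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
levels = np.arange(0, 11)              # discretized storage states
releases = np.arange(0, 6)             # admissible release decisions
inflows = rng.poisson(2.0, size=500)   # sampled stochastic inflow scenarios
flood_level, gamma = 9, 0.95           # flood threshold, discount factor

V = np.zeros(levels.size)
for _ in range(200):                   # value iteration to convergence
    V_new = np.empty_like(V)
    for s in levels:
        best = -np.inf
        for u in releases[releases <= s]:
            s_next = np.clip(s - u + inflows, 0, 10)
            # reward: hydropower proxy minus a penalty for flood risk
            reward = u - 5.0 * np.mean(s_next > flood_level)
            best = max(best, reward + gamma * np.mean(V[s_next]))
        V_new[s] = best
    V = V_new
# The greedy policy w.r.t. V plays the role of the mid-term operating rule,
# whose value at the end of the short horizon feeds the short-term model.
```

Averaging the next-state value over the sampled inflow scenarios is what makes the resulting policy account for hydrologic uncertainty, instead of optimizing against a single deterministic inflow sequence.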

  1. Spatial distribution and optimal harvesting of an age-structured population in a fluctuating environment.

    PubMed

    Engen, Steinar; Lee, Aline Magdalena; Sæther, Bernt-Erik

    2018-02-01

    We analyze a spatial age-structured model with density regulation, age specific dispersal, stochasticity in vital rates and proportional harvesting. We include two age classes, juveniles and adults, where juveniles are subject to logistic density dependence. There are environmental stochastic effects with arbitrary spatial scales on all birth and death rates, and individuals of both age classes are subject to density independent dispersal with given rates and specified distributions of dispersal distances. We show how to simulate the joint density fields of the age classes and derive results for the spatial scales of all spatial autocovariance functions for densities. A general result is that the squared scale has an additive term equal to the squared scale of the environmental noise, corresponding to the Moran effect, as well as additive terms proportional to the dispersal rate and variance of dispersal distance for the age classes and approximately inversely proportional to the strength of density regulation. We show that the optimal harvesting strategy in the deterministic case is to harvest only juveniles when their relative value (e.g. financial) is large, and otherwise only adults. With increasing environmental stochasticity there is an interval of increasing length of values of juveniles relative to adults where both age classes should be harvested. Harvesting generally tends to increase all spatial scales of the autocovariances of densities. Copyright © 2017. Published by Elsevier Inc.
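Schematically, and with hypothetical notation since the abstract quotes no symbols, the stated result for the squared spatial scale of the density autocovariance can be written as:

```latex
% l_e    : spatial scale of the environmental noise (Moran effect)
% d      : dispersal rate of an age class
% sigma_d^2 : variance of its dispersal distance
% gamma  : strength of density regulation; c : order-one constant
\ell^2 \;\approx\; \ell_e^2 \;+\; c\,\frac{d\,\sigma_d^2}{\gamma}
```

That is, the environmental-noise scale enters additively, and dispersal inflates the scale in proportion to its rate and distance variance, damped by the strength of density regulation.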

  2. Modeling when, where, and how to manage a forest epidemic, motivated by sudden oak death in California

    Treesearch

    Nik J. Cunniffe; Richard C. Cobb; Ross K. Meentemeyer; David M. Rizzo; Christopher A. Gilligan

    2016-01-01

    Sudden oak death, caused by Phytophthora ramorum, has killed millions of oak and tanoak in California since its first detection in 1995. Despite some localized small-scale management, there has been no large-scale attempt to slow the spread of the pathogen in California. Here we use a stochastic spatially-explicit model parameterized using data on...

  3. Simultaneous stochastic inversion for geomagnetic main field and secular variation. I - A large-scale inverse problem

    NASA Technical Reports Server (NTRS)

    Bloxham, Jeremy

    1987-01-01

    The method of stochastic inversion is extended to the simultaneous inversion of both main field and secular variation. In the present method, the time dependency is represented by an expansion in Legendre polynomials, resulting in a simple diagonal form for the a priori covariance matrix. The efficient preconditioned Broyden-Fletcher-Goldfarb-Shanno algorithm is used to solve the large system of equations resulting from expansion of the field spatially to spherical harmonic degree 14 and temporally to degree 8. Application of the method to observatory data spanning the 1900-1980 period results in a data fit of better than 30 nT, while providing temporally and spatially smoothly varying models of the magnetic field at the core-mantle boundary.

  4. Suppression of phase mixing in drift-kinetic plasma turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, J. T., E-mail: joseph.parker@stfc.ac.uk; OCIAM, Mathematical Institute, University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG; Brasenose College, Radcliffe Square, Oxford OX1 4AJ

    2016-07-15

    Transfer of free energy from large to small velocity-space scales by phase mixing leads to Landau damping in a linear plasma. In a turbulent drift-kinetic plasma, this transfer is statistically nearly canceled by an inverse transfer from small to large velocity-space scales due to “anti-phase-mixing” modes excited by a stochastic form of plasma echo. Fluid moments (density, velocity, and temperature) are thus approximately energetically isolated from the higher moments of the distribution function, so phase mixing is ineffective as a dissipation mechanism when the plasma collisionality is small.

  5. Generation Expansion Planning With Large Amounts of Wind Power via Decision-Dependent Stochastic Programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui

    Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.
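The decision-dependent idea, where the investment itself changes the scenario probabilities, can be sketched on a toy two-scenario expansion problem solved by brute-force enumeration (all capacities, costs, and probabilities below are invented for illustration, and the real model is a multistage MILP, not an enumeration):

```python
import itertools

# Toy decision-dependent expansion: build w wind and t thermal units; the
# probability of the "high wind" scenario grows with wind investment, a
# stand-in for the paper's decision-dependent distributions.
DEMAND = 10.0
CAPEX = {"wind": 2.0, "thermal": 3.0}
CAP = {"wind": 4.0, "thermal": 5.0}
FUEL = 0.5          # per unit of thermal energy
PENALTY = 10.0      # per unit of unmet demand

def expected_cost(w, t):
    p_high = min(0.3 + 0.1 * w, 0.9)                 # decision-dependent probability
    scenarios = [(p_high, 1.0), (1 - p_high, 0.4)]   # (probability, wind capacity factor)
    capex = CAPEX["wind"] * w + CAPEX["thermal"] * t
    op = 0.0
    for prob, cf in scenarios:
        wind_out = cf * CAP["wind"] * w
        thermal_out = min(CAP["thermal"] * t, max(DEMAND - wind_out, 0.0))
        unmet = max(DEMAND - wind_out - thermal_out, 0.0)
        op += prob * (FUEL * thermal_out + PENALTY * unmet)
    return capex + op

best = min(itertools.product(range(4), range(4)), key=lambda wt: expected_cost(*wt))
```

Because building wind raises the probability of favourable wind scenarios, the optimum here leans further toward wind than a fixed-probability model would.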

  6. Laws of Large Numbers and Langevin Approximations for Stochastic Neural Field Equations

    PubMed Central

    2013-01-01

    In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson–Cowan equation can be obtained as the limit in uniform convergence on compacts in probability for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces, and are thereby able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20. PMID:23343328
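A minimal sketch of the kind of Langevin approximation the record describes: Euler–Maruyama integration of a space-discretized stochastic field of Wilson–Cowan type. The connectivity kernel, gain, drive, and noise amplitude are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama integration of a space-discretized stochastic neural field
# of Wilson-Cowan type (all coefficients illustrative).
n, dt, steps = 64, 0.01, 2000
x = np.linspace(0.0, 1.0, n)
W = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)   # Gaussian connectivity
W /= W.sum(axis=1, keepdims=True)                      # row-normalized

f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))   # sigmoidal firing rate

u = np.zeros(n)
sigma = 0.02                                           # Langevin noise strength
for _ in range(steps):
    drift = -u + f(W @ u + 0.3)                        # 0.3 = constant external drive
    u = u + dt * drift + sigma * np.sqrt(dt) * rng.normal(size=n)
```

With the noise amplitude set to zero this reduces to a deterministic Wilson–Cowan field, mirroring the law-of-large-numbers limit.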

  7. Using remotely sensed data and stochastic models to simulate realistic flood hazard footprints across the continental US

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.

    2017-12-01

    Remotely sensed data have transformed the field of large-scale hydraulic modelling. New digital elevation, hydrography and river width data have allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event `footprint'. Despite this, many national scale risk analyses still use `constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge to gauge correlations.
We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in space. We undertake a number of quality checks of the stochastic model and compare real and simulated footprints to show that the method is able to re-create realistic patterns even at continental scales where there is large variation in flood generating mechanisms. We then show how these patterns can be used to drive a large-scale 2D hydraulic model to predict regional scale flooding.
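The footprint idea, spatially correlated extremes rather than a constant-in-space return period, can be sketched with a Gaussian copula: sample correlated normals at hypothetical gauges, then map each margin to a Gumbel flood-peak distribution. The correlation length and Gumbel parameters below are invented; the paper uses a conditional multivariate extremes model, not this copula:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(2)

# Illustrative event-footprint generator: spatially correlated Gaussians at
# hypothetical gauges, pushed through a Gaussian copula to Gumbel flood peaks.
n_gauges, n_events = 5, 20000
dist = np.abs(np.subtract.outer(np.arange(n_gauges), np.arange(n_gauges)))
corr = np.exp(-dist / 3.0)                               # gauge-to-gauge correlation
z = rng.normal(size=(n_events, n_gauges)) @ np.linalg.cholesky(corr).T

u = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))    # Phi(z): uniform margins
u = np.clip(u, 1e-12, 1.0 - 1e-12)
peaks = 100.0 - 20.0 * np.log(-np.log(u))                # Gumbel(loc=100, scale=20) quantile
```

Each row of `peaks` is one synthetic event footprint: nearby gauges flood together, distant ones much less so.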

  8. An evaluation of sex-age-kill (SAK) model performance

    USGS Publications Warehouse

    Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent

    2009-01-01

    The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.

  9. Simulation of water-energy fluxes through small-scale reservoir systems under limited data availability

    NASA Astrophysics Data System (ADS)

    Papoulakos, Konstantinos; Pollakis, Giorgos; Moustakis, Yiannis; Markopoulos, Apostolis; Iliopoulou, Theano; Dimitriadis, Panayiotis; Koutsoyiannis, Demetris; Efstratiadis, Andreas

    2017-04-01

    Small islands are regarded as promising areas for developing hybrid water-energy systems that combine multiple sources of renewable energy with pumped-storage facilities. An essential element of such systems is the water storage component (reservoir), which implements both flow and energy regulations. Apparently, the representation of the overall water-energy management problem requires the simulation of the operation of the reservoir system, which in turn requires a faithful estimation of water inflows and demands of water and energy. Yet, in small-scale reservoir systems, this task is far from straightforward, since both the availability and accuracy of associated information is generally very poor. Indeed, in contrast to large-scale reservoir systems, for which it is quite easy to find systematic and reliable hydrological data, in the case of small systems such data may be scarce or even totally missing. The stochastic approach is the only means to account for input data uncertainties within the combined water-energy management problem. Using as example the Livadi reservoir, which is the pumped storage component of the small Aegean island of Astypalaia, Greece, we provide a simulation framework, comprising: (a) a stochastic model for generating synthetic rainfall and temperature time series; (b) a stochastic rainfall-runoff model, whose parameters cannot be inferred through calibration and, thus, they are represented as correlated random variables; (c) a stochastic model for estimating water supply and irrigation demands, based on simulated temperature and soil moisture, and (d) a daily operation model of the reservoir system, providing stochastic forecasts of water and energy outflows. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). 
The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.

  10. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
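A stripped-down Hamiltonian Monte Carlo sampler shows the basic leapfrog-plus-accept/reject machinery the method builds on; the target here is a 1-D standard normal for illustration, not the polymer-like posterior, and there is no multiple-time-scale splitting or analytic mode decoupling:

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal HMC for a 1-D standard-normal "posterior" (illustrative only).
def grad_neg_log_p(q):          # -d(log p)/dq for p = N(0, 1)
    return q

def hmc_step(q, eps=0.2, n_leap=10):
    p = rng.normal()                                    # resample momentum
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_neg_log_p(q_new)          # leapfrog half kick
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new -= eps * grad_neg_log_p(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_neg_log_p(q_new)          # final half kick
    h_old = 0.5 * q**2 + 0.5 * p**2                     # Hamiltonians
    h_new = 0.5 * q_new**2 + 0.5 * p_new**2
    return q_new if rng.random() < np.exp(h_old - h_new) else q

q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
```

The paper's contribution is what happens inside the leapfrog: splitting fast harmonic dynamics between measurement points from the slow rest and integrating them on separate time scales.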

  11. Stochastic multifractal forecasts: from theory to applications in radar meteorology

    NASA Astrophysics Data System (ADS)

    da Silva Rocha Paz, Igor; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2017-04-01

    Radar meteorology has been very inspiring for the development of multifractals. It has enabled to work on a 3D+1 field with many challenging applications, including predictability and stochastic forecasts, especially nowcasts that are particularly demanding in computation speed. Multifractals are indeed parsimonious stochastic models that require only a few physically meaningful parameters, e.g. Universal Multifractal (UM) parameters, because they are based on non-trivial symmetries of nonlinear equations. We first recall the physical principles of multifractal predictability and predictions, which are so closely related that the latter correspond to the most optimal predictions in the multifractal framework. Indeed, these predictions are based on the fundamental duality of a relatively slow decay of large scale structures and an injection of new born small scale structures. Overall, this triggers a mulfitractal inverse cascade of unpredictability. With the help of high resolution rainfall radar data (≈ 100 m), we detail and illustrate the corresponding stochastic algorithm in the framework of (causal) UM Fractionally Integrated Flux models (UM-FIF), where the rainfall field is obtained with the help of a fractional integration of a conservative multifractal flux, whose average is strictly scale invariant (like the energy flux in a dynamic cascade). Whereas, the introduction of small structures is rather straightforward, the deconvolution of the past of the field is more subtle, but nevertheless achievable, to obtain the past of the flux. Then, one needs to only fractionally integrate a multiplicative combination of past and future fluxes to obtain a nowcast realisation.
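The conservative multifractal flux at the heart of UM-FIF can be illustrated with a discrete multiplicative cascade. The microcanonical weights below (exactly conserving, drawn uniformly) are a simple stand-in for the universal-multifractal generator:

```python
import numpy as np

rng = np.random.default_rng(4)

# Discrete multiplicative cascade: at each level every interval splits in two
# and its "flux" is redistributed by random weights (w, 2 - w), so the total
# is conserved exactly (a microcanonical stand-in for a conservative UM flux).
def cascade(n_levels, flux0=1.0):
    field = np.array([flux0])
    for _ in range(n_levels):
        w = rng.uniform(0.2, 1.8, size=field.size)          # E[w] = 1
        field = np.column_stack((field * w, field * (2.0 - w))).ravel() / 2.0
    return field

field = cascade(12)    # 2**12 = 4096 cells
```

The resulting field is strongly intermittent: a few cells carry flux far above the mean, which is the cascade phenomenology the stochastic nowcast both decays (large scales) and regenerates (small scales).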

  12. Breaking the theoretical scaling limit for predicting quasiparticle energies: the stochastic GW approach.

    PubMed

    Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi

    2014-08-15

    We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to the density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear scaling GW calculations breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_{e}>3000 electrons.
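The enabling trick behind stochastic orbitals is estimating traces (sums over orbitals) from a modest number of random vectors. Hutchinson's estimator on a random symmetric matrix is the simplest instance of this idea; it is illustrative only, with no Green functions or self-energies involved:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hutchinson trace estimation: tr(A) ~ mean over chi of chi^T A chi, with
# chi_i = +/-1. This is the stochastic-sampling core that lets methods like
# sGW avoid explicit sums over all orbitals (matrix here is illustrative).
n, n_samples = 200, 4000
A = rng.normal(size=(n, n))
A = A + A.T                                   # symmetric test matrix

chis = rng.choice([-1.0, 1.0], size=(n_samples, n))
quad = np.einsum("si,si->s", chis @ A, chis)  # chi^T A chi per sample
trace_est = quad.mean()
trace_true = np.trace(A)
```

The statistical error decays as one over the square root of the number of random vectors, independent of the matrix dimension, which is what breaks the deterministic scaling wall.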

  13. Stochastic Optimally Tuned Range-Separated Hybrid Density Functional Theory.

    PubMed

    Neuhauser, Daniel; Rabani, Eran; Cytter, Yael; Baer, Roi

    2016-05-19

    We develop a stochastic formulation of the optimally tuned range-separated hybrid density functional theory that enables significant reduction of the computational effort and scaling of the nonlocal exchange operator at the price of introducing a controllable statistical error. Our method is based on stochastic representations of the Coulomb convolution integral and of the generalized Kohn-Sham density matrix. The computational cost of the approach is similar to that of usual Kohn-Sham density functional theory, yet it provides a much more accurate description of the quasiparticle energies for the frontier orbitals. This is illustrated for a series of silicon nanocrystals up to sizes exceeding 3000 electrons. Comparison with the stochastic GW many-body perturbation technique indicates excellent agreement for the fundamental band gap energies, good agreement for the band edge quasiparticle excitations, and very low statistical errors in the total energy for large systems. The present approach has a major advantage over one-shot GW by providing a self-consistent Hamiltonian that is central for additional postprocessing, for example, in the stochastic Bethe-Salpeter approach.

  14. Improved Large-Eddy Simulation Using a Stochastic Backscatter Model: Application to the Neutral Atmospheric Boundary Layer and Urban Street Canyon Flow

    NASA Astrophysics Data System (ADS)

    O'Neill, J. J.; Cai, X.; Kinnersley, R.

    2015-12-01

    Large-eddy simulation (LES) provides a powerful tool for developing our understanding of atmospheric boundary layer (ABL) dynamics, which in turn can be used to improve the parameterisations of simpler operational models. However, LES modelling is not without its own limitations - most notably, the need to parameterise the effects of all subgrid-scale (SGS) turbulence. Here, we employ a stochastic backscatter SGS model, which explicitly handles the effects of both forward and reverse energy transfer to/from the subgrid scales, to simulate the neutrally stratified ABL as well as flow within an idealised urban street canyon. In both cases, a clear improvement in LES output statistics is observed when compared with the performance of a SGS model that handles forward energy transfer only. In the neutral ABL case, the near-surface velocity profile is brought significantly closer towards its expected logarithmic form. In the street canyon case, the strength of the primary vortex that forms within the canyon is more accurately reproduced when compared to wind tunnel measurements. Our results indicate that grid-scale backscatter plays an important role in both these modelled situations.

  15. Programming Probabilistic Structural Analysis for Parallel Processing Computer

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.

    1991-01-01

    The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.
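The preconditioned conjugate gradient core can be sketched as follows, here with a plain Jacobi preconditioner on an illustrative SPD system rather than the paper's stochastic preconditioner or actual finite element matrices:

```python
import numpy as np

rng = np.random.default_rng(6)

# Jacobi-preconditioned conjugate gradients on an illustrative SPD system
# (a stand-in for a stochastic finite element stiffness matrix).
n = 100
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)        # symmetric positive definite
b = rng.normal(size=n)

def pcg(A, b, tol=1e-10, max_iter=500):
    M_inv = 1.0 / np.diag(A)       # Jacobi preconditioner: M = diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

x = pcg(A, b)
```

The stochastic variant exploits the fact that perturbed PSA systems are close to a nominal one, so a single factorized preconditioner can be reused across many samples.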

  16. Hierarchical stochastic modeling of large river ecosystems and fish growth across spatio-temporal scales and climate models: the Missouri River endangered pallid sturgeon example

    USGS Publications Warehouse

    Wildhaber, Mark L.; Wikle, Christopher K.; Moran, Edward H.; Anderson, Christopher J.; Franz, Kristie J.; Dey, Rima

    2017-01-01

    We present a hierarchical series of spatially decreasing and temporally increasing models to evaluate the uncertainty in the atmosphere–ocean global climate model (AOGCM) and the regional climate model (RCM) relative to the uncertainty in the somatic growth of the endangered pallid sturgeon (Scaphirhynchus albus). For effects on fish populations of riverine ecosystems, climate output simulated by coarse-resolution AOGCMs and RCMs must be downscaled to basins to river hydrology to population response. One needs to transfer the information from these climate simulations down to the individual scale in a way that minimizes extrapolation and can account for spatio-temporal variability in the intervening stages. The goal is a framework to determine whether, given uncertainties in the climate models and the biological response, meaningful inference can still be made. The non-linear downscaling of climate information to the river scale requires that one realistically account for spatial and temporal variability across scale. Our downscaling procedure includes the use of fixed/calibrated hydrological flow and temperature models coupled with a stochastically parameterized sturgeon bioenergetics model. We show that, although there is a large amount of uncertainty associated with both the climate model output and the fish growth process, one can establish significant differences in fish growth distributions between models, and between future and current climates for a given model.

  17. Normal forms for reduced stochastic climate models

    PubMed Central

    Majda, Andrew J.; Franzke, Christian; Crommelin, Daan

    2009-01-01

    The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943
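A one-dimensional normal form of this type, cubic damping plus correlated additive and multiplicative (CAM) noise, can be integrated with Euler–Maruyama. All coefficients below are illustrative rather than estimated from any climate dataset:

```python
import numpy as np

rng = np.random.default_rng(7)

# Euler-Maruyama integration of a scalar normal form with quadratic term,
# cubic damping, and CAM noise (sigma_a + sigma_m * x) dW. Coefficients are
# illustrative, not fitted to climate data.
F, a, b, c = 0.0, -0.5, 0.2, 1.0         # cubic damping c > 0 stabilizes
sigma_a, sigma_m = 0.3, 0.2              # additive / multiplicative amplitudes

dt, steps = 1e-3, 200_000
x = np.empty(steps)
x[0] = 0.0
for n in range(steps - 1):
    dW = np.sqrt(dt) * rng.normal()
    drift = F + a * x[n] + b * x[n]**2 - c * x[n]**3
    x[n + 1] = x[n] + drift * dt + (sigma_a + sigma_m * x[n]) * dW
```

The multiplicative part of the noise produces the skewed, heavy-tailed statistics that purely additive linear models of low-frequency variability cannot reproduce.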

  18. Multi-Frequency Signal Detection Based on Frequency Exchange and Re-Scaling Stochastic Resonance and Its Application to Weak Fault Diagnosis.

    PubMed

    Liu, Jinjun; Leng, Yonggang; Lai, Zhihui; Fan, Shengbo

    2018-04-25

    Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal through the existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal as well as its second and higher harmonics frequencies tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of "sampling ratio" which is the ratio of the sampling frequency to the frequency of target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method.
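The small-parameter setting that SR methods, including re-scaling variants like FERSR, are designed to reach is the noisy bistable oscillator driven by a weak, slow periodic signal. A bare-bones simulation of that reference system (parameters illustrative; no frequency exchange or re-scaling is implemented here):

```python
import numpy as np

rng = np.random.default_rng(8)

# Overdamped bistable system dx = (x - x^3) dt + A cos(omega t) dt + sqrt(2 D dt) xi:
# the classic small-parameter stochastic resonance setup (values illustrative).
A, omega, D = 0.3, 0.05, 0.25
dt, steps = 0.01, 200_000
t = np.arange(steps) * dt
x = np.empty(steps)
x[0] = -1.0
for n in range(steps - 1):
    drift = x[n] - x[n]**3 + A * np.cos(omega * t[n])
    x[n + 1] = x[n] + drift * dt + np.sqrt(2.0 * D * dt) * rng.normal()

# Fourier component of the response at the driving frequency.
response = np.abs(np.mean(x * np.exp(-1j * omega * t)))
```

When the noise-induced hopping rate roughly matches the forcing frequency, inter-well jumps synchronize with the weak signal; FERSR's contribution is shifting a high-frequency, multi-harmonic fault signal into this small-parameter regime before applying SR.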

  19. Shallow cumuli ensemble statistics for development of a stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs

    2014-05-01

    According to a conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to the given large scale forcing. Moreover, this distribution gets broader with the increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation model (LES) and a cloud tracking algorithm, followed by a conditional sampling of clouds at the cloud base level, to retrieve the information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution, and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution and how does it affect the results from the stochastic model? Is the memory, provided by the finite lifetime of individual clouds, of importance for the ensemble statistics? We also test for the minimal information, given as input to the stochastic model, that is able to reproduce the ensemble mean statistics and the variability in a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale-adaptivity - the smaller the grid-size, the broader the compound distribution of the sub-grid states.
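The compound random process described above is easy to sketch: a Poisson number of clouds per grid box, each with an exponentially distributed cloud-base mass flux. The rate and mean flux below are invented, and a pure exponential is used where the paper finds a generalized exponential:

```python
import numpy as np

rng = np.random.default_rng(9)

# Compound Poisson model of a shallow-cumulus ensemble: Poisson cloud count
# per grid box, exponential (Boltzmann-like) mass flux per cloud; the totals
# give the sub-grid mass-flux distribution (parameter values illustrative).
lam = 30.0       # mean number of clouds per grid box
m_mean = 0.1     # mean cloud-base mass flux per cloud

n_boxes = 50_000
n_clouds = rng.poisson(lam, size=n_boxes)
total_flux = np.array([rng.exponential(m_mean, size=k).sum() for k in n_clouds])
```

The scale-adaptivity mentioned in the abstract falls out naturally: halving the grid-box area halves `lam`, which broadens the total-flux distribution relative to its mean.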

  20. Variable classification in the LSST era: exploring a model for quasi-periodic light curves

    NASA Astrophysics Data System (ADS)

    Zinn, J. C.; Kochanek, C. S.; Kozłowski, S.; Udalski, A.; Szymański, M. K.; Soszyński, I.; Wyrzykowski, Ł.; Ulaczyk, K.; Poleski, R.; Pietrukowicz, P.; Skowron, J.; Mróz, P.; Pawlak, M.

    2017-06-01

    The Large Synoptic Survey Telescope (LSST) is expected to yield ˜107 light curves over the course of its mission, which will require a concerted effort in automated classification. Stochastic processes provide one means of quantitatively describing variability with the potential advantage over simple light-curve statistics that the parameters may be physically meaningful. Here, we survey a large sample of periodic, quasi-periodic and stochastic Optical Gravitational Lensing Experiment-III variables using the damped random walk (DRW; CARMA(1,0)) and quasi-periodic oscillation (QPO; CARMA(2,1)) stochastic process models. The QPO model is described by an amplitude, a period and a coherence time-scale, while the DRW has only an amplitude and a time-scale. We find that the periodic and quasi-periodic stellar variables are generally better described by a QPO than a DRW, while quasars are better described by the DRW model. There are ambiguities in interpreting the QPO coherence time due to non-sinusoidal light-curve shapes, signal-to-noise ratio, error mischaracterizations and cadence. Higher order implementations of the QPO model that better capture light-curve shapes are necessary for the coherence time to have its implied physical meaning. Independent of physical meaning, the extra parameter of the QPO model successfully distinguishes most of the classes of periodic and quasi-periodic variables we consider.
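The DRW (CARMA(1,0)) is an Ornstein–Uhlenbeck process with just an amplitude and a timescale, and it can be simulated exactly; the timescale and amplitude below are arbitrary illustrative values, not fitted to any light curve:

```python
import numpy as np

rng = np.random.default_rng(10)

# Damped random walk (CARMA(1,0), i.e. an Ornstein-Uhlenbeck process) via its
# exact discretization: x_{n+1} = phi * x_n + noise_sd * xi, phi = exp(-dt/tau).
tau, sigma = 50.0, 0.2                  # timescale and diffusion amplitude (illustrative)
dt, n = 1.0, 200_000

phi = np.exp(-dt / tau)
noise_sd = sigma * np.sqrt(tau / 2.0 * (1.0 - phi**2))   # stationary-consistent step noise
x = np.empty(n)
x[0] = 0.0
xi = rng.normal(size=n)
for i in range(n - 1):
    x[i + 1] = phi * x[i] + noise_sd * xi[i]
```

The stationary variance of this process is sigma^2 * tau / 2 and its autocorrelation decays as exp(-lag/tau), which is exactly the two-parameter behaviour the DRW classifier fits; the QPO model adds an oscillatory factor to that autocorrelation.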

  1. Geometric structure and information change in phase transitions

    NASA Astrophysics Data System (ADS)

    Kim, Eun-jin; Hollerbach, Rainer

    2017-06-01

    We propose a toy model for a cyclic order-disorder transition and introduce a geometric methodology to understand stochastic processes involved in transitions. Specifically, our model consists of a pair of forward and backward processes (FPs and BPs) for the emergence and disappearance of a structure in a stochastic environment. We calculate time-dependent probability density functions (PDFs) and the information length L , which is the total number of different states that a system undergoes during the transition. Time-dependent PDFs during transient relaxation exhibit strikingly different behavior in FPs and BPs. In particular, FPs driven by instability undergo the broadening of the PDF with a large increase in fluctuations before the transition to the ordered state accompanied by narrowing the PDF width. During this stage, we identify an interesting geodesic solution accompanied by the self-regulation between the growth and nonlinear damping where the time scale τ of information change is constant in time, independent of the strength of the stochastic noise. In comparison, BPs are mainly driven by the macroscopic motion due to the movement of the PDF peak. The total information length L between initial and final states is much larger in BPs than in FPs, increasing linearly with the deviation γ of a control parameter from the critical state in BPs while increasing logarithmically with γ in FPs. L scales as |ln D| and D^{-1/2} in FPs and BPs, respectively, where D measures the strength of the stochastic forcing. These differing scalings with γ and D suggest a great utility of L in capturing different underlying processes, specifically, diffusion vs advection in phase transition by geometry. We discuss physical origins of these scalings and comment on implications of our results for bistable systems undergoing repeated order-disorder transitions (e.g., fitness).
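The information length can be checked numerically on a case with a known answer: for a Gaussian PDF whose mean moves at fixed width, L = Δμ/σ. A small quadrature sketch of L = ∫ dt sqrt(∫ dx (∂_t p)^2 / p); the moving Gaussian here is our own toy case, not the paper's forward/backward processes:

```python
import numpy as np

# Information length for a Gaussian PDF whose mean moves from 0 to 2 at
# fixed width sigma; analytically L = (mu_final - mu_initial) / sigma = 4.
x = np.linspace(-10.0, 14.0, 4001)
t = np.linspace(0.0, 1.0, 2001)
sigma = 0.5
mu = 2.0 * t                                 # mean moves linearly in time

def pdf(m):
    return np.exp(-(x - m)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

dt = t[1] - t[0]
dx = x[1] - x[0]
L = 0.0
for k in range(len(t) - 1):
    p0, p1 = pdf(mu[k]), pdf(mu[k + 1])
    dpdt = (p1 - p0) / dt                    # finite-difference time derivative
    pmid = 0.5 * (p0 + p1)
    gamma2 = np.sum(dpdt**2 / np.maximum(pmid, 1e-300)) * dx
    L += np.sqrt(gamma2) * dt                # accumulate sqrt(Gamma^2) dt

L_exact = (mu[-1] - mu[0]) / sigma           # = 4.0
```

This makes concrete what L measures: the number of statistically distinguishable states the PDF passes through, here simply the mean displacement in units of the PDF width.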

  2. Geometric structure and information change in phase transitions.

    PubMed

    Kim, Eun-Jin; Hollerbach, Rainer

    2017-06-01

    We propose a toy model for a cyclic order-disorder transition and introduce a geometric methodology to understand stochastic processes involved in transitions. Specifically, our model consists of a pair of forward and backward processes (FPs and BPs) for the emergence and disappearance of a structure in a stochastic environment. We calculate time-dependent probability density functions (PDFs) and the information length L, which is the total number of different states that a system undergoes during the transition. Time-dependent PDFs during transient relaxation exhibit strikingly different behavior in FPs and BPs. In particular, FPs driven by instability undergo the broadening of the PDF with a large increase in fluctuations before the transition to the ordered state accompanied by narrowing the PDF width. During this stage, we identify an interesting geodesic solution accompanied by the self-regulation between the growth and nonlinear damping where the time scale τ of information change is constant in time, independent of the strength of the stochastic noise. In comparison, BPs are mainly driven by the macroscopic motion due to the movement of the PDF peak. The total information length L between initial and final states is much larger in BPs than in FPs, increasing linearly with the deviation γ of a control parameter from the critical state in BPs while increasing logarithmically with γ in FPs. L scales as |lnD| and D^{-1/2} in FPs and BPs, respectively, where D measures the strength of the stochastic forcing. These differing scalings with γ and D suggest a great utility of L in capturing different underlying processes, specifically, diffusion vs advection in phase transition by geometry. We discuss physical origins of these scalings and comment on implications of our results for bistable systems undergoing repeated order-disorder transitions (e.g., fitness).

  3. Stochasticity of convection in Giga-LES data

    NASA Astrophysics Data System (ADS)

    De La Chevrotière, Michèle; Khouider, Boualem; Majda, Andrew J.

    2016-09-01

    The poor representation of tropical convection in general circulation models (GCMs) is believed to be responsible for much of the uncertainty in the predictions of weather and climate in the tropics. The stochastic multicloud model (SMCM) was recently developed by Khouider et al. (Commun Math Sci 8(1):187-216, 2010) to represent the missing variability in GCMs due to unresolved features of organized tropical convection. The SMCM is based on three cloud types (congestus, deep and stratiform), and transitions between these cloud types are formalized in terms of probability rules that are functions of the large-scale environment convective state and a set of seven arbitrary cloud timescale parameters. Here, a statistical inference method based on the Bayesian paradigm is applied to estimate these key cloud timescales from the Giga-LES dataset, a 24-h large-eddy simulation (LES) of deep tropical convection (Khairoutdinov et al. in J Adv Model Earth Syst 1(12), 2009) over a domain comparable to a GCM gridbox. A sequential learning strategy is used where the Giga-LES domain is partitioned into a few subdomains, and atmospheric time series obtained on each subdomain are used to train the Bayesian procedure incrementally. Convergence of the marginal posterior densities for all seven parameters is demonstrated for two different grid partitions, and sensitivity tests to other model parameters are also presented. A single column model simulation using the SMCM parameterization with the Giga-LES inferred parameters reproduces many important statistical features of the Giga-LES run, without any further tuning. In particular it exhibits intermittent dynamical behavior in both the stochastic cloud fractions and the large scale dynamics, with periods of dry phases followed by a coherent sequence of congestus, deep, and stratiform convection, varying on timescales of a few hours consistent with the Giga-LES time series. 
The chaotic variations of the cloud area fractions were captured fairly well, both qualitatively and quantitatively, demonstrating the stochastic nature of convection in the Giga-LES simulation.
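
    The transition structure can be illustrated with a four-state continuous-time Markov chain for a single site, with seven made-up timescales standing in for the SMCM's seven cloud timescale parameters (these are illustrative values, not the Bayesian-inferred ones):

```python
import numpy as np

# Four-state Markov chain sketch of multicloud transitions
# (0 = clear, 1 = congestus, 2 = deep, 3 = stratiform). The seven
# timescales are illustrative stand-ins for the seven SMCM cloud
# timescale parameters, not the inferred values.
tau = {(0, 1): 1.0, (0, 2): 3.0, (1, 2): 2.0, (1, 0): 4.0,
       (2, 3): 3.0, (2, 0): 4.0, (3, 0): 5.0}

Q = np.zeros((4, 4))
for (i, j), t_ij in tau.items():
    Q[i, j] = 1.0 / t_ij
Q -= np.diag(Q.sum(axis=1))          # generator: rows sum to zero

# Stationary cloud-area fractions: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # stationary fractions, summing to 1
```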

  4. Symmetries and stochastic symmetry breaking in multifractal geophysics: analysis and simulation with the help of the Lévy-Clifford algebra of cascade generators.

    NASA Astrophysics Data System (ADS)

    Schertzer, D. J. M.; Tchiguirinskaia, I.

    2016-12-01

    Multifractal fields, whose definition is rather independent of their domain dimension, have opened a new approach to geophysics, enabling exploration of its spatial extension, which is of prime importance as underlined by the expression "spatial chaos". However, multifractals have until recently been restricted to be scalar valued, i.e. to one-dimensional codomains. This has prevented dealing with the key question of complex component interactions and their nontrivial symmetries. We first emphasize that the Lie algebra of stochastic generators of cascade processes enables us to generalize multifractals to arbitrarily large codomains, e.g. flows of vector fields on large-dimensional manifolds. In particular, we have recently investigated the neat example of stable Lévy generators on a Clifford algebra, which have a number of seductive properties, e.g. universal statistical and robust algebraic properties, both defining the basic symmetries of the corresponding fields (Schertzer and Tchiguirinskaia, 2015). These properties provide a convenient multifractal framework to study both the symmetries of the fields and how they stochastically break the symmetries of the underlying equations due to boundary conditions, large-scale rotations and forcings. These developments should help us to answer challenging questions such as the climatology of (exo-)planets based on first principles (Pierrehumbert, 2013), to fully address the question of the limitations of quasi-geostrophic turbulence (Schertzer et al., 2012), and to explore the peculiar phenomenology of turbulent dynamics of the atmosphere or oceans that is neither two- nor three-dimensional. Pierrehumbert, R.T., 2013. Strange news from other stars. Nature Geoscience, 6(2), pp. 81–83. Schertzer, D. et al., 2012. Quasi-geostrophic turbulence and generalized scale invariance, a theoretical reply. Atmos. Chem. Phys., 12, pp. 327–336. Schertzer, D. & Tchiguirinskaia, I., 2015. Multifractal vector fields and stochastic Clifford algebra.
Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(12), p.123127

  5. Stochastic predation events and population persistence in bighorn sheep

    PubMed Central

    Festa-Bianchet, Marco; Coulson, Tim; Gaillard, Jean-Michel; Hogg, John T; Pelletier, Fanie

    2006-01-01

    Many studies have reported temporal changes in the relative importance of density-dependence and environmental stochasticity in affecting population growth rates, but they typically assume that the predominant factor limiting growth remains constant over long periods of time. Stochastic switches in limiting factors that persist for multiple time-steps have received little attention, but most wild populations may periodically experience such switches. Here, we consider the dynamics of three populations of individually marked bighorn sheep (Ovis canadensis) monitored for 24–28 years. Each population experienced one or two distinct cougar (Puma concolor) predation events leading to population declines. The onset and duration of predation events were stochastic and consistent with predation by specialist individuals. A realistic Markov chain model confirms that predation by specialist cougars can cause extinction of isolated populations. We suggest that such processes may be common. In such cases, predator–prey equilibria may only occur at large geographical and temporal scales, and are unlikely with increasing habitat fragmentation. PMID:16777749
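
    A hedged sketch of the kind of Markov-switching calculation described (all rates and thresholds below are illustrative, not the paper's fitted values): the population grows geometrically except during stochastically initiated predation episodes of geometric duration, and quasi-extinction probability is estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Markov-switching sketch of stochastic predation episodes. The
# population grows geometrically except while a specialist predator
# is present; episode onset and duration are geometrically distributed.
# All rates are illustrative, not the paper's fitted values.
def goes_extinct(n0=100, years=200, p_start=0.05, p_stop=0.25,
                 growth=1.08, predation=0.75, n_crit=2, n_cap=400):
    n, predator = float(n0), False
    for _ in range(years):
        predator = (rng.random() >= p_stop) if predator else (rng.random() < p_start)
        n = min(n * (predation if predator else growth), n_cap)
        if n < n_crit:
            return True                  # quasi-extinction threshold crossed
    return False

p_ext = float(np.mean([goes_extinct() for _ in range(500)]))
print(p_ext)   # Monte Carlo quasi-extinction probability
```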

  6. Statistical nature of infrared dynamics on de Sitter background

    NASA Astrophysics Data System (ADS)

    Tokuda, Junsei; Tanaka, Takahiro

    2018-02-01

    In this study, we formulate a systematic way of deriving an effective equation of motion (EoM) for long-wavelength modes of a massless scalar field with a general potential V(φ) on a de Sitter background, and investigate whether or not the effective EoM can be described as a classical stochastic process. Our formulation extends the usual stochastic formalism to include the sub-leading secular growth coming from the nonlinearity of short-wavelength modes. Applying our formalism to λφ^4 theory, we explicitly derive an effective EoM which correctly recovers the next-to-leading secularly growing part at late times, and show that this effective EoM can be seen as a classical stochastic process. Our extended stochastic formalism can describe all secularly growing terms which appear in all correlation functions with a specific operator ordering. The restriction of the operator ordering will not be a big drawback because the commutator of a light scalar field becomes negligible on large scales owing to the squeezing.
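
    The leading-order stochastic formalism that this work extends can be sketched as a Langevin equation per e-fold, dφ/dN = -V′(φ)/(3H²) + (H/2π)ξ (Starobinsky's form; our discretization and parameter choices are illustrative). With λ = 0 the field is free and ⟨φ²⟩ = (H/2π)²N, which the simulation reproduces:

```python
import numpy as np

rng = np.random.default_rng(0)

# Leading-order stochastic (Starobinsky) formalism on de Sitter:
# per e-fold N,  dphi = -V'(phi)/(3 H^2) dN + (H / 2pi) dW.
# For V = lam * phi^4 / 4 the drift is -lam * phi^3 / (3 H^2). We set
# lam = 0 (free field) so the exact check <phi^2> = (H/2pi)^2 * N is
# available; H and lam values are illustrative.
H, lam = 1.0, 0.0
n_traj, n_steps, dN = 20000, 100, 0.1
phi = np.zeros(n_traj)
for _ in range(n_steps):
    drift = -lam * phi**3 / (3 * H**2)
    phi += drift * dN + (H / (2 * np.pi)) * np.sqrt(dN) * rng.standard_normal(n_traj)

N_total = n_steps * dN                     # 10 e-folds
print(phi.var(), (H / (2 * np.pi))**2 * N_total)   # both ≈ 0.2533
```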

  7. Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data

    NASA Astrophysics Data System (ADS)

    Larkin, Steven Paul

    Standard crustal seismic modeling obtains deterministic velocity models which ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicates that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P to S conversions. Surface waves do not play a significant role in this coda. 
Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical PmP energy. Possibly related, inconsistencies in published velocity models are rectified by hypothesizing the existence of large, elongate, high-velocity bodies at the base of the crust oriented to and of similar scale as the basins and ranges at the surface. This structure would result in an anisotropic lower crust.

  8. Stochastic thermodynamics across scales: Emergent inter-attractoral discrete Markov jump process and its underlying continuous diffusion

    NASA Astrophysics Data System (ADS)

    Santillán, Moisés; Qian, Hong

    2013-01-01

    We investigate the internal consistency of a recently developed mathematical thermodynamic structure across scales, between a continuous stochastic nonlinear dynamical system, i.e., a diffusion process with Langevin and Fokker-Planck equations, and its emergent discrete, inter-attractoral Markov jump process. We analyze how the system’s thermodynamic state functions, e.g. free energy F, entropy S, entropy production ep, free energy dissipation Ḟ, etc., are related when the continuous system is described with coarse-grained discrete variables. It is shown that the thermodynamics derived from the underlying, detailed continuous dynamics gives rise to exactly the free-energy representation of Gibbs and Helmholtz. That is, the system’s thermodynamic structure is the same as if one only takes a middle road and starts with the natural discrete description, with the corresponding transition rates empirically determined. By natural we mean in the thermodynamic limit of a large system, with an inherent separation of time scales between inter- and intra-attractoral dynamics. This result generalizes a fundamental idea from chemistry, and the theory of Kramers, by incorporating thermodynamics: while a mechanical description of a molecule is in terms of continuous bond lengths and angles, chemical reactions are phenomenologically described by a discrete representation, in terms of exponential rate laws and a stochastic thermodynamics.
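
    A compact toy example of the scale separation being discussed (our own construction, with illustrative parameters): an overdamped Langevin diffusion in a double-well potential U(x) = (x² − 1)²/4, coarse-grained by attractor basin, behaves at long times as an emergent two-state Markov jump process.

```python
import numpy as np

rng = np.random.default_rng(2)

# Overdamped Langevin dynamics in the double-well U(x) = (x^2 - 1)^2 / 4.
# Coarse-graining x to its attractor basin (the sign of x) yields the
# emergent two-state Markov jump process. D and dt are illustrative.
D, dt, n_steps = 0.12, 2e-3, 500_000
noise = np.sqrt(2 * D * dt) * rng.standard_normal(n_steps)

x, basin = -1.0, []
for i in range(n_steps):
    x += -x * (x**2 - 1) * dt + noise[i]     # Euler-Maruyama step
    if i % 500 == 0:                         # sample the basin every Δt = 1
        basin.append(0 if x < 0 else 1)

basin = np.array(basin)
n_jumps = np.count_nonzero(np.diff(basin))   # inter-attractor transitions
print(basin.mean(), n_jumps)
```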

  9. Combining Deterministic structures and stochastic heterogeneity for transport modeling

    NASA Astrophysics Data System (ADS)

    Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg

    2017-04-01

    Contaminant transport in highly heterogeneous aquifers is extremely challenging and a subject of current scientific debate. Tracer plumes often show non-symmetric but highly skewed plume shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network. This is in contrast to the available information for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information and the stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity which might be measured by pumping tests indicating values differing by orders of magnitude. A sub-scale heterogeneity is introduced within every block. This heterogeneity can be modeled as bimodal or log-normally distributed. The impact of input parameters, structure and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful implementation of the model was achieved for the well-known MADE site.
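
    The conceptual structure can be sketched as follows (block means and variance are illustrative, not MADE-site values): deterministic blocks of mean hydraulic conductivity differing by orders of magnitude, each overlaid with log-normal sub-scale heterogeneity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Deterministic blocks of mean hydraulic conductivity (as might be
# measured by pumping tests, orders of magnitude apart), each overlaid
# with log-normal sub-scale heterogeneity. All values illustrative.
block_means = [1e-5, 1e-3, 1e-4]          # m/s
cells_per_block, sigma_lnK = 100, 0.5

blocks = []
for Km in block_means:
    lnK = np.log(Km) + sigma_lnK * rng.standard_normal(cells_per_block)
    blocks.append(np.exp(lnK))
K = np.concatenate(blocks)                # composite 1-D conductivity field

# The geometric mean of each block recovers its prescribed mean value.
geo = [np.exp(np.log(b).mean()) for b in blocks]
print(geo)
```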

  10. Large-scale database searching using tandem mass spectra: looking up the answer in the back of the book.

    PubMed

    Sadygov, Rovshan G; Cociorva, Daniel; Yates, John R

    2004-12-01

    Database searching is an essential element of large-scale proteomics. Because these methods are widely used, it is important to understand the rationale of the algorithms. Most algorithms are based on concepts first developed in SEQUEST and PeptideSearch. Four basic approaches are used to determine a match between a spectrum and sequence: descriptive, interpretative, stochastic and probability-based matching. We review the basic concepts used by most search algorithms, the computational modeling of peptide identification and current challenges and limitations of this approach for protein identification.

  11. Modeling stochastic noise in gene regulatory systems

    PubMed Central

    Meister, Arwen; Du, Chao; Li, Ye Henry; Wong, Wing Hung

    2014-01-01

    The Master equation is considered the gold standard for modeling the stochastic mechanisms of gene regulation in molecular detail, but it is too complex to solve exactly in most cases, so approximation and simulation methods are essential. However, there is still a lack of consensus about the best way to carry these out. To help clarify the situation, we review Master equation models of gene regulation, theoretical approximations based on an expansion method due to N.G. van Kampen and R. Kubo, and simulation algorithms due to D.T. Gillespie and P. Langevin. Expansion of the Master equation shows that for systems with a single stable steady-state, the stochastic model reduces to a deterministic model in a first-order approximation. Additional theory, also due to van Kampen, describes the asymptotic behavior of multistable systems. To support and illustrate the theory and provide further insight into the complex behavior of multistable systems, we perform a detailed simulation study comparing the various approximation and simulation methods applied to synthetic gene regulatory systems with various qualitative characteristics. The simulation studies show that for large stochastic systems with a single steady-state, deterministic models are quite accurate, since the probability distribution of the solution has a single peak tracking the deterministic trajectory whose variance is inversely proportional to the system size. In multistable stochastic systems, large fluctuations can cause individual trajectories to escape from the domain of attraction of one steady-state and be attracted to another, so the system eventually reaches a multimodal probability distribution in which all stable steady-states are represented proportional to their relative stability. However, since the escape time scales exponentially with system size, this process can take a very long time in large systems. PMID:25632368
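
    The simulation side of this comparison can be illustrated with Gillespie's direct method on the simplest birth-death gene expression model (our minimal example; the paper's systems are more elaborate), whose stationary law is Poisson with mean k/γ:

```python
import numpy as np

rng = np.random.default_rng(4)

# Gillespie's direct method (stochastic simulation algorithm) for the
# simplest gene expression model:  DNA --k--> DNA + mRNA,  mRNA --g--> 0.
# The stationary distribution is Poisson with mean k/g, so the
# time-averaged mean and variance should both be ≈ k/g.
def gillespie_birth_death(k=10.0, g=1.0, t_end=2000.0, burn_in=100.0):
    t, m = 0.0, 0
    s1 = s2 = tot = 0.0                      # time-weighted accumulators
    while t < t_end:
        total_rate = k + g * m
        dt = rng.exponential(1.0 / total_rate)
        if t > burn_in:                      # time-average past burn-in
            s1 += m * dt
            s2 += m * m * dt
            tot += dt
        t += dt
        m += 1 if rng.random() < k / total_rate else -1
    mean = s1 / tot
    return mean, s2 / tot - mean**2

mean, var = gillespie_birth_death()
print(mean, var)   # both ≈ k/g = 10
```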

  12. Water resources planning and management : A stochastic dual dynamic programming approach

    NASA Astrophysics Data System (ADS)

    Goor, Q.; Pinte, D.; Tilmant, A.

    2008-12-01

    Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be found taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale non-linear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints, and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to describe a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering the hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To be able to implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. 
This model is illustrated on a cascade of 14 reservoirs on the Nile river basin.
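
    SDDP approximates the value function with cutting planes; the traditional SDP backward recursion it extends can be sketched for a single reservoir with discretized storage and equiprobable inflows (all numbers illustrative):

```python
import numpy as np

# Traditional stochastic dynamic programming (the recursion that SDDP
# extends with cutting planes) for a single reservoir: discretized
# storage, equiprobable inflow scenarios, concave release benefit.
# All numbers are illustrative.
S = np.arange(0, 11)                  # storage levels 0..10
inflows = np.array([0, 2, 4])         # equiprobable inflows
T = 12                                # planning stages

V = np.zeros(S.size)                  # terminal value function
for _ in range(T):
    V_new = np.zeros(S.size)
    for i, s in enumerate(S):
        best = -np.inf
        for r in range(int(s) + 1):   # feasible releases from storage s
            nxt = np.minimum(s - r + inflows, S[-1])   # spill at capacity
            val = np.sqrt(r) + np.mean(V[nxt])         # benefit + future value
            best = max(best, val)
        V_new[i] = best
    V = V_new
print(V)   # value of stored water: nondecreasing in storage
```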

  13. The global reference atmospheric model, mod 2 (with two scale perturbation model)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Hargraves, W. R.

    1976-01-01

    The Global Reference Atmospheric Model was improved to produce more realistic simulations of vertical profiles of atmospheric parameters. A revised two-scale random perturbation model using perturbation magnitudes which are adjusted to conform to constraints imposed by the perfect gas law and the hydrostatic condition is described. The two-scale perturbation model produces appropriately correlated (horizontally and vertically) small-scale and large-scale perturbations. These stochastically simulated perturbations are representative of the magnitudes and wavelengths of perturbations produced by tides and planetary-scale waves (large scale) and turbulence and gravity waves (small scale). Other new features of the model are: (1) a second-order geostrophic wind relation for use at low latitudes, which does not "blow up" there as the ordinary geostrophic relation does; and (2) revised quasi-biennial amplitudes and phases and revised stationary perturbations, based on data through 1972.
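
    The two-scale idea can be sketched with vertically correlated first-order autoregressive perturbation profiles, one with a long correlation length (tides and planetary waves) and one with a short correlation length (gravity waves and turbulence); correlation lengths and magnitudes below are illustrative, not the model's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-scale perturbation sketch: vertically correlated AR(1) profiles,
# one long-wavelength (tides, planetary waves) and one short-wavelength
# (gravity waves, turbulence). Correlation lengths and magnitudes are
# illustrative, not the model's calibrated values.
def ar1_profile(n, dz, corr_len, sigma):
    r = np.exp(-dz / corr_len)            # lag-1 vertical autocorrelation
    eps = np.zeros(n)
    for i in range(1, n):
        eps[i] = r * eps[i - 1] + sigma * np.sqrt(1 - r * r) * rng.standard_normal()
    return eps

z = np.arange(0.0, 100e3, 500.0)          # 0-100 km in 500 m steps
large = ar1_profile(z.size, 500.0, 20e3, 0.03)   # large-scale component
small = ar1_profile(z.size, 500.0, 2e3, 0.01)    # small-scale component
total = large + small                      # combined relative perturbation
print(total.std())
```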

  14. Application of stochastic processes in random growth and evolutionary dynamics

    NASA Astrophysics Data System (ADS)

    Oikonomou, Panagiotis

    We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Lévy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha, which defines the power-law tail of the stable Lévy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, grows logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes. 
Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. While homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously towards the target phenotype. Moreover, we show that scale-free networks always evolve faster than homogeneous random networks; remarkably, this property does not depend on the precise value of the topological parameter. By contrast, homogeneous random networks require a specific tuning of their topological parameter in order to optimize their fitness. This model suggests that the evolutionary paths of biological networks, punctuated or continuous, may solely be determined by the network topology.
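
    A hedged sketch of the mutation-selection dynamics described (our own minimal setup, not the thesis model: a synchronous random Boolean network with K = 2 inputs per node, truth-table mutations only, and fitness measured as Hamming similarity of the reached state to a target phenotype):

```python
import numpy as np

rng = np.random.default_rng(6)

# Mutation-selection walk of a synchronous random Boolean network
# toward a target phenotype (our minimal setup): N nodes with K = 2
# inputs each; the phenotype is the state reached after a fixed number
# of synchronous updates; fitness is the Hamming similarity to a random
# target state. Only truth-table bits mutate; wiring mutations omitted.
N, K = 12, 2
target = rng.integers(0, 2, N)
weights = 2 ** np.arange(K)

def phenotype(inputs, tables, steps=30):
    s = np.zeros(N, dtype=np.int64)
    for _ in range(steps):                       # synchronous update
        idx = (s[inputs] * weights).sum(axis=1)
        s = tables[np.arange(N), idx]
    return s

def fitness(inputs, tables):
    return float((phenotype(inputs, tables) == target).mean())

inputs = rng.integers(0, N, (N, K))              # random wiring
tables = rng.integers(0, 2, (N, 2**K))           # random Boolean rules
f0 = f = fitness(inputs, tables)
for _ in range(3000):                            # neutral + beneficial accepted
    trial = tables.copy()
    trial[rng.integers(N), rng.integers(2**K)] ^= 1
    ft = fitness(inputs, trial)
    if ft >= f:
        tables, f = trial, ft
print(f0, f)   # fitness is nondecreasing under this acceptance rule
```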

  15. Visual attention mitigates information loss in small- and large-scale neural codes

    PubMed Central

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  16. Disentangling Random Motion and Flow in a Complex Medium

    PubMed Central

    Koslover, Elena F.; Chan, Caleb K.; Theriot, Julie A.

    2016-01-01

    We describe a technique for deconvolving the stochastic motion of particles from large-scale fluid flow in a dynamic environment such as that found in living cells. The method leverages the separation of timescales to subtract out the persistent component of motion from single-particle trajectories. The mean-squared displacement of the resulting trajectories is rescaled so as to enable robust extraction of the diffusion coefficient and subdiffusive scaling exponent of the stochastic motion. We demonstrate the applicability of the method for characterizing both diffusive and fractional Brownian motion overlaid by flow and analytically calculate the accuracy of the method in different parameter regimes. This technique is employed to analyze the motion of lysosomes in motile neutrophil-like cells, showing that the cytoplasm of these cells behaves as a viscous fluid at the timescales examined. PMID:26840734
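
    The core idea, subtracting the persistent component before computing the mean-squared displacement, can be sketched in one dimension for pure diffusion overlaid by constant flow (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# One-dimensional sketch of the method: diffusion (coefficient D)
# overlaid by constant flow v. Subtracting the persistent velocity
# estimated from the trajectory recovers the diffusive MSD ≈ 2*D*t.
# All parameters are illustrative.
D, v, dt, n = 0.5, 3.0, 0.01, 20_000
steps = v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n)
x = np.cumsum(steps)

v_est = (x[-1] - x[0]) / ((n - 1) * dt)       # persistent component
x_sub = x - v_est * dt * np.arange(n)         # flow-subtracted trajectory

def msd(traj, lag):
    d = traj[lag:] - traj[:-lag]
    return float(np.mean(d * d))

lag = 100                                     # lag time = lag * dt = 1.0
D_est = msd(x_sub, lag) / (2 * lag * dt)
print(v_est, D_est)   # ≈ 3.0 and ≈ 0.5
```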

  17. Fast and Precise Emulation of Stochastic Biochemical Reaction Networks With Amplified Thermal Noise in Silicon Chips.

    PubMed

    Kim, Jaewook; Woo, Sung Sik; Sarpeshkar, Rahul

    2018-04-01

    The analysis and simulation of complex interacting biochemical reaction pathways in cells is important in all of systems biology and medicine. Yet, the dynamics of even a modest number of noisy or stochastic coupled biochemical reactions is extremely time consuming to simulate. In large part, this is because of the expensive cost of random number and Poisson process generation and the presence of stiff, coupled, nonlinear differential equations. Here, we demonstrate that we can amplify inherent thermal noise in chips to emulate randomness physically, thus alleviating these costs significantly. Concurrently, molecular flux in thermodynamic biochemical reactions maps to thermodynamic electronic current in a transistor such that stiff nonlinear biochemical differential equations are emulated exactly in compact, digitally programmable, highly parallel analog "cytomorphic" transistor circuits. For even small-scale systems involving just 80 stochastic reactions, our 0.35-μm BiCMOS chips yield a 311× speedup in the simulation time of Gillespie's stochastic algorithm over COPASI, a fast biochemical-reaction software simulator that is widely used in computational biology; they yield a 15 500× speedup over equivalent MATLAB stochastic simulations. The chip emulation results are consistent with these software simulations over a large range of signal-to-noise ratios. Most importantly, our physical emulation of Poisson chemical dynamics does not involve any inherently sequential processes and updates such that, unlike prior exact simulation approaches, they are parallelizable, asynchronous, and enable even more speedup for larger-size networks.

  18. Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads

    DOE PAGES

    Moon, Jae; Manuel, Lance; Churchfield, Matthew; ...

    2017-12-28

    Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.

  19. Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moon, Jae; Manuel, Lance; Churchfield, Matthew

    Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.

  20. LARGE-SCALE NATURAL GRADIENT TRACER TEST IN SAND AND GRAVEL, CAPE COD, MASSACHUSETTS 3. HYDRAULIC CONDUCTIVITY AND CALCULATED MACRODISPERSIVITIES

    EPA Science Inventory

    Hydraulic conductivity (K) variability in a sand and gravel aquifer on Cape Cod, Massachusetts, was measured and subsequently used in stochastic transport theories to estimate macrodispersivities. Nearly 1500 K measurements were obtained by borehole flowmeter tests ...

  1. Trends in modern system theory

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1976-01-01

    The topics considered are related to linear control system design, adaptive control, failure detection, control under failure, system reliability, and large-scale systems and decentralized control. It is pointed out that the design of a linear feedback control system which regulates a process about a desirable set point or steady-state condition in the presence of disturbances is a very important problem. The linearized dynamics of the process are used for design purposes. The typical linear-quadratic design involving the solution of the optimal control problem of a linear time-invariant system with respect to a quadratic performance criterion is considered along with gain reduction theorems and the multivariable phase margin theorem. The stumbling block in many adaptive design methodologies is associated with the amount of real time computation which is necessary. Attention is also given to the desperate need to develop good theories for large-scale systems, the beginning of a microprocessor revolution, the translation of the Wiener-Hopf theory into the time domain, and advances made in dynamic team theory, dynamic stochastic games, and finite memory stochastic control.

  2. Cast aluminium single crystals cross the threshold from bulk to size-dependent stochastic plasticity

    NASA Astrophysics Data System (ADS)

    Krebs, J.; Rao, S. I.; Verheyden, S.; Miko, C.; Goodall, R.; Curtin, W. A.; Mortensen, A.

    2017-07-01

    Metals are known to exhibit mechanical behaviour at the nanoscale different to bulk samples. This transition typically initiates at the micrometre scale, yet existing techniques to produce micrometre-sized samples often introduce artefacts that can influence deformation mechanisms. Here, we demonstrate the casting of micrometre-scale aluminium single-crystal wires by infiltration of a salt mould. Samples have millimetre lengths, smooth surfaces, a range of crystallographic orientations, and a diameter D as small as 6 μm. The wires deform in bursts, at a stress that increases with decreasing D. Bursts greater than 200 nm account for roughly 50% of wire deformation and have exponentially distributed intensities. Dislocation dynamics simulations show that single-arm sources that produce large displacement bursts halted by stochastic cross-slip and lock formation explain microcast wire behaviour. This microcasting technique may be extended to several other metals or alloys and offers the possibility of exploring mechanical behaviour spanning the micrometre scale.

  3. Agent based reasoning for the non-linear stochastic models of long-range memory

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Gontis, V.

    2012-02-01

    We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. The stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, which provides a matching macroscopic description, serves as microscopic reasoning for the earlier proposed stochastic model exhibiting power-law statistics.
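
    For orientation, the herding mechanism underlying Kirman's original fixed-time-scale model can be sketched in a few lines. This is a minimal illustration only: the parameter values and helper names below are assumptions for the sketch, not the paper's extended variable-time-scale model.

```python
import random

def kirman_step(n_a, n_total, eps=0.02, h=0.3, rng=random):
    """One update of Kirman's two-state herding model.

    n_a agents hold opinion A.  An agent switches either
    idiosyncratically (rate eps) or by recruitment after meeting
    an agent of the other type (rate h).  Parameter values are
    illustrative, not taken from the paper.
    """
    n_b = n_total - n_a
    p_a_to_b = (n_a / n_total) * (eps + h * n_b / (n_total - 1))
    p_b_to_a = (n_b / n_total) * (eps + h * n_a / (n_total - 1))
    u = rng.random()
    if u < p_a_to_b:
        return n_a - 1
    if u < p_a_to_b + p_b_to_a:
        return n_a + 1
    return n_a

def simulate(n_total=100, steps=10_000, seed=1):
    """Iterate the update rule and record the population path."""
    rng = random.Random(seed)
    n_a = n_total // 2
    path = [n_a]
    for _ in range(steps):
        n_a = kirman_step(n_a, n_total, rng=rng)
        path.append(n_a)
    return path
```

    For small eps relative to h, the path spends long stretches near the extremes and switches between them, which is the herding behaviour the stochastic description coarse-grains.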

  4. Anomalous scaling of stochastic processes and the Moses effect

    NASA Astrophysics Data System (ADS)

    Chen, Lijian; Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to follow a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the statistics of the increments are time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
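
    The overall scaling exponent can be estimated from data by tracking how the width of the displacement distribution grows with t. The sketch below (an illustration, not the paper's Joseph/Noah/Moses decomposition) measures the width as the median absolute displacement across an ensemble of ordinary random walks, for which the exponent should come out near 1/2.

```python
import math
import random

def brownian_paths(n_paths=2000, n_steps=256, seed=7):
    """Ensemble of ordinary random walks (H = 1/2 by construction)."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        x, path = 0.0, [0.0]
        for _ in range(n_steps):
            x += rng.gauss(0.0, 1.0)
            path.append(x)
        paths.append(path)
    return paths

def hurst_from_width(paths):
    """Estimate the scaling exponent from the log-log slope of the
    distribution width (median absolute displacement across the
    ensemble) versus time, via a least-squares fit."""
    times = [2 ** k for k in range(2, 9)]  # t = 4 ... 256
    log_t, log_w = [], []
    for t in times:
        disp = sorted(abs(p[t]) for p in paths)
        log_t.append(math.log(t))
        log_w.append(math.log(disp[len(disp) // 2]))  # median width
    n = len(times)
    mt, mw = sum(log_t) / n, sum(log_w) / n
    return (sum((a - mt) * (b - mw) for a, b in zip(log_t, log_w))
            / sum((a - mt) ** 2 for a in log_t))
```

    Anomalous processes would show a slope differing from 1/2; separating the Joseph, Noah, and Moses contributions requires the additional increment-level analyses described in the abstract.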

  5. Anomalous scaling of stochastic processes and the Moses effect.

    PubMed

    Chen, Lijian; Bassler, Kevin E; McCauley, Joseph L; Gunaratne, Gemunu H

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.

  6. Stochastic theory of log-periodic patterns

    NASA Astrophysics Data System (ADS)

    Canessa, Enrique

    2000-12-01

    We introduce an analytical model based on birth-death clustering processes to help in understanding the empirical log-periodic corrections to power law scaling and the finite-time singularity as reported in several domains including rupture, earthquakes, world population and financial systems. In our stochastic theory log-periodicities are a consequence of transient clusters induced by an entropy-like term that may reflect the amount of co-operative information carried by the state of a large system of different species. The clustering completion rates for the system are assumed to be given by a simple linear death process. The singularity at t0 is derived in terms of birth-death clustering coefficients.

  7. A LES-Langevin model for turbulence

    NASA Astrophysics Data System (ADS)

    Dolganov, Rostislav; Dubrulle, Bérengère; Laval, Jean-Philippe

    2006-11-01

    The rationale for Large Eddy Simulation is rooted in our inability to handle all degrees of freedom (N ~ 10^16 for Re ~ 10^7). ``Deterministic'' models based on eddy-viscosity seek to reproduce the intensification of the energy transport. However, they fail to reproduce backward energy transfer (backscatter) from small to large scales, which is an essential feature of turbulence near walls or in boundary layers. To capture this backscatter, ``stochastic'' strategies have been developed. In the present talk, we shall discuss such a strategy, based on Rapid Distortion Theory (RDT). Specifically, we first divide the small-scale contribution to the Reynolds stress tensor into two parts: a turbulent viscosity and the pseudo-Lamb vector, representing the nonlinear cross terms of resolved and sub-grid scales. We then estimate the dynamics of small-scale motion by RDT applied to the Navier-Stokes equation. We use this to model the cross-term evolution by a Langevin equation, in which the random force is provided by sub-grid pressure terms. Our LES model is thus made of a truncated Navier-Stokes equation including the turbulent force and a generalized Langevin equation for the latter, integrated on a twice-finer grid. The backscatter is automatically included in our stochastic model of the pseudo-Lamb vector. We apply this model to the case of homogeneous isotropic turbulence and turbulent channel flow.

  8. BOOK REVIEW: Statistical Mechanics of Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Cambon, C.

    2004-10-01

    This is a handbook for a computational approach to reacting flows, including background material on statistical mechanics. In this sense, the title is somewhat misleading with respect to other books dedicated to the statistical theory of turbulence (e.g. Monin and Yaglom). In the present book, emphasis is placed on modelling (engineering closures) for computational fluid dynamics. The probabilistic (pdf) approach is applied to the local scalar field, motivated first by the nonlinearity of chemical source terms which appear in the transport equations of reacting species. The probabilistic and stochastic approaches are also used for the velocity field and particle position; nevertheless they are essentially limited to Lagrangian models for a local vector, with only single-point statistics, as for the scalar. Accordingly, conventional techniques, such as single-point closures for RANS (Reynolds-averaged Navier-Stokes) and subgrid-scale models for LES (large-eddy simulations), are described and in some cases reformulated using underlying Langevin models and filtered pdfs. Even if the theoretical approach to turbulence is not discussed in general, the essentials of probabilistic and stochastic-processes methods are described, with a useful reminder concerning statistics at the molecular level. The book comprises 7 chapters. Chapter 1 briefly states the goals and contents, with a very clear synoptic scheme on page 2. Chapter 2 presents definitions and examples of pdfs and related statistical moments. Chapter 3 deals with stochastic processes, pdf transport equations, from Kramers-Moyal to Fokker-Planck (for Markov processes), and moment equations. Stochastic differential equations are introduced and their relationship to pdfs described. This chapter ends with a discussion of stochastic modelling. The equations of fluid mechanics and thermodynamics are addressed in chapter 4. 
Classical conservation equations (mass, velocity, internal energy) are derived from their counterparts at the molecular level. In addition, equations are given for multicomponent reacting systems. The chapter ends with miscellaneous topics, including DNS, (idea of) the energy cascade, and RANS. Chapter 5 is devoted to stochastic models for the large scales of turbulence. Langevin-type models for velocity (and particle position) are presented, and their various consequences for second-order single-point correlations (Reynolds stress components, Kolmogorov constant) are discussed. These models are then presented for the scalar. The chapter ends with compressible high-speed flows and various models, ranging from k-epsilon to hybrid RANS-pdf. Stochastic models for small-scale turbulence are addressed in chapter 6. These models are based on the concept of a filter density function (FDF) for the scalar, and a more conventional SGS (sub-grid-scale model) for the velocity in LES. The final chapter, chapter 7, is entitled `The unification of turbulence models' and aims at reconciling large-scale and small-scale modelling. This book offers a timely survey of techniques in modern computational fluid mechanics for turbulent flows with reacting scalars. It should be of interest to engineers, while the discussion of the underlying tools, namely pdfs, stochastic and statistical equations should also be attractive to applied mathematicians and physicists. The book's emphasis on local pdfs and stochastic Langevin models gives a consistent structure to the book and allows the author to cover almost the whole spectrum of practical modelling in turbulent CFD. On the other hand, one might regret that non-local issues are not mentioned explicitly, or even briefly. These problems range from the presence of pressure-strain correlations in the Reynolds stress transport equations to the presence of two-point pdfs in the single-point pdf equation derived from the Navier-Stokes equations. 
(One may recall that, even without scalar transport, a general closure problem for turbulence statistics results from both non-linearity and non-locality of Navier-Stokes equations, the latter coming from, e.g., the nonlocal relationship of velocity and pressure in the quasi-incompressible case. These two aspects are often intricately linked. It is well known that non-linearity alone is not responsible for the `problem', as evidenced by 1D turbulence without pressure (`Burgulence' from the Burgers equation) and probably 3D (cosmological gas). A local description in terms of pdf for the velocity can resolve the `non-linear' problem, which instead yields an infinite hierarchy of equations in terms of moments. On the other hand, non-locality yields a hierarchy of unclosed equations, with the single-point pdf equation for velocity derived from NS incompressible equations involving a two-point pdf, and so on. The general relationship was given by Lundgren (1967, Phys. Fluids 10 (5), 969-975), with the equation for pdf at n points involving the pdf at n+1 points. The nonlocal problem appears in various statistical models which are not discussed in the book. The simplest example is full RST or ASM models, in which the closure of pressure-strain correlations is pivotal (their counterpart ought to be identified and discussed in equations (5-21) and the following ones). The book does not address more sophisticated non-local approaches, such as two-point (or spectral) non-linear closure theories and models, `rapid distortion theory' for linear regimes, not to mention scaling and intermittency based on two-point structure functions, etc. The book sometimes mixes theoretical modelling and pure empirical relationships, the empirical character coming from the lack of a nonlocal (two-point) approach.) 
In short, the book is orientated more towards applications than towards turbulence theory; it is written clearly and concisely and should be useful to a large community, interested either in the underlying stochastic formalism or in CFD applications.

  9. Multiscale Hy3S: hybrid stochastic simulation for supercomputers.

    PubMed

    Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N

    2006-02-24

    Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset with a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translational elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of the optimized NetCDF binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. 
We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
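
    The "original stochastic simulation algorithm" whose cost motivates the hybrid methods above is Gillespie's direct method. A minimal, self-contained sketch for a single birth-death species (an illustration only; the rate constants are arbitrary and this is not Hy3S itself) looks like:

```python
import random

def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0,
                          t_end=100.0, seed=3):
    """Gillespie's direct method for the birth-death system
        0 -> X  (propensity k_birth)
        X -> 0  (propensity k_death * x)
    Returns the jump times and the molecule counts after each jump."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while True:
        a1, a2 = k_birth, k_death * x
        a0 = a1 + a2
        t += rng.expovariate(a0)       # waiting time ~ Exp(total propensity)
        if t >= t_end:
            break
        # Pick the reaction with probability proportional to its propensity.
        x = x + 1 if rng.random() * a0 < a1 else x - 1
        times.append(t)
        counts.append(x)
    return times, counts
```

    Because every individual reaction event is simulated, the cost grows with the total propensity; hybrid methods such as those in Hy3S avoid this by treating fast reaction channels with cheaper continuous or deterministic descriptions.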

  10. Boosting Bayesian parameter inference of stochastic differential equation models with methods from statistical physics

    NASA Astrophysics Data System (ADS)

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Measured time-series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to a full-fledged Bayesian parameter inference. For concreteness we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that can be described by a linear reservoir, at the scale of observation. All the neglected processes are assumed to happen at much shorter time-scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail, for models of this kind, because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. The use of Kalman filters is precluded by the non-linearity of the model. Particle filters could be used but become increasingly inefficient with growing number of data points. 
Hamiltonian Monte Carlo algorithms allow us to translate this inference problem to the problem of simulating the dynamics of a statistical mechanics system and give us access to most sophisticated methods that have been developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automated differentiation algorithms, allow us to perform a full-fledged Bayesian inference, for a large class of SDE models, in a highly efficient and largely automatized manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
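
    As a hedged illustration of the kind of toy model described (a linear reservoir whose noise amplitude scales with the stored volume), a forward Euler-Maruyama simulation might look as follows. The drift-diffusion parametrization and all parameter values here are assumptions made for the sketch, not the authors' calibrated model, and the inference step itself (Hamiltonian Monte Carlo over discretized trajectories) is not shown.

```python
import random

def simulate_reservoir(r=1.0, k=0.1, sigma=0.2, v0=10.0,
                       dt=0.01, n_steps=5000, seed=11):
    """Euler-Maruyama integration of
        dV = (r - k*V) dt + sigma * V * dW,
    i.e. a linear reservoir (constant inflow r, outflow k*V) with
    a noise amplitude proportional to the stored volume V."""
    rng = random.Random(seed)
    v, path = v0, [v0]
    sqdt = dt ** 0.5
    for _ in range(n_steps):
        dw = rng.gauss(0.0, sqdt)          # Wiener increment
        v += (r - k * v) * dt + sigma * v * dw
        v = max(v, 0.0)                    # water volume cannot go negative
        path.append(v)
    return path
```

    The multiplicative noise term sigma * V is what produces the fat-tailed output distributions mentioned in the abstract, even under constant forcing.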

  11. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows us to extract the infinite-time and infinite-size limit of these estimators.
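
    The cloning idea can be illustrated with a discrete-time toy variant (an assumption-laden sketch, not the paper's continuous-time algorithm): copies of a two-state Markov chain are weighted by the exponential of the observable and resampled, and the scaled cumulant generating function is read off from the mean log growth rate of the population weight.

```python
import math
import random

def cloning_scgf(s, n_copies=500, n_steps=200, p_flip=0.3, seed=5):
    """Discrete-time cloning estimate of the scaled cumulant
    generating function psi(s) for the time average of b_t in {0, 1},
    where b_t is a two-state Markov chain with flip probability
    p_flip.  Each copy is weighted by exp(s * b_t) and the population
    is resampled proportionally to the weights; psi is the mean log
    population growth rate.  All parameters are illustrative."""
    rng = random.Random(seed)
    states = [rng.random() < 0.5 for _ in range(n_copies)]
    log_growth = 0.0
    for _ in range(n_steps):
        # Evolve each copy one step.
        states = [(not b) if rng.random() < p_flip else b for b in states]
        # Weight each copy and record the mean weight.
        weights = [math.exp(s * b) for b in states]
        mean_w = sum(weights) / n_copies
        log_growth += math.log(mean_w)
        # Resample the population proportionally to the weights.
        states = rng.choices(states, weights=weights, k=n_copies)
    return log_growth / n_steps
```

    The finite n_copies and n_steps biases of exactly this kind of estimator are what the paper's finite-time and finite-size scaling analysis corrects for.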

  12. Visual attention mitigates information loss in small- and large-scale neural codes.

    PubMed

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-04-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. A multi-scaled approach for simulating chemical reaction systems.

    PubMed

    Burrage, Kevin; Tian, Tianhai; Burrage, Pamela

    2004-01-01

    In this paper we give an overview of some very recent work, as well as presenting a new approach, on the stochastic simulation of multi-scaled systems involving chemical reactions. In many biological systems (such as genetic regulation and cellular dynamics) there is a mix between small numbers of key regulatory proteins, and medium and large numbers of molecules. In addition, it is important to be able to follow the trajectories of individual molecules by taking proper account of the randomness inherent in such a system. We describe different types of simulation techniques (including the stochastic simulation algorithm, Poisson Runge-Kutta methods and the balanced Euler method) for treating simulations in the three different reaction regimes: slow, medium and fast. We then review some recent techniques on the treatment of coupled slow and fast reactions for stochastic chemical kinetics and present a new approach which couples the three regimes mentioned above. We then apply this approach to a biologically inspired problem involving the expression and activity of LacZ and LacY proteins in E. coli, and conclude with a discussion on the significance of this work. Copyright 2004 Elsevier Ltd.

  14. Intermediate scattering function of an anisotropic active Brownian particle

    PubMed Central

    Kurzthaler, Christina; Leitmann, Sebastian; Franosch, Thomas

    2016-01-01

    Various challenges are faced when animalcules such as bacteria, protozoa, algae, or sperm move autonomously in aqueous media at low Reynolds number. These active agents are subject to strong stochastic fluctuations that compete with the directed motion. So far most studies consider only the lowest-order moments of the displacements, while more general spatio-temporal information on the stochastic motion is provided in scattering experiments. Here we derive analytically exact expressions for the directly measurable intermediate scattering function for a mesoscopic model of a single, anisotropic active Brownian particle in three dimensions. The mean-square displacement and the non-Gaussian parameter of the stochastic process are obtained as derivatives of the intermediate scattering function. These display different temporal regimes dominated by effective diffusion and directed motion due to the interplay of translational and rotational diffusion, which is rationalized within the theory. The most prominent feature of the intermediate scattering function is an oscillatory behavior at intermediate wavenumbers reflecting the persistent swimming motion, whereas at small length scales bare translational diffusion and at large length scales an enhanced effective diffusion emerges. We anticipate that our characterization of the motion of active agents will serve as a reference for more realistic models and experimental observations. PMID:27830719

  15. Intermediate scattering function of an anisotropic active Brownian particle.

    PubMed

    Kurzthaler, Christina; Leitmann, Sebastian; Franosch, Thomas

    2016-10-10

    Various challenges are faced when animalcules such as bacteria, protozoa, algae, or sperm move autonomously in aqueous media at low Reynolds number. These active agents are subject to strong stochastic fluctuations that compete with the directed motion. So far most studies consider only the lowest-order moments of the displacements, while more general spatio-temporal information on the stochastic motion is provided in scattering experiments. Here we derive analytically exact expressions for the directly measurable intermediate scattering function for a mesoscopic model of a single, anisotropic active Brownian particle in three dimensions. The mean-square displacement and the non-Gaussian parameter of the stochastic process are obtained as derivatives of the intermediate scattering function. These display different temporal regimes dominated by effective diffusion and directed motion due to the interplay of translational and rotational diffusion, which is rationalized within the theory. The most prominent feature of the intermediate scattering function is an oscillatory behavior at intermediate wavenumbers reflecting the persistent swimming motion, whereas at small length scales bare translational diffusion and at large length scales an enhanced effective diffusion emerges. We anticipate that our characterization of the motion of active agents will serve as a reference for more realistic models and experimental observations.

  16. Stochastic processes on multiple scales: averaging, decimation and beyond

    NASA Astrophysics Data System (ADS)

    Bo, Stefano; Celani, Antonio

    Recent advances in handling microscopic systems are increasingly motivating stochastic modeling of a large number of physical, chemical and biological phenomena. Relevant processes often take place on widely separated time scales. In order to simplify the description, one usually focuses on the slower degrees of freedom and retains only the average effect of the fast ones. It is then fundamental to eliminate such fast variables in a controlled fashion, carefully accounting for their net effect on the slower dynamics. We shall present how this can be done by either decimating or coarse-graining the fast processes and discuss applications to physical, biological and chemical examples. With the same tools we will address the fate of functionals of the stochastic trajectories (such as residence times, counting statistics, fluxes, entropy production, etc.) upon elimination of the fast variables. In general, for functionals, such elimination can present additional difficulties. In some cases, it is not possible to express them in terms of the effective trajectories on the slow degrees of freedom; additional details of the fast processes must be retained. We will focus on such cases and show how naive procedures can lead to inconsistent results.
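
    A minimal numerical picture of fast-variable elimination (an illustrative sketch under assumed dynamics, not the talk's formalism): a slow coordinate driven by a fast Ornstein-Uhlenbeck variable whose elimination, by homogenization, leaves an effective Brownian motion for the slow coordinate.

```python
import math
import random

def two_scale_path(eps=0.01, dt=0.001, n_steps=20_000, seed=4):
    """Euler-Maruyama simulation of the slow-fast system
        dx = y dt,
        dy = -(y / eps) dt + (1 / sqrt(eps)) dW,
    where the fast Ornstein-Uhlenbeck variable y decorrelates on the
    short time scale eps.  Averaging out y leaves an effective
    Brownian motion for the slow variable x.  Returns the slow path."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [x]
    noise = math.sqrt(dt / eps)  # sqrt(dt) * (1 / sqrt(eps))
    for _ in range(n_steps):
        x += y * dt
        y += -(y / eps) * dt + noise * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

    Plain observables of x are reproduced by the effective diffusion; as the abstract warns, trajectory functionals (residence times, fluxes, entropy production) may require keeping more information about y than this reduced description retains.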

  17. Multi-Frequency Signal Detection Based on Frequency Exchange and Re-Scaling Stochastic Resonance and Its Application to Weak Fault Diagnosis

    PubMed Central

    Leng, Yonggang; Fan, Shengbo

    2018-01-01

    Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal through the existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal as well as its second and higher harmonics frequencies tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of "sampling ratio" which is the ratio of the sampling frequency to the frequency of target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method. PMID:29693577
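
    The small-parameter bistable system that stochastic-resonance detectors (including re-scaling variants such as the method above) ultimately drive can be sketched with a Langevin simulation. The double-well form and every parameter value below are illustrative assumptions, not the FERSR method itself.

```python
import math
import random

def bistable_sr(a=1.0, b=1.0, amp=0.3, freq=0.01, noise=0.5,
                dt=0.01, n_steps=20_000, seed=2):
    """Euler-Maruyama simulation of the overdamped double-well system
        dx = (a*x - b*x**3 + amp*cos(2*pi*freq*t)) dt + sqrt(2*D) dW,
    the classic setting in which noise-assisted hopping between the
    wells amplifies a weak, slow periodic signal."""
    rng = random.Random(seed)
    x, t = 1.0, 0.0
    path = []
    kick = math.sqrt(2.0 * noise * dt)  # diffusion increment scale
    for _ in range(n_steps):
        drift = a * x - b * x ** 3 + amp * math.cos(2.0 * math.pi * freq * t)
        x += drift * dt + kick * rng.gauss(0.0, 1.0)
        t += dt
        path.append(x)
    return path
```

    SR only works when the driving frequency is small relative to the well-hopping rate; frequency exchange and re-scaling, as proposed in the paper, are ways of mapping a large-frequency target signal into this small-parameter regime.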

  18. Intermediate scattering function of an anisotropic active Brownian particle

    NASA Astrophysics Data System (ADS)

    Kurzthaler, Christina; Leitmann, Sebastian; Franosch, Thomas

    2016-10-01

    Various challenges are faced when animalcules such as bacteria, protozoa, algae, or sperm move autonomously in aqueous media at low Reynolds number. These active agents are subject to strong stochastic fluctuations that compete with the directed motion. So far most studies consider only the lowest-order moments of the displacements, while more general spatio-temporal information on the stochastic motion is provided in scattering experiments. Here we derive analytically exact expressions for the directly measurable intermediate scattering function for a mesoscopic model of a single, anisotropic active Brownian particle in three dimensions. The mean-square displacement and the non-Gaussian parameter of the stochastic process are obtained as derivatives of the intermediate scattering function. These display different temporal regimes dominated by effective diffusion and directed motion due to the interplay of translational and rotational diffusion, which is rationalized within the theory. The most prominent feature of the intermediate scattering function is an oscillatory behavior at intermediate wavenumbers reflecting the persistent swimming motion, whereas at small length scales bare translational diffusion and at large length scales an enhanced effective diffusion emerges. We anticipate that our characterization of the motion of active agents will serve as a reference for more realistic models and experimental observations.

  19. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    NASA Astrophysics Data System (ADS)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

    A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising not only for gas storage in MOFs but also for many other materials-science projects.

  20. The statistics of primordial density fluctuations

    NASA Astrophysics Data System (ADS)

    Barrow, John D.; Coles, Peter

    1990-05-01

    The statistical properties of the density fluctuations produced by power-law inflation are investigated. It is found that, even if the fluctuations present in the scalar field driving the inflation are Gaussian, the resulting density perturbations need not be, due to stochastic variations in the Hubble parameter. All the moments of the density fluctuations are calculated, and it is argued that, for realistic parameter choices, the departures from Gaussian statistics are small and would have a negligible effect on the large-scale structure produced in the model. On the other hand, the model predicts a power spectrum with n not equal to 1, and this could be good news for large-scale structure.

  1. Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes

    NASA Astrophysics Data System (ADS)

    Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.

    2016-12-01

    The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While considerable effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible owing to the use of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models. 
By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.

  2. Modeling the spreading of large-scale wildland fires

    Treesearch

    Mohamed Drissi

    2015-01-01

    The objective of the present study is twofold. First, the latest developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning...

  3. Application of stochastic models in identification and apportionment of heavy metal pollution sources in the surface soils of a large-scale region.

    PubMed

    Hu, Yuanan; Cheng, Hefa

    2013-04-16

    As heavy metals occur naturally in soils at measurable concentrations and their natural background contents have significant spatial variations, identification and apportionment of heavy metal pollution sources across large-scale regions is a challenging task. Stochastic models, including the recently developed conditional inference tree (CIT) and the finite mixture distribution model (FMDM), were applied to identify the sources of heavy metals found in the surface soils of the Pearl River Delta, China, and to apportion the contributions from natural background and human activities. Regression trees were successfully developed for the concentrations of Cd, Cu, Zn, Pb, Cr, Ni, As, and Hg in 227 soil samples from a region of over 7.2 × 10⁴ km² based on seven specific predictors relevant to the source and behavior of heavy metals: land use, soil type, soil organic carbon content, population density, gross domestic product per capita, and the lengths and classes of the roads surrounding the sampling sites. The CIT and FMDM results consistently indicate that Cd, Zn, Cu, Pb, and Cr in the surface soils of the PRD were contributed largely by anthropogenic sources, whereas As, Ni, and Hg in the surface soils mostly originated from the soil parent materials.
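
    The finite-mixture idea behind the FMDM can be illustrated with a toy two-component Gaussian mixture fitted by expectation-maximization. This is a hedged sketch of the general technique only, not the paper's model: the covariates, component count and fitting details of the actual study are not reproduced, and all data and parameters below are invented for illustration.

```python
import math
import random

def gauss_pdf(x, mu, var):
    """Gaussian probability density."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_two_gaussians(data, iters=100):
    """Fit a two-component 1-D Gaussian mixture by EM (toy FMDM stand-in):
    one component for natural background, one for anthropogenic enrichment."""
    mu = [min(data), max(data)]       # crude but effective initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for x in data:
            p = [w[k] * gauss_pdf(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return w, mu, var

# demo on synthetic data: a "background" source near 0 and an "anthropogenic"
# source near 10 (all numbers invented for illustration)
rng = random.Random(1)
data = ([rng.gauss(0.0, 1.0) for _ in range(200)]
        + [rng.gauss(10.0, 1.0) for _ in range(200)])
weights, means, variances = em_two_gaussians(data)
```

    The fitted responsibilities then apportion each sample between the two hypothesized sources, which is the mixture-model analogue of the background-versus-anthropogenic apportionment described above.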

  4. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-03-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based spiking model neurons with modifiable synaptic connection strengths, which may also be subject to external noisy currents. The network dynamics are controlled by a set of neural population probability distributions (PPD) constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact, non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have been recently derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, show that the statistical measures obtained from PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of FitzHugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.

  5. Dynamics Under Location Uncertainty: Model Derivation, Modified Transport and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.

    2017-12-01

    In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way, and physically based stochastic subgrid parametrizations are promising paths. This talk proposes a new parametrization of this kind, referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated, whereas the second is white-in-time but spatially correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any (possibly active) tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise, and a possibly heterogeneous and anisotropic diffusion. This parametrization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the quasi-geostrophic assumptions in our framework. Depending on the amount of turbulence, different models with different physical behaviors are obtained. Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations, and the amplitudes and positions of the simulation errors.
    Figure 1 highlights this last result and compares it to the strong error underestimation of an ensemble simulated from the deterministic dynamics with random initial conditions.

  6. Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.

    PubMed

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2017-01-01

    Exact stochastic simulation is an indispensable tool for the quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire, with probability proportional to the reaction propensity, and updating the system state accordingly. Two computationally expensive tasks in simulating large biochemical networks are the selection of the next reaction firing and the update of reaction propensities due to state changes. We present in this work a new exact algorithm that optimizes both of these simulation bottlenecks. Our algorithm employs composition-rejection sampling on the propensity bounds of reactions to select the next reaction firing. The selection of the next reaction firing is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against state-of-the-art algorithms in the literature to demonstrate its applicability and efficiency.
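
    The selection step can be sketched as follows. This is a minimal illustration of the composition-rejection idea only: for clarity it uses a linear composition search, whereas the constant-time behavior described in the abstract additionally requires grouping reactions into bins by bound magnitude, which is omitted here.

```python
import random

def select_reaction(bounds, propensity, rng=random):
    """Composition-rejection selection of the next reaction firing.

    bounds[i] must be an upper bound on the current propensity of reaction i.
    A sketch only: the full algorithm bins reactions by bound magnitude to
    make this selection constant-time."""
    total = sum(bounds)
    while True:
        # composition: pick candidate i with probability bounds[i] / total
        r = rng.random() * total
        i, acc = 0, bounds[0]
        while r >= acc:
            i += 1
            acc += bounds[i]
        # rejection: accept i with probability propensity(i) / bounds[i],
        # so accepted draws follow the exact propensities
        if rng.random() * bounds[i] < propensity(i):
            return i
```

    When the exact propensities are supplied as their own bounds, the rejection step always accepts and the choice reduces to ordinary SSA selection; looser bounds trade extra rejections for fewer propensity updates.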

  7. A Macroscopic Multifractal Analysis of Parabolic Stochastic PDEs

    NASA Astrophysics Data System (ADS)

    Khoshnevisan, Davar; Kim, Kunwoo; Xiao, Yimin

    2018-05-01

    It is generally argued that the solution to a stochastic PDE with multiplicative noise—such as ∂u/∂t = (1/2)u'' + uξ, where ξ denotes space-time white noise—routinely produces exceptionally large peaks that are "macroscopically multifractal." See, for example, Gibbon and Doering (Arch Ration Mech Anal 177:115-150, 2005), Gibbon and Titi (Proc R Soc A 461:3089-3097, 2005), and Zimmermann et al. (Phys Rev Lett 85(17):3612-3615, 2000). A few years ago, we proved that the spatial peaks of the solution to the mentioned stochastic PDE indeed form a random multifractal in the macroscopic sense of Barlow and Taylor (J Phys A 22(13):2621-2626, 1989; Proc Lond Math Soc (3) 64:125-152, 1992). The main result of the present paper is a proof of a rigorous formulation of the assertion that the spatio-temporal peaks of the solution form infinitely many different multifractals on infinitely many different scales, which we sometimes refer to as "stretch factors." A simpler, though still complex, such structure is shown to also exist for the constant-coefficient version of the said stochastic PDE.

  9. How large a dataset should be in order to estimate scaling exponents and other statistics correctly in studies of solar wind turbulence

    NASA Astrophysics Data System (ADS)

    Rowlands, G.; Kiyani, K. H.; Chapman, S. C.; Watkins, N. W.

    2009-12-01

    Quantitative analysis of solar wind fluctuations is often performed in the context of intermittent turbulence and centers on methods to quantify statistical scaling, such as power spectra and structure functions, which assume a stationary process. The solar wind exhibits large-scale secular changes, so the question arises as to whether the time series of the fluctuations is non-stationary. One approach is to seek local stationarity by parsing the time interval over which statistical analysis is performed; natural systems such as the solar wind thus unavoidably provide observations over restricted intervals. Consequently, due to the reduction of sample size and the resulting poorer estimates, a stationary stochastic process (time series) can yield anomalous time variation in the scaling exponents, suggestive of non-stationarity. The variance in the estimates of scaling exponents computed from an interval of N observations is known, for finite-variance processes and certain statistical estimators, to vary as ~1/N as N becomes large; however, the convergence to this behavior depends on the details of the process, and may be slow. We study the variation in the scaling of second-order moments of the time-series increments with N for a variety of synthetic and “real world” time series, and we find that, in particular for heavy-tailed processes and realizable N, one is far from this ~1/N limiting behavior. We propose a semiempirical estimate for the minimum N needed to make a meaningful estimate of the scaling exponents for model stochastic processes, and compare these with some “real world” time series from the solar wind. With fewer data points a stationary time series becomes indistinguishable from a non-stationary process, and we illustrate this with non-stationary synthetic datasets. Reference article: K. H. Kiyani, S. C. Chapman and N. W. Watkins, Phys. Rev. E 79, 036109 (2009).
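
    The ~1/N shrinkage of estimator variance for a finite-variance process can be checked with a quick synthetic experiment. This is an illustrative sketch only, not the authors' semiempirical estimator: it uses Gaussian increments, for which convergence is fast, precisely the regime the abstract contrasts with heavy-tailed processes.

```python
import random
import statistics

rng = random.Random(42)

def second_moment_estimate(n):
    """Sample second moment from an interval of n Gaussian increments."""
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n)) / n

def estimator_variance(n, trials=500):
    """Spread of the estimate across independent intervals of length n."""
    return statistics.variance(second_moment_estimate(n) for _ in range(trials))
```

    For this well-behaved process, quadrupling the interval length roughly quarters the estimator variance; repeating the experiment with heavy-tailed increments would show the much slower approach to the ~1/N limit discussed above.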

  10. The Schrödinger-Poisson equations as the large-N limit of the Newtonian N-body system: applications to the large scale dark matter dynamics

    NASA Astrophysics Data System (ADS)

    Briscese, Fabio

    2017-09-01

    In this paper it is argued that the dynamics of the classical Newtonian N-body system can be described in terms of the Schrödinger-Poisson equations in the large-N limit. This result is based on the stochastic quantization introduced by Nelson, and on the Calogero conjecture. According to the Calogero conjecture, the emerging effective Planck constant is computed in terms of the parameters of the N-body system as ħ ~ M^(5/3) G^(1/2) (N/⟨ρ⟩)^(1/6), where G is the gravitational constant, N and M are the number and the mass of the bodies, and ⟨ρ⟩ is their average density. The relevance of this result in the context of large-scale structure formation is discussed. In particular, this finding gives a further argument in support of the validity of the Schrödinger method as a numerical double of the N-body simulations of dark matter dynamics at large cosmological scales.
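
    The quoted scaling can be written as a one-line function. Note that it is an order-of-magnitude relation, so any order-one prefactor is dropped here, and the inputs are placeholders rather than values from the paper.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def hbar_eff(M, N, rho_avg):
    """Order-of-magnitude effective Planck constant quoted in the abstract:
    hbar_eff ~ M**(5/3) * G**(1/2) * (N / <rho>)**(1/6).
    Order-one prefactors are dropped; arguments are assumed in SI units."""
    return M ** (5 / 3) * G ** 0.5 * (N / rho_avg) ** (1 / 6)
```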

  11. A continuum dislocation dynamics framework for plasticity of polycrystalline materials

    NASA Astrophysics Data System (ADS)

    Askari, Hesam Aldin

    The objective of this research is to investigate the mechanical response of polycrystals in different settings to identify the mechanisms that give rise to the specific responses observed in the deformation process. In particular, the large deformation of magnesium alloys and the yield properties of copper at small scales are investigated. We develop a continuum dislocation dynamics framework based on dislocation mechanisms and interaction laws, and implement this formulation in a viscoplastic self-consistent scheme to obtain the mechanical response of a polycrystalline system. The versatility of this method allows various applications in the study of problems involving large deformation, microstructure and its evolution, superplasticity, size effects in polycrystals, and stochastic plasticity. The findings from the numerical solution are compared to experimental results to validate the simulation. We apply this framework to study the deformation mechanisms in magnesium alloys at moderate to fast strain rates and from room temperature to 450 °C. Experiments for the same range of strain rates and temperatures were carried out to obtain the mechanical and material properties and to compare with the numerical results. The numerical approach for magnesium is divided into four main steps: 1) room-temperature unidirectional loading; 2) high-temperature deformation without grain boundary sliding; 3) high-temperature deformation with grain boundary sliding; 4) room-temperature cyclic loading. We demonstrate the capability of our modeling approach in predicting mechanical properties and texture evolution, and discuss the improvement obtained by using the continuum dislocation dynamics method. The framework was also applied to nano-sized copper polycrystals to study the yield properties at small scales and address the observed yield scatter.
    By combining our method with a Monte Carlo simulation approach, the stochastic plasticity at small length scales was studied and the sources of uncertainty in the polycrystalline structure are discussed. Our results suggest that the stochastic response arises mainly from a) stochastic plasticity due to the dislocation substructure inside crystals and b) the microstructure of the polycrystalline material. The extent of the uncertainty is correlated with the "effective cell length" of the sampling procedure, in both the simulation and the experimental approach.

  12. Quantitative Missense Variant Effect Prediction Using Large-Scale Mutagenesis Data.

    PubMed

    Gray, Vanessa E; Hause, Ronald J; Luebeck, Jens; Shendure, Jay; Fowler, Douglas M

    2018-01-24

    Large datasets describing the quantitative effects of mutations on protein function are becoming increasingly available. Here, we leverage these datasets to develop Envision, which predicts the magnitude of a missense variant's molecular effect. Envision combines 21,026 variant effect measurements from nine large-scale experimental mutagenesis datasets, a hitherto untapped training resource, with a supervised, stochastic gradient boosting learning algorithm. Envision outperforms other missense variant effect predictors both on large-scale mutagenesis data and on an independent test dataset comprising 2,312 TP53 variants whose effects were measured using a low-throughput approach. This dataset was never used for hyperparameter tuning or model training and thus serves as an independent validation set. Envision prediction accuracy is also more consistent across amino acids than other predictors. Finally, we demonstrate that Envision's performance improves as more large-scale mutagenesis data are incorporated. We precompute Envision predictions for every possible single amino acid variant in human, mouse, frog, zebrafish, fruit fly, worm, and yeast proteomes (https://envision.gs.washington.edu/). Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.

  14. Self-tuning stochastic resonance energy harvester for smart tires

    NASA Astrophysics Data System (ADS)

    Kim, Hongjip; Tai, Wei Che; Zuo, Lei

    2018-03-01

    Energy harvesting from smart tires has been an active research topic for several years. In this paper, we propose a novel energy harvester for smart tires that takes advantage of adaptively tuned stochastic resonance. Compared to previous tire energy harvesters, it generates larger power over a wider bandwidth: large power is achieved through stochastic resonance, while wide bandwidth is accomplished by adaptive tuning via the centrifugal stiffening effect. The energy harvesting configuration for modulated noise is described first: an electromagnetic harvester consisting of a rotating beam subject to centrifugal buckling. The equation of motion of the harvester is derived to investigate the effect of centrifugal stiffening, and numerical analysis is conducted to simulate its response. The results show that high power is achieved over a wide bandwidth. To verify the theoretical and simulation results, an experiment was conducted on an equivalent horizontal rotating platform built to mimic the tire environment. The experimental results agree with the numerical results to within about 10% error, which verifies the feasibility of the proposed harvester. A maximum power of 1.8 mW was achieved with a 3:1-scale experimental setup. The equivalent working range of the harvester is about 60-105 km/h, typical car speeds on ordinary roads and highways.
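
    The stochastic-resonance mechanism the harvester exploits can be illustrated with a generic bistable (double-well) Langevin model: without noise the oscillator stays trapped in one well, while sufficient noise drives the large inter-well excursions that deliver power. This sketch is not the paper's beam model; the forcing and noise parameters are invented for illustration.

```python
import math
import random

def double_well_path(sigma, steps=100_000, dt=0.01, seed=7):
    """Euler-Maruyama path of dx = (x - x**3) dt + A cos(w t) dt + sigma dW:
    a generic bistable (double-well) oscillator under weak, slow forcing.
    All parameters are illustrative; this is not the paper's beam model."""
    rng = random.Random(seed)
    A, w = 0.2, 0.05              # weak, slow periodic forcing
    x, xs = 1.0, []
    for k in range(steps):
        t = k * dt
        x += ((x - x ** 3 + A * math.cos(w * t)) * dt
              + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        xs.append(x)
    return xs
```

    With sigma = 0 the forcing alone is too weak to cross the barrier, so the path stays in the starting well; moderate noise unlocks inter-well jumps, which is the regime a stochastic-resonance harvester is tuned into.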

  15. Scale-invariance underlying the logistic equation and its social applications

    NASA Astrophysics Data System (ADS)

    Hernando, A.; Plastino, A.

    2013-01-01

    On the basis of dynamical principles we i) advance a derivation of the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and ii) demonstrate that scale-invariance and a mean-value constraint are sufficient and necessary conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city populations, diffusion in complex networks, and the popularity of technological products, all of them obeying the multi-component logistic equation in either a stochastic or a deterministic way.
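
    For concreteness, here is a minimal Euler discretization of the single-component LE, dx/dt = r x (1 - x/K), with an optional multiplicative-noise term standing in for the stochastic variant mentioned above; the parameter values are illustrative, not taken from the paper.

```python
import random

def logistic_step(x, r, K, dt, sigma=0.0, rng=random):
    """One Euler step of the logistic equation dx/dt = r x (1 - x/K);
    sigma > 0 adds a multiplicative-noise term for the stochastic variant."""
    drift = r * x * (1.0 - x / K)
    return x + drift * dt + sigma * x * rng.gauss(0.0, 1.0) * dt ** 0.5

# deterministic variant: the population approaches the carrying capacity K
x = 1.0
for _ in range(5000):
    x = logistic_step(x, r=1.0, K=100.0, dt=0.01)
```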

  16. The slow-scale linear noise approximation: an accurate, reduced stochastic description of biochemical networks under timescale separation conditions

    PubMed Central

    2012-01-01

    Background It is well known that the deterministic dynamics of biochemical reaction networks can be more easily studied if timescale separation conditions are invoked (the quasi-steady-state assumption). In this case the deterministic dynamics of a large network of elementary reactions are well described by the dynamics of a smaller network of effective reactions. Each of the latter represents a group of elementary reactions in the large network and has associated with it an effective macroscopic rate law. A popular method to achieve model reduction in the presence of intrinsic noise consists of using the effective macroscopic rate laws to heuristically deduce effective probabilities for the effective reactions, which then enables simulation via the stochastic simulation algorithm (SSA). The validity of this heuristic SSA method is a priori doubtful because the reaction probabilities for the SSA have only been rigorously derived from microscopic physics arguments for elementary reactions. Results We here obtain, by rigorous means and in closed form, a reduced linear Langevin equation description of the stochastic dynamics of monostable biochemical networks in conditions characterized by small intrinsic noise and timescale separation. The slow-scale linear noise approximation (ssLNA), as the new method is called, is used to calculate the intrinsic noise statistics of enzyme and gene networks. The results agree very well with SSA simulations of the non-reduced network of elementary reactions. In contrast, the conventional heuristic SSA is shown to overestimate the size of noise for Michaelis-Menten kinetics, considerably underestimate the size of noise for Hill-type kinetics, and in some cases even miss the prediction of noise-induced oscillations. Conclusions A new general method, the ssLNA, is derived and shown to correctly describe the statistics of intrinsic noise about the macroscopic concentrations under timescale separation conditions.
The ssLNA provides a simple and accurate means of performing stochastic model reduction and hence it is expected to be of widespread utility in studying the dynamics of large noisy reaction networks, as is common in computational and systems biology. PMID:22583770
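
    For reference, the exact (non-reduced) SSA that the ssLNA is benchmarked against can be sketched for the simplest possible network, a birth-death process. This toy is not the enzyme or gene networks studied in the paper; it only shows the simulation loop whose cost motivates reduced descriptions such as the ssLNA.

```python
import random

def ssa_birth_death(k_in, k_out, n0, t_end, rng):
    """Exact SSA (Gillespie) for the elementary network
    0 -> X (rate k_in) and X -> 0 (rate k_out * n)."""
    t, n = 0.0, n0
    while True:
        a_birth, a_death = k_in, k_out * n
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)   # waiting time to the next firing
        if t >= t_end:
            return n                    # state is constant between firings
        if rng.random() * a_total < a_birth:
            n += 1
        else:
            n -= 1
```

    Averaging many such runs recovers the stationary mean k_in/k_out with Poissonian noise about it, the kind of intrinsic-noise statistic the ssLNA computes in closed form instead.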

  17. Multiple-scale stochastic processes: Decimation, averaging and beyond

    NASA Astrophysics Data System (ADS)

    Bo, Stefano; Celani, Antonio

    2017-02-01

    Recent experimental progress in handling microscopic systems has made it possible to probe them at levels where fluctuations are prominent, calling for stochastic modeling in a large number of physical, chemical and biological phenomena. This has provided fruitful applications for established stochastic methods and motivated further developments. These systems often involve processes taking place on widely separated time scales. For efficient modeling one usually focuses on the slower degrees of freedom, and it is of great importance to accurately eliminate the fast variables in a controlled fashion, carefully accounting for their net effect on the slower dynamics. This procedure in general requires performing two different operations: decimation and coarse-graining. We introduce the asymptotic methods that form the basis of this procedure and discuss their application to a series of physical, biological and chemical examples. We then turn our attention to functionals of the stochastic trajectories, such as residence times, counting statistics, fluxes and entropy production, which have been increasingly studied in recent years. For such functionals, the elimination of the fast degrees of freedom can present additional difficulties, and naive procedures can lead to blatantly inconsistent results. Homogenization techniques for functionals are less covered in the literature and we will pedagogically present them here, as natural extensions of the ones employed for the trajectories. We will also discuss recent applications of these techniques to the thermodynamics of small systems and their interpretation in terms of information-theoretic concepts.
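
    The net effect of a fast variable on the slow dynamics can be demonstrated with a deterministic toy: a slow relaxation driven by a rapidly oscillating forcing whose average vanishes, so the effective slow equation drops the forcing entirely. This sketch is illustrative only and is not one of the paper's examples.

```python
import math

def slow_with_fast_forcing(eps, dt=1e-4, t_end=2.0):
    """Euler integration of dx/dt = -x + sin(t/eps). The forcing lives on
    the fast scale eps; averaging predicts the effective slow dynamics
    dx/dt = -x as eps -> 0, i.e. x(t_end) -> exp(-t_end)."""
    x, t = 1.0, 0.0
    while t < t_end:
        x += (-x + math.sin(t / eps)) * dt
        t += dt
    return x
```

    As eps shrinks, the simulated endpoint approaches the averaged prediction exp(-t_end), while for modest scale separation the fast forcing still leaves a visible imprint on the slow variable.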

  18. Final Report. Analysis and Reduction of Complex Networks Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef M.; Coles, T.; Spantini, A.

    2013-09-30

    The project was a collaborative effort among MIT, Sandia National Laboratories (local PI Dr. Habib Najm), the University of Southern California (local PI Prof. Roger Ghanem), and The Johns Hopkins University (local PI Prof. Omar Knio, now at Duke University). Our focus was the analysis and reduction of large-scale dynamical systems emerging from networks of interacting components. Such networks underlie myriad natural and engineered systems. Examples important to DOE include chemical models of energy conversion processes, and elements of national infrastructure, e.g., electric power grids. Time scales in chemical systems span orders of magnitude, while infrastructure networks feature both local and long-distance connectivity, with associated clusters of time scales. These systems also blend continuous and discrete behavior; examples include saturation phenomena in surface chemistry and catalysis, and switching in electrical networks. Reducing size and stiffness is essential to tractable and predictive simulation of these systems. Computational singular perturbation (CSP) has been effectively used to identify and decouple dynamics at disparate time scales in chemical systems, allowing reduction of model complexity and stiffness. In realistic settings, however, model reduction must contend with uncertainties, which are often greatest in the large-scale systems most in need of reduction. Uncertainty is not limited to parameters; one must also address structural uncertainties, e.g., whether a link is present in a network, and the impact of random perturbations, e.g., fluctuating loads or sources. Research under this project developed new methods for the analysis and reduction of complex multiscale networks under uncertainty, by combining computational singular perturbation (CSP) with probabilistic uncertainty quantification. CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves.
    Introducing uncertainty in this context raised fundamentally new issues, e.g., how is the topology of slow manifolds transformed by parametric uncertainty? How should dynamical models be constructed on these uncertain manifolds? To address these questions, we used stochastic spectral polynomial chaos (PC) methods to reformulate uncertain network models and analyzed them using CSP in probabilistic terms. Finding uncertain manifolds involved the solution of stochastic eigenvalue problems, facilitated by projection onto PC bases. These problems motivated us to explore the spectral properties of stochastic Galerkin systems. We also introduced novel methods for rank reduction in stochastic eigensystems: transformations of an uncertain dynamical system that lead to lower storage and solution complexity. These technical accomplishments are detailed below. This report focuses on the MIT portion of the joint project.

  19. Stochastic assembly in a subtropical forest chronosequence: evidence from contrasting changes of species, phylogenetic and functional dissimilarity over succession.

    PubMed

    Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping

    2016-09-07

    Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.

  20. Signaling in large-scale neural networks.

    PubMed

    Berg, Rune W; Hounsgaard, Jørn

    2009-02-01

    We examine the recent finding that neurons in spinal motor circuits enter a high conductance state during functional network activity. The underlying concomitant increase in random inhibitory and excitatory synaptic activity leads to stochastic signal processing. The possible advantages of this metabolically costly organization are analyzed by comparing with synaptically less intense networks driven by the intrinsic response properties of the network neurons.

  1. The Influence of Stochastic Perturbation of Geotechnical Media on Electromagnetic Tomography

    NASA Astrophysics Data System (ADS)

    Song, Lei; Yang, Weihao; Huangsonglei, Jiahui; Li, HaiPeng

    2015-04-01

    Electromagnetic tomography (CT) is commonly utilized in civil engineering to detect structural defects or geological anomalies. CT is generally recognized as a high-precision geophysical method, with expected accuracies of several centimeters or even several millimeters, so high-frequency antennas with short wavelengths are commonly used in civil engineering. In geotechnical media, stochastic perturbations of the EM parameters inevitably exist at geological, structural and local scales. In such cases, the geometric dimensions of the target body, the EM wavelength and the expected accuracy may be of the same order. When a high-frequency EM wave propagates in a stochastic geotechnical medium, the GPR signal is reflected not only from the target bodies but also from the stochastic perturbations of the background medium. To detect karst caves in dissolution-fractured rock, one needs to assess the influence of the stochastically distributed dissolution holes and fractures; to detect a void in a concrete structure, one should understand the influence of the stochastically distributed stones. In this paper, on the basis of discrete realizations of stochastic media, the authors quantitatively evaluate the influence of the stochastic perturbation of geotechnical media via Radon/inverse Radon transforms through fully combined Monte Carlo numerical simulation. It is found that the stochastic noise is related to the transmission angle, perturbation strength, angle interval, autocorrelation length, etc. A quantitative formula for the accuracy of electromagnetic tomography is also established, which can help in the precision estimation of GPR tomography in stochastically perturbed geotechnical media. Key words: Stochastic Geotechnical Media; Electromagnetic Tomography; Radon/Inverse Radon Transform.

  2. Scaling and criticality in a stochastic multi-agent model of a financial market

    NASA Astrophysics Data System (ADS)

    Lux, Thomas; Marchesi, Michele

    1999-02-01

    Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way - from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent `efficient market hypothesis' in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the `input' signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the `news arrival process' in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents.

  3. Amplitudes and Anisotropies at Kinetic Scales in Reflection-Driven Turbulence

    NASA Astrophysics Data System (ADS)

    Chandran, B. D. G.; Perez, J. C.

    2016-12-01

    The dissipation processes in solar-wind turbulence depend critically on the amplitudes and anisotropies of the fluctuations at kinetic scales. For example, the efficiencies of nonlinear dissipation mechanisms such as stochastic heating are a strongly increasing function of the kinetic-scale fluctuation amplitudes. In addition, ``slab-like'' fluctuations that vary most rapidly parallel to the background magnetic field dissipate very differently than ``quasi-2D'' fluctuations that vary most rapidly perpendicular to the magnetic field. Both the amplitudes and anisotropies of the kinetic-scale fluctuations are heavily influenced by the cascade mechanisms and spectral scalings in the inertial range of the turbulence. More precisely, the properties and dynamics of the turbulence within the inertial range (at ``fluid length scales'') to a large extent determine the amplitudes and anisotropies of the fluctuations at the proton kinetic scales, which bound the inertial range from below. In this presentation I will describe recent work by Jean Perez and myself on direct numerical simulations of non-compressive turbulence at ``fluid length scales'' between the Sun and a heliocentric distance of 65 solar radii. These simulations account for the non-WKB reflection of outward-propagating Alfven-wave-like fluctuations. This partial reflection produces Sunward-propagating fluctuations, which interact with the outward-propagating fluctuations to produce turbulence and a cascade of energy from large scales to small scales. I will discuss the relative strength of the parallel and perpendicular energy cascades in our simulations, and the implications of our results for the spatial anisotropies of non-compressive fluctuations at the proton kinetic scales near the Sun. I will also present results on the parallel and perpendicular power spectra of both outward-propagating and inward-propagating Alfven-wave-like fluctuations at different heliocentric distances. I will discuss the implications of these inertial-range spectra for the relative importance of cyclotron heating, stochastic heating, and Landau damping.

  4. A hybrid multiscale Monte Carlo algorithm (HyMSMC) to cope with disparity in time scales and species populations in intracellular networks.

    PubMed

    Samant, Asawari; Ogunnaike, Babatunde A; Vlachos, Dionisios G

    2007-05-24

    The fundamental role that intrinsic stochasticity plays in cellular functions has been shown via numerous computational and experimental studies. In the face of such evidence, it is important that intracellular networks are simulated with stochastic algorithms that can capture molecular fluctuations. However, separation of time scales and disparity in species population, two common features of intracellular networks, make stochastic simulation of such networks computationally prohibitive. While recent work has addressed each of these challenges separately, a generic algorithm that can simultaneously tackle disparity in time scales and population scales in stochastic systems is currently lacking. In this paper, we propose the hybrid, multiscale Monte Carlo (HyMSMC) method that fills this void. The proposed HyMSMC method blends stochastic singular perturbation concepts, to deal with potential stiffness, with a hybrid of exact and coarse-grained stochastic algorithms, to cope with separation in population sizes. In addition, we introduce the computational singular perturbation (CSP) method as a means of systematically partitioning fast and slow networks and computing relaxation times for convergence. We also propose a new criterion for the convergence of fast networks to stochastic low-dimensional manifolds, which further accelerates the algorithm. We use several prototype and biological examples, including a gene expression model displaying bistability, to demonstrate the efficiency, accuracy and applicability of the HyMSMC method. Bistable models serve as stringent tests for the success of multiscale MC methods and illustrate limitations of some literature methods.

  5. Effect of weak rotation on large-scale circulation cessations in turbulent convection.

    PubMed

    Assaf, Michael; Angheluta, Luiza; Goldenfeld, Nigel

    2012-08-17

    We investigate the effect of weak rotation on the large-scale circulation (LSC) of turbulent Rayleigh-Bénard convection, using the theory for cessations in a low-dimensional stochastic model of the flow previously studied. We determine the cessation frequency of the LSC as a function of rotation, and calculate the statistics of the amplitude and azimuthal velocity fluctuations of the LSC as a function of the rotation rate for different Rayleigh numbers. Furthermore, we show that the tails of the reorientation PDF remain unchanged for rotating systems, while the distribution of the LSC amplitude and correspondingly the cessation frequency are strongly affected by rotation. Our results are in close agreement with experimental observations.

  6. Research on unit commitment with large-scale wind power connected power system

    NASA Astrophysics Data System (ADS)

    Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing

    2017-01-01

    Large-scale integration of wind power generation into the power grid brings severe challenges to power system economic dispatch due to its stochastic volatility. Unit commitment including wind farms is analyzed in two parts: modeling and solution methods. After classifying formulations by objective function and constraints, their structures and characteristics are summarized. Finally, open issues and possible directions for future research and development are discussed, which can adapt to the requirements of the electricity market, energy-saving generation dispatch and the smart grid, and provide a reference for researchers and practitioners in this field.

  7. Disentangling Mechanisms That Mediate the Balance Between Stochastic and Deterministic Processes in Microbial Succession

    DOE PAGES

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...

    2015-03-17

    Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.

  8. Doubly stochastic Poisson process models for precipitation at fine time-scales

    NASA Astrophysics Data System (ADS)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
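    As a concrete illustration of a doubly stochastic Poisson process, the sketch below simulates a Markov-modulated Poisson process: the rainfall arrival intensity depends on a hidden state that switches between a quiet and a stormy regime, and event times are exact by the competing-exponentials construction. The rates and switching parameters are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state Markov-modulated Poisson process (a simple
# doubly stochastic Poisson model).
rates = np.array([0.2, 3.0])     # event rates per minute in each hidden state
switch = np.array([0.01, 0.05])  # state exit rates per minute

def simulate_mmpp(t_end):
    """Exact simulation via competing exponential clocks."""
    t, state, events = 0.0, 0, []
    while True:
        total = rates[state] + switch[state]
        t += rng.exponential(1.0 / total)
        if t >= t_end:
            return np.array(events)
        if rng.random() < rates[state] / total:
            events.append(t)      # a rainfall event (e.g. a bucket tip)
        else:
            state = 1 - state     # the hidden intensity state switches

events = simulate_mmpp(1000.0)
```

    Aggregating the simulated event times into fixed-width bins would mimic tipping-bucket raingauge counts at fine time-scales.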

  9. Aerofoil broadband and tonal noise modelling using stochastic sound sources and incorporated large scale fluctuations

    NASA Astrophysics Data System (ADS)

    Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.

    2017-12-01

    The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on the combination of incorporated vortex-shedding resolved flow available from Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via the stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment for broadband and tonal acoustic noise sources at the source level, thus, accounting for linear source interference as well as possible non-linear source interaction effects. When sound sources are determined, for the sound propagation, Acoustic Perturbation Equations (APE-4) are solved in the time-domain. Results of the method's application for two aerofoil benchmark cases, with both sharp and blunt trailing edges are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and brought into the equation. Encouraging results have been obtained for benchmark test cases using the new technique which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.

  10. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation. PMID:28239346

  11. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers.

    PubMed

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation.

  12. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    PubMed

    Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance: the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographic patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.

  13. Turbulence modeling and combustion simulation in porous media under high Peclet number

    NASA Astrophysics Data System (ADS)

    Moiseev, Andrey A.; Savin, Andrey V.

    2018-05-01

    Turbulence modelling of flow and combustion in porous media is still not completely understood. Conventional turbulence models can be expected to work well at high Peclet numbers when the shape of the porous channels is resolved in detail. Nevertheless, true turbulent mixing takes place only at micro-scales, while dispersion mixing operates at macro-scales almost independently of the true turbulence. The dispersion mechanism is characterized by a definite space scale (the scale of the porous structure) and a definite velocity scale (the filtration velocity). The porous structure is usually stochastic, and this circumstance allows an analogy between space-time-stochastic true turbulence and the dispersion flow, which is stochastic in space only, when porous flow is simulated at the macro-scale level. This analogy additionally allows well-known turbulent combustion models to be applied in simulations of porous combustion at high Peclet numbers.

  14. Cosmological signatures of a UV-conformal standard model.

    PubMed

    Dorsch, Glauber C; Huber, Stephan J; No, Jose Miguel

    2014-09-19

    Quantum scale invariance in the UV has been recently advocated as an attractive way of solving the gauge hierarchy problem arising in the standard model. We explore the cosmological signatures at the electroweak scale when the breaking of scale invariance originates from a hidden sector and is mediated to the standard model by gauge interactions (gauge mediation). These scenarios, while being hard to distinguish from the standard model at LHC, can give rise to a strong electroweak phase transition leading to the generation of a large stochastic gravitational wave signal in possible reach of future space-based detectors such as eLISA and BBO. This relic would be the cosmological imprint of the breaking of scale invariance in nature.

  15. Inertial-Range Reconnection in Magnetohydrodynamic Turbulence and in the Solar Wind.

    PubMed

    Lalescu, Cristian C; Shi, Yi-Kang; Eyink, Gregory L; Drivas, Theodore D; Vishniac, Ethan T; Lazarian, Alexander

    2015-07-10

    In situ spacecraft data on the solar wind show events identified as magnetic reconnection with wide outflows and extended "X lines," 10(3)-10(4) times ion scales. To understand the role of turbulence at these scales, we make a case study of an inertial-range reconnection event in a magnetohydrodynamic simulation. We observe stochastic wandering of field lines in space, breakdown of standard magnetic flux freezing due to Richardson dispersion, and a broadened reconnection zone containing many current sheets. The coarse-grain magnetic geometry is like large-scale reconnection in the solar wind, however, with a hyperbolic flux tube or apparent X line extending over integral length scales.

  16. Nonparametric weighted stochastic block models

    NASA Astrophysics Data System (ADS)

    Peixoto, Tiago P.

    2018-01-01

    We present a Bayesian formulation of weighted stochastic block models that can be used to infer the large-scale modular structure of weighted networks, including their hierarchical organization. Our method is nonparametric, and thus does not require the prior knowledge of the number of groups or other dimensions of the model, which are instead inferred from data. We give a comprehensive treatment of different kinds of edge weights (i.e., continuous or discrete, signed or unsigned, bounded or unbounded), as well as arbitrary weight transformations, and describe an unsupervised model selection approach to choose the best network description. We illustrate the application of our method to a variety of empirical weighted networks, such as global migrations, voting patterns in congress, and neural connections in the human brain.
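    To make the generative side of a weighted SBM concrete, here is a minimal sampler under assumed parameters (two groups, Bernoulli edges, exponential edge weights). The paper's actual contribution, nonparametric Bayesian inference of such models, is implemented in the author's graph-tool library and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-group weighted SBM: Bernoulli edges with block-dependent
# probability, exponential weights with block-dependent mean.
sizes = [40, 60]
p = np.array([[0.30, 0.05],
              [0.05, 0.20]])     # edge probabilities between blocks
w_mean = np.array([[5.0, 1.0],
                   [1.0, 3.0]])  # mean edge weight between blocks

labels = np.repeat([0, 1], sizes)
n = labels.size
A = np.zeros((n, n))             # weighted adjacency matrix
for i in range(n):
    for j in range(i + 1, n):
        bi, bj = labels[i], labels[j]
        if rng.random() < p[bi, bj]:
            A[i, j] = A[j, i] = rng.exponential(w_mean[bi, bj])
```

    Inference then runs this construction in reverse: given only A, recover the number of groups, the labels, and the weight distributions, penalizing model complexity via the description length.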

  17. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  18. Tipping point analysis of ocean acoustic noise

    NASA Astrophysics Data System (ADS)

    Livina, Valerie N.; Brouwer, Albert; Harris, Peter; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2018-02-01

    We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.
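    The reconstruction step described above (representing the data as deterministic plus stochastic components and fitting a one-dimensional stochastic equation) can be illustrated with a Kramers-Moyal-style drift estimate. The sketch below generates surrogate Ornstein-Uhlenbeck data and recovers its drift from binned conditional increments; all parameters are illustrative, not those fitted to the acoustic record.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fit a one-dimensional stochastic (Langevin) model dz = f(z) dt + sigma dW
# to a time series via binned conditional moments. Surrogate data:
# Ornstein-Uhlenbeck with drift -theta * z; parameters are illustrative.
theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000
z = np.empty(n)
z[0] = 0.0
dW = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    z[i + 1] = z[i] - theta * z[i] * dt + sigma * dW[i]

# Drift estimate: drift(z) ~= E[dz | z] / dt on a coarse grid of bins.
dz = np.diff(z)
bins = np.linspace(-1.0, 1.0, 21)
idx = np.digitize(z[:-1], bins)
centers, drift = [], []
for b in range(1, len(bins)):
    mask = idx == b
    if mask.sum() > 100:
        centers.append(0.5 * (bins[b - 1] + bins[b]))
        drift.append(dz[mask].mean() / dt)

# A linear fit to the estimated drift recovers approximately -theta.
slope = np.polyfit(centers, drift, 1)[0]
```

    For a real acoustic record one would first remove the long-term and seasonal trends, as the paper does, before estimating the drift and diffusion of the fluctuations.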

  19. A hierarchical exact accelerated stochastic simulation algorithm

    NASA Astrophysics Data System (ADS)

    Orendorff, David; Mjolsness, Eric

    2012-12-01

    A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.
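    For reference, the baseline that ER-leap and HiER-leap accelerate is Gillespie's exact stochastic simulation algorithm (SSA). A minimal direct-method SSA for a toy birth-death (production/degradation) model is sketched below; the rates and ensemble size are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Gillespie direct-method SSA for a toy gene-expression birth-death model:
#   0 -> X at rate k (production), X -> 0 at rate g * X (degradation).
k, g = 10.0, 0.1

def ssa(x0, t_end):
    t, x = 0.0, x0
    while True:
        a_birth, a_death = k, g * x   # reaction propensities
        a0 = a_birth + a_death
        t += rng.exponential(1.0 / a0)
        if t >= t_end:
            return x
        if rng.random() * a0 < a_birth:
            x += 1                    # production event
        else:
            x -= 1                    # degradation event

# The stationary copy number is Poisson with mean k / g = 100.
mean = np.mean([ssa(0, 100.0) for _ in range(100)])
```

    HiER-leap's gain comes from replacing this one-reaction-at-a-time loop with block-wise bounded leaps; exactness means the accepted trajectories follow the same distribution this algorithm produces.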

  20. Clinical Applications of Stochastic Dynamic Models of the Brain, Part I: A Primer.

    PubMed

    Roberts, James A; Friston, Karl J; Breakspear, Michael

    2017-04-01

    Biological phenomena arise through interactions between an organism's intrinsic dynamics and stochastic forces: random fluctuations due to external inputs, thermal energy, or other exogenous influences. Dynamic processes in the brain derive from neurophysiology and anatomical connectivity; stochastic effects arise through sensory fluctuations, brainstem discharges, and random microscopic states such as thermal noise. The dynamic evolution of systems composed of both dynamic and random effects can be studied with stochastic dynamic models (SDMs). This article, Part I of a two-part series, offers a primer of SDMs and their application to large-scale neural systems in health and disease. The companion article, Part II, reviews the application of SDMs to brain disorders. SDMs generate a distribution of dynamic states, which (we argue) represent ideal candidates for modeling how the brain represents states of the world. When augmented with variational methods for model inversion, SDMs represent a powerful means of inferring neuronal dynamics from functional neuroimaging data in health and disease. Together with deeper theoretical considerations, this work suggests that SDMs will play a unique and influential role in computational psychiatry, unifying empirical observations with models of perception and behavior. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
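    A minimal SDM of the kind the primer describes is a Langevin equation integrated with the Euler-Maruyama scheme. The double-well example below is an assumed toy potential, not a neural model from the article; it shows how deterministic dynamics plus noise yield a distribution over dynamic states.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy bistable SDM: dx = (x - x^3) dt + sigma dW, a double-well potential
# integrated with Euler-Maruyama. Parameters are illustrative assumptions.
sigma, dt, n = 0.4, 0.01, 100_000
x = np.empty(n)
x[0] = 1.0
dW = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] + (x[i] - x[i] ** 3) * dt + sigma * dW[i]

# The trajectory dwells near the attractors at +1 and -1, with occasional
# noise-driven transitions, so the stationary density is bimodal.
frac_right = (x > 0).mean()
```

    With weak noise the system mostly stays near one attractor and occasionally switches wells, which is why an SDM yields a distribution over states rather than a single fixed point.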

  1. Perturbation expansions of stochastic wavefunctions for open quantum systems

    NASA Astrophysics Data System (ADS)

    Ke, Yaling; Zhao, Yi

    2017-11-01

    Based on the stochastic unravelling of the reduced density operator in the Feynman path integral formalism for an open quantum system in touch with harmonic environments, a new non-Markovian stochastic Schrödinger equation (NMSSE) has been established that allows for the systematic perturbation expansion in the system-bath coupling to arbitrary order. This NMSSE can be transformed in a facile manner into the other two NMSSEs, i.e., non-Markovian quantum state diffusion and time-dependent wavepacket diffusion method. Benchmarked by numerically exact results, we have conducted a comparative study of the proposed method in its lowest order approximation, with perturbative quantum master equations in the symmetric spin-boson model and the realistic Fenna-Matthews-Olson complex. It is found that our method outperforms the second-order time-convolutionless quantum master equation in the whole parameter regime and even far better than the fourth-order in the slow bath and high temperature cases. Besides, the method is applicable on an equal footing for any kind of spectral density function and is expected to be a powerful tool to explore the quantum dynamics of large-scale systems, benefiting from the wavefunction framework and the time-local appearance within a single stochastic trajectory.

  2. Species Associations in a Species-Rich Subtropical Forest Were Not Well-Explained by Stochastic Geometry of Biodiversity

    PubMed Central

    Wang, Qinggang; Bao, Dachuan; Guo, Yili; Lu, Junmeng; Lu, Zhijun; Xu, Yaozhan; Zhang, Kuihan; Liu, Haibo; Meng, Hongjie; Jiang, Mingxi; Qiao, Xiujuan; Huang, Handong

    2014-01-01

    The stochastic dilution hypothesis has been proposed to explain species coexistence in species-rich communities. The relative importance of the stochastic dilution effects with respect to other effects such as competition and habitat filtering remained to be tested. In this study, using data from a 25-ha species-rich subtropical forest plot with a strong topographic structure at Badagongshan in central China, we analyzed overall species associations and fine-scale species interactions between 2,550 species pairs. The results showed that: (1) the proportion of segregation in overall species association analysis at 2 m neighborhood in this plot followed the prediction of the stochastic dilution hypothesis that segregations should decrease with species richness but that at 10 m neighborhood was higher than the prediction. (2) The proportion of no association type was lower than the expectation of stochastic dilution hypothesis. (3) Fine-scale species interaction analyses using heterogeneous Poisson processes as null models revealed a high proportion (47%) of significant species effects. However, the assumption of separation of scale of this method was not fully met in this plot with a strong fine-scale topographic structure. We also found that for species within the same families, fine-scale positive species interactions occurred more frequently and negative ones occurred less frequently than expected by chance. These results suggested effects of environmental filtering other than species interaction in this forest. (4) We also found that arbor species showed a much higher proportion of significant fine-scale species interactions (66%) than shrub species (18%). We concluded that the stochastic dilution hypothesis was only partly supported and that environmental filtering left discernible spatial signals in the spatial associations between species in this species-rich subtropical forest with a strong topographic structure. PMID:24824996

  3. Species associations in a species-rich subtropical forest were not well-explained by stochastic geometry of biodiversity.

    PubMed

    Wang, Qinggang; Bao, Dachuan; Guo, Yili; Lu, Junmeng; Lu, Zhijun; Xu, Yaozhan; Zhang, Kuihan; Liu, Haibo; Meng, Hongjie; Jiang, Mingxi; Qiao, Xiujuan; Huang, Handong

    2014-01-01

    The stochastic dilution hypothesis has been proposed to explain species coexistence in species-rich communities. The relative importance of the stochastic dilution effects with respect to other effects such as competition and habitat filtering remained to be tested. In this study, using data from a 25-ha species-rich subtropical forest plot with a strong topographic structure at Badagongshan in central China, we analyzed overall species associations and fine-scale species interactions between 2,550 species pairs. The results showed that: (1) the proportion of segregation in overall species association analysis at 2 m neighborhood in this plot followed the prediction of the stochastic dilution hypothesis that segregations should decrease with species richness but that at 10 m neighborhood was higher than the prediction. (2) The proportion of no association type was lower than the expectation of stochastic dilution hypothesis. (3) Fine-scale species interaction analyses using heterogeneous Poisson processes as null models revealed a high proportion (47%) of significant species effects. However, the assumption of separation of scale of this method was not fully met in this plot with a strong fine-scale topographic structure. We also found that for species within the same families, fine-scale positive species interactions occurred more frequently and negative ones occurred less frequently than expected by chance. These results suggested effects of environmental filtering other than species interaction in this forest. (4) We also found that arbor species showed a much higher proportion of significant fine-scale species interactions (66%) than shrub species (18%). We concluded that the stochastic dilution hypothesis was only partly supported and that environmental filtering left discernible spatial signals in the spatial associations between species in this species-rich subtropical forest with a strong topographic structure.

  4. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool that allows their study by simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown for the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
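
    The extrapolation idea can be sketched numerically. Assuming, as a leading-order ansatz, that an estimator of a large deviation function at finite time t and population size N behaves like psi_inf + a/t + b/N (the functional form and all numbers below are illustrative, not the paper's data), a linear fit recovers the joint infinite-time, infinite-size limit:

```python
import numpy as np

# Synthetic estimator values with assumed leading-order 1/t and 1/N corrections:
#     psi_est(t, N) ~ psi_inf + a/t + b/N
rng = np.random.default_rng(0)
psi_inf, a, b = 0.42, -1.3, 2.7          # hypothetical "true" values

ts = np.array([50, 100, 200, 400, 800], dtype=float)
Ns = np.array([100, 200, 400, 800], dtype=float)
T, Nn = np.meshgrid(ts, Ns)
est = psi_inf + a / T + b / Nn + rng.normal(0, 1e-4, T.shape)

# Least-squares fit of psi_est against [1, 1/t, 1/N]; the intercept is the
# extrapolated infinite-time, infinite-size estimate.
X = np.column_stack([np.ones(T.size), 1.0 / T.ravel(), 1.0 / Nn.ravel()])
coef, *_ = np.linalg.lstsq(X, est.ravel(), rcond=None)
psi_extrapolated = coef[0]
print(round(psi_extrapolated, 3))
```

In practice the scaling exponents themselves must be established from the data; the 1/t and 1/N forms here are only an assumption for the sketch.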

  5. Inflationary tensor perturbations after BICEP2.

    PubMed

    Caligiuri, Jerod; Kosowsky, Arthur

    2014-05-16

    The measurement of B-mode polarization of the cosmic microwave background at large angular scales by the BICEP2 experiment suggests a stochastic gravitational wave background from early-Universe inflation with a surprisingly large amplitude. The power spectrum of these tensor perturbations can be probed both with further measurements of the microwave background polarization at smaller scales and directly via interferometry in space. We show that sufficiently sensitive high-resolution B-mode measurements will ultimately have the ability to test the inflationary consistency relation between the amplitude and spectrum of the tensor perturbations, confirming their inflationary origin. Additionally, a precise B-mode measurement of the tensor spectrum will predict the tensor amplitude on solar system scales to 20% accuracy for an exact power-law tensor spectrum, so a direct detection will then measure the running of the tensor spectral index to high precision.

  6. Non-equilibrium phase transition in mesoscopic biochemical systems: from stochastic to nonlinear dynamics and beyond

    PubMed Central

    Ge, Hao; Qian, Hong

    2011-01-01

    A theory for a non-equilibrium phase transition in a driven biochemical network is presented. The theory is based on the chemical master equation (CME) formulation of mesoscopic biochemical reactions and the mathematical method of large deviations. The large deviations theory provides an analytical tool connecting the macroscopic multi-stability of an open chemical system with the multi-scale dynamics of its mesoscopic counterpart. It shows a corresponding non-equilibrium phase transition among multiple stochastic attractors. As an example, in the canonical phosphorylation–dephosphorylation system with feedback that exhibits bistability, we show that the non-equilibrium steady-state (NESS) phase transition has all the characteristics of a classic equilibrium phase transition: Maxwell construction, a discontinuous first derivative of the ‘free energy function’, a Lee–Yang zero of a generating function, and a critical point that matches the cusp in nonlinear bifurcation theory. For the biochemical system, the mathematical analysis suggests three distinct timescales and corresponding levels of description: (i) molecular signalling, (ii) biochemical network nonlinear dynamics, and (iii) cellular evolution. For finite mesoscopic systems such as a cell, the motions associated with (i) and (iii) are stochastic while those with (ii) are deterministic. Both (ii) and (iii) are emergent properties of a dynamic biochemical network. PMID:20466813

  7. A stochastic parameterization for deep convection using cellular automata

    NASA Astrophysics Data System (ADS)

    Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.

    2012-12-01

    Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept, which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid-box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, the community has recognized the importance of introducing stochastic elements into the parameterizations (for instance: Plant and Craig, 2008; Khouider et al., 2010; Frenkel et al., 2011; Bengtsson et al., 2011; the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities of interest for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, which is important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid boxes, as well as temporal memory. Thus the CA scheme used in this study contains three components of interest for the representation of cumulus convection that are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high-resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall lines. 
    Probabilistic evaluation demonstrates an enhanced spread in large-scale variables in regions where convective activity is large. A two-month extended evaluation of the deterministic behaviour of the scheme indicates a neutral impact on forecast skill. References: Bengtsson, L., H. Körnich, E. Källén, and G. Svensson, 2011: Large-scale dynamical response to sub-grid scale organization provided by cellular automata. Journal of the Atmospheric Sciences, 68, 3132-3144. Frenkel, Y., A. Majda, and B. Khouider, 2011: Using the stochastic multicloud model to improve tropical convective parameterization: A paradigm example. Journal of the Atmospheric Sciences, doi: 10.1175/JAS-D-11-0148.1. Huang, X.-Y., 1988: The organization of moist convection by internal gravity waves. Tellus A, 42, 270-285. Khouider, B., J. Biello, and A. Majda, 2010: A Stochastic Multicloud Model for Tropical Convection. Comm. Math. Sci., 8, 187-216. Palmer, T., 2011: Towards the Probabilistic Earth-System Simulator: A Vision for the Future of Climate and Weather Prediction. Quarterly Journal of the Royal Meteorological Society, 138, 841-861. Plant, R. and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87-105.
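
    The CA ingredients named above (lateral communication, memory, stochasticity) can be illustrated with a toy probabilistic cellular automaton. This is not the ALARO scheme; the grid, rules, and parameters below are invented purely for illustration: cells seeded where large-scale "forcing" is on persist for a few steps (memory) and spread stochastically to neighbors (communication).

```python
import numpy as np

rng = np.random.default_rng(1)

def step(grid, seed_mask, p_spread=0.3, lifetime=4):
    """One toy CA update: age out active cells, seed forced cells, spread."""
    new = np.maximum(grid - 1, 0)        # memory: active cells decay over steps
    new[seed_mask] = lifetime            # seeding by hypothetical forcing
    active = grid > 0
    # count active neighbors (von Neumann neighborhood, periodic boundaries)
    nbrs = sum(np.roll(active, s, axis=ax).astype(int)
               for ax in (0, 1) for s in (-1, 1))
    # stochastic lateral spread to cells adjacent to active ones
    spread = (nbrs > 0) & (rng.random(grid.shape) < p_spread)
    new[spread] = np.maximum(new[spread], lifetime)
    return new

grid = np.zeros((32, 32), dtype=int)
forcing = np.zeros((32, 32), dtype=bool)
forcing[14:18, 14:18] = True             # hypothetical 4x4 forced region
for _ in range(10):
    grid = step(grid, forcing)
print(int((grid > 0).sum()))             # activity has spread beyond the seed
```

Coupling such a CA two ways to the model state (seeding from, and feeding back to, the resolved fields) is the step the abstract describes and is not attempted here.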

  8. Addressing model uncertainty through stochastic parameter perturbations within the High Resolution Rapid Refresh (HRRR) ensemble

    NASA Astrophysics Data System (ADS)

    Wolff, J.; Jankov, I.; Beck, J.; Carson, L.; Frimel, J.; Harrold, M.; Jiang, H.

    2016-12-01

    It is well known that global and regional numerical weather prediction ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system for addressing the deficiencies in ensemble modeling is the use of stochastic physics to represent model-related uncertainty. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), Stochastic Perturbation of Physics Tendencies (SPPT), or some combination of all three. The focus of this study is to assess the model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) when using stochastic approaches. For this purpose, the test utilized a single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model, with ensemble members produced by employing stochastic methods. Parameter perturbations were employed in the Rapid Update Cycle (RUC) land surface model and Mellor-Yamada-Nakanishi-Niino (MYNN) planetary boundary layer scheme. Results will be presented in terms of bias, error, spread, skill, accuracy, reliability, and sharpness using the Model Evaluation Tools (MET) verification package. Due to the high level of complexity of running a frequently updating (hourly), high spatial resolution (3 km), large domain (CONUS) ensemble system, extensive high performance computing (HPC) resources were needed to meet this objective. 
Supercomputing resources were provided through the National Center for Atmospheric Research (NCAR) Strategic Capability (NSC) project support, allowing for a more extensive set of tests over multiple seasons, consequently leading to more robust results. Through the use of these stochastic innovations and powerful supercomputing at NCAR, further insights and advancements in ensemble forecasting at convection-permitting scales will be possible.

  9. Effects of intrinsic stochasticity on delayed reaction-diffusion patterning systems.

    PubMed

    Woolley, Thomas E; Baker, Ruth E; Gaffney, Eamonn A; Maini, Philip K; Seirin-Lee, Sungrim

    2012-05-01

    Cellular gene expression is a complex process involving many steps, including the transcription of DNA and translation of mRNA; hence the synthesis of proteins requires a considerable amount of time, from ten minutes to several hours. Since diffusion-driven instability has been observed to be sensitive to perturbations in kinetic delays, the application of Turing patterning mechanisms to the problem of producing spatially heterogeneous differential gene expression has been questioned. In deterministic systems a small delay in the reactions can cause a large increase in the time it takes a system to pattern. Recently, it has been observed that in undelayed systems intrinsic stochasticity can cause pattern initiation to occur earlier than in the analogous deterministic simulations. Here we are interested in adding both stochasticity and delays to Turing systems in order to assess whether stochasticity can reduce the patterning time scale in delayed Turing systems. As analytical insights to this problem are difficult to attain and often limited in their use, we focus on stochastically simulating delayed systems. We consider four different Turing systems and two different forms of delay. Our results are mixed and lead to the conclusion that, although the sensitivity to delays in the Turing mechanism is not completely removed by the addition of intrinsic noise, the effects of the delays are clearly ameliorated in certain specific cases.

  10. Stochastic Ratcheting on a Funneled Energy Landscape Is Necessary for Highly Efficient Contractility of Actomyosin Force Dipoles

    NASA Astrophysics Data System (ADS)

    Komianos, James E.; Papoian, Garegin A.

    2018-04-01

    Current understanding of how contractility emerges in disordered actomyosin networks of nonmuscle cells is still largely based on the intuition derived from earlier works on muscle contractility. In addition, in disordered networks, passive cross-linkers have been hypothesized to percolate force chains in the network, hence, establishing large-scale connectivity between local contractile clusters. This view, however, largely overlooks the free energy of cross-linker binding at the microscale, which, even in the absence of active fluctuations, provides a thermodynamic drive towards highly overlapping filamentous states. In this work, we use stochastic simulations and mean-field theory to shed light on the dynamics of a single actomyosin force dipole—a pair of antiparallel actin filaments interacting with active myosin II motors and passive cross-linkers. We first show that while passive cross-linking without motor activity can produce significant contraction between a pair of actin filaments, driven by thermodynamic favorability of cross-linker binding, a sharp onset of kinetic arrest exists at large cross-link binding energies, greatly diminishing the effectiveness of this contractility mechanism. Then, when considering an active force dipole containing nonmuscle myosin II, we find that cross-linkers can also serve as a structural ratchet when the motor dissociates stochastically from the actin filaments, resulting in significant force amplification when both molecules are present. Our results provide predictions of how actomyosin force dipoles behave at the molecular level with respect to filament boundary conditions, passive cross-linking, and motor activity, which can explicitly be tested using an optical trapping experiment.

  11. Stochasticity in materials structure, properties, and processing—A review

    NASA Astrophysics Data System (ADS)

    Hull, Robert; Keblinski, Pawel; Lewis, Dan; Maniatty, Antoinette; Meunier, Vincent; Oberai, Assad A.; Picu, Catalin R.; Samuel, Johnson; Shephard, Mark S.; Tomozawa, Minoru; Vashishth, Deepak; Zhang, Shengbai

    2018-03-01

    We review the concept of stochasticity—i.e., unpredictable or uncontrolled fluctuations in structure, chemistry, or kinetic processes—in materials. We first define six broad classes of stochasticity: equilibrium (thermodynamic) fluctuations; structural/compositional fluctuations; kinetic fluctuations; frustration and degeneracy; imprecision in measurements; and stochasticity in modeling and simulation. In this review, we focus on the first four classes that are inherent to materials phenomena. We next develop a mathematical framework for describing materials stochasticity and then show how it can be broadly applied to these four materials-related stochastic classes. In subsequent sections, we describe structural and compositional fluctuations at small length scales that modify material properties and behavior at larger length scales; systems with engineered fluctuations, concentrating primarily on composite materials; systems in which stochasticity is developed through nucleation and kinetic phenomena; and configurations in which constraints in a given system prevent it from attaining its ground state and cause it to attain several, equally likely (degenerate) states. We next describe how stochasticity in these processes results in variations in physical properties and how these variations are then accentuated by—or amplify—stochasticity in processing and manufacturing procedures. In summary, the origins of materials stochasticity, the degree to which it can be predicted and/or controlled, and the possibility of using stochastic descriptions of materials structure, properties, and processing as a new degree of freedom in materials design are described.

  12. Fluctuations, ghosts, and the cosmological constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, T.; Holdom, B.

    2004-12-15

    For a large region of parameter space involving the cosmological constant and mass parameters, we discuss fluctuating spacetime solutions that are effectively Minkowskian on large time and distance scales. Rapid, small amplitude oscillations in the scale factor have a frequency determined by the size of a negative cosmological constant. A field with modes of negative energy is required. If it is gravity that induces a coupling between the ghostlike and normal fields, we find that this results in stochastic rather than unstable behavior. The negative energy modes may also permit the existence of Lorentz invariant fluctuating solutions of finite energy density. Finally we consider higher derivative gravity theories and find oscillating metric solutions in these theories without the addition of other fields.

  13. Solving large scale traveling salesman problems by chaotic neurodynamics.

    PubMed

    Hasegawa, Mikio; Ikeguchi, Tohru; Aihara, Kazuyuki

    2002-03-01

    We propose a novel approach for solving large scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires neurons of the order of n² for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method for our chaotic neural network is presented for easy application to various problems. Finally, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than conventional stochastic searches and tabu searches.
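
    The tabu principle the abstract builds on (recently used moves become temporarily "refractory") can be shown with a plain tabu 2-opt search on a tiny random tour. This is only the classical tabu-search baseline, not the chaotic neural-network method itself; instance size, tenure, and iteration count are arbitrary choices for the sketch.

```python
import itertools
import math
import random

random.seed(3)
cities = [(random.random(), random.random()) for _ in range(12)]

def tour_len(tour):
    """Total length of a closed tour over the city list."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = list(range(len(cities)))
best = list(tour)
tabu = {}                      # move -> iteration until which it is forbidden
for it in range(200):
    candidates = []
    for i, j in itertools.combinations(range(1, len(tour)), 2):
        if tabu.get((i, j), -1) >= it:
            continue           # refractory: this reversal was used recently
        cand = tour[:i] + tour[i:j][::-1] + tour[j:]   # 2-opt segment reversal
        candidates.append((tour_len(cand), (i, j), cand))
    length, move, tour = min(candidates)   # best non-tabu move, even if worse
    tabu[move] = it + 10                   # forbid this move for 10 iterations
    if length < tour_len(best):
        best = list(tour)
print(round(tour_len(best), 3))
```

Accepting the best non-tabu move even when it worsens the tour is what lets the search escape local minima; the chaotic schemes in the paper replace this bookkeeping with neuronal refractoriness.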

  14. Synthetic Sediments and Stochastic Groundwater Hydrology

    NASA Astrophysics Data System (ADS)

    Wilson, J. L.

    2002-12-01

    For over twenty years the groundwater community has pursued the somewhat elusive goal of describing the effects of aquifer heterogeneity on subsurface flow and chemical transport. While small-perturbation stochastic moment methods have significantly advanced theoretical understanding, why is it that stochastic applications instead use simulations of flow and transport through multiple realizations of synthetic geology? Allan Gutjahr was a principal proponent of the Fast Fourier Transform method for the synthetic generation of aquifer properties and recently explored new, more geologically sound, synthetic methods based on multi-scale Markov random fields. Focusing on sedimentary aquifers, how has the state of the art of synthetic generation changed, and what new developments can be expected, for example, to deal with issues like conceptual model uncertainty, the differences between measurement and modeling scales, and subgrid-scale variability? What will it take to get stochastic methods, whether based on moments, multiple realizations, or some other approach, into widespread application?
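
    The FFT method mentioned above generates a stationary Gaussian random field (e.g., log-conductivity) by shaping white noise in Fourier space with a target power spectrum. The sketch below uses an arbitrary smooth spectrum and grid size chosen for illustration; real applications would derive the spectrum from a fitted covariance model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64                                  # grid size (illustrative)
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k2 = kx**2 + ky**2
corr_len = 8.0                          # correlation length in grid cells
# Illustrative Gaussian-shaped power spectrum; a real study would use the
# spectrum implied by the chosen covariance (e.g., exponential) model.
spectrum = np.exp(-k2 * (2 * np.pi * corr_len) ** 2 / 2)

noise = np.fft.fft2(rng.normal(size=(n, n)))        # white noise in k-space
field = np.real(np.fft.ifft2(noise * np.sqrt(spectrum)))
field = (field - field.mean()) / field.std()        # normalize to N(0, 1)-like
print(field.shape)
```

Each new realization (a fresh `noise` draw) gives an equally likely synthetic aquifer, which is exactly what multiple-realization Monte Carlo studies consume.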

  15. The symmetric quartic map for trajectories of magnetic field lines in elongated divertor tokamak plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Morgin; Wadi, Hasina; Ali, Halima

    The coordinates of the area-preserving map equations for integration of magnetic field line trajectories in divertor tokamaks can be any coordinates for which a transformation to (ψt,θ,φ) coordinates exists [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140 (2007)]. ψt is toroidal magnetic flux, θ is poloidal angle, and φ is toroidal angle. This freedom is exploited to construct the symmetric quartic map such that the only parameter that determines magnetic geometry is the elongation of the separatrix surface. The poloidal flux inside the separatrix, the safety factor as a function of normalized minor radius, and the magnetic perturbation from the symplectic discretization are all held constant, and only the elongation κ is varied. The width of the stochastic layer; the area and fractal dimension of the magnetic footprint; the average radial diffusion coefficient of magnetic field lines from the stochastic layer; and how these quantities scale with κ are calculated. The symmetric quartic map gives the correct scalings, which are consistent with the scalings of the coordinates with κ. The effects of an m = 1, n = ±1 internal perturbation with the amplitude expected to occur in tokamaks are calculated by adding a term [H. Ali, A. Punjabi, A. H. Boozer, and T. Evans, Phys. Plasmas 11, 1908 (2004)] to the symmetric quartic map. In this case, the width of the stochastic layer scales as the 0.35 power of κ. The area of the footprint is roughly constant. The average radial diffusion coefficient of field lines near the X-point scales linearly with κ. The low mn perturbation changes the quasisymmetric structure of the footprint and reorganizes it into a single, large-scale, asymmetric structure. The symmetric quartic map is combined with the dipole map [A. Punjabi, H. Ali, and A. H. Boozer, Phys. Plasmas 10, 3992 (2003)] to calculate the effects of magnetic perturbation from a current-carrying coil. The coil position and coil current are held constant. The dipole perturbation enhances the magnetic shear. The width of the stochastic layer scales exponentially with κ. The area of the footprint decreases as κ increases. The radial diffusion coefficient of field lines scales exponentially with κ. The dipole perturbation changes the topology of the footprint. It breaks up the toroidally spiraling footprint into a number of separate asymmetric toroidal strips. Practical applications of the symmetric quartic map to elongated divertor tokamak plasmas are suggested.

  16. The symmetric quartic map for trajectories of magnetic field lines in elongated divertor tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Jones, Morgin; Wadi, Hasina; Ali, Halima; Punjabi, Alkesh

    2009-04-01

    The coordinates of the area-preserving map equations for integration of magnetic field line trajectories in divertor tokamaks can be any coordinates for which a transformation to (ψt,θ,φ) coordinates exists [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140 (2007)]. ψt is toroidal magnetic flux, θ is poloidal angle, and φ is toroidal angle. This freedom is exploited to construct the symmetric quartic map such that the only parameter that determines magnetic geometry is the elongation of the separatrix surface. The poloidal flux inside the separatrix, the safety factor as a function of normalized minor radius, and the magnetic perturbation from the symplectic discretization are all held constant, and only the elongation κ is varied. The width of the stochastic layer; the area and fractal dimension of the magnetic footprint; the average radial diffusion coefficient of magnetic field lines from the stochastic layer; and how these quantities scale with κ are calculated. The symmetric quartic map gives the correct scalings, which are consistent with the scalings of the coordinates with κ. The effects of an m = 1, n = ±1 internal perturbation with the amplitude expected to occur in tokamaks are calculated by adding a term [H. Ali, A. Punjabi, A. H. Boozer, and T. Evans, Phys. Plasmas 11, 1908 (2004)] to the symmetric quartic map. In this case, the width of the stochastic layer scales as the 0.35 power of κ. The area of the footprint is roughly constant. The average radial diffusion coefficient of field lines near the X-point scales linearly with κ. The low mn perturbation changes the quasisymmetric structure of the footprint and reorganizes it into a single, large-scale, asymmetric structure. The symmetric quartic map is combined with the dipole map [A. Punjabi, H. Ali, and A. H. Boozer, Phys. Plasmas 10, 3992 (2003)] to calculate the effects of magnetic perturbation from a current-carrying coil. The coil position and coil current are held constant. 
    The dipole perturbation enhances the magnetic shear. The width of the stochastic layer scales exponentially with κ. The area of the footprint decreases as κ increases. The radial diffusion coefficient of field lines scales exponentially with κ. The dipole perturbation changes the topology of the footprint. It breaks up the toroidally spiraling footprint into a number of separate asymmetric toroidal strips. Practical applications of the symmetric quartic map to elongated divertor tokamak plasmas are suggested.

  17. Minimalist model of ice microphysics in mixed-phase stratiform clouds

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Ovchinnikov, Mikhail; Shaw, Raymond A.

    2013-07-01

    The question of whether persistent ice crystal precipitation from supercooled layer clouds can be explained by time-dependent, stochastic ice nucleation is explored using an approximate, analytical model and a large-eddy simulation (LES) cloud model. The updraft velocity in the cloud defines an accumulation zone, where small ice particles cannot fall out until they are large enough, which will increase the residence time of ice particles in the cloud. Ice particles reach a quasi-steady state between growth by vapor deposition and fall speed at cloud base. The analytical model predicts that ice water content (wi) has a 2.5 power-law relationship with ice number concentration (ni). wi and ni from a LES cloud model with stochastic ice nucleation confirm the 2.5 power-law relationship, and initial indications of the scaling law are observed in data from the Indirect and Semi-Direct Aerosol Campaign. The prefactor of the power law is proportional to the ice nucleation rate and therefore provides a quantitative link to observations of ice microphysical properties.
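
    The predicted relation w_i = C · n_i^2.5 appears as a straight line of slope 2.5 in log-log space, which is how such a scaling is typically checked against model output. A quick synthetic check (the prefactor C and the units are arbitrary here; the abstract states only that C is proportional to the nucleation rate):

```python
import numpy as np

ni = np.logspace(-1, 2, 30)        # ice number concentration (arb. units)
C = 0.05                           # arbitrary prefactor for the sketch
wi = C * ni ** 2.5                 # ice water content per the 2.5 power law

# Fit a line in log-log space; the slope recovers the power-law exponent.
slope, intercept = np.polyfit(np.log(ni), np.log(wi), 1)
print(round(slope, 2))             # -> 2.5
```

With real LES or field data the fitted slope would scatter around 2.5, and the fitted intercept (log C) is what carries the nucleation-rate information.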

  18. Behavior of MHD Instabilities of the Large Helical Device near the Effective Plasma Boundary in the Magnetic Stochastic Region

    NASA Astrophysics Data System (ADS)

    Ohdachi, S.; Suzuki, Y.; Sakakibara, S.; Watanabe, K. Y.; Ida, K.; Goto, M.; Du, X. D.; Narushima, Y.; Takemura, Y.; Yamada, H.

    In the high-beta experiments of the Large Helical Device (LHD), the plasma tends to expand beyond the last closed flux surface (LCFS) determined by the vacuum magnetic field. The pressure/temperature gradient in the external region is finite. The scale length of the pressure profile does not change much even when the mean free path of electrons exceeds the connection length of the magnetic field line to the wall. MHD instabilities appear with an amplitude of 10⁻⁴ of the toroidal magnetic field. From the mode numbers of the activities (m/n = 2/3, 1/2, 2/4), the location of the corresponding rational surface is outside the vacuum LCFS. The location of the mode is consistent with fluctuation measurements, e.g., from soft X-ray detector arrays. The MHD mode localized in the magnetic stochastic region is affected by the magnetic field structure, as estimated from the connection length to the wall using a 3D equilibrium calculation.

  19. Stochastic Spiking Neural Networks Enabled by Magnetic Tunnel Junctions: From Nontelegraphic to Telegraphic Switching Regimes

    NASA Astrophysics Data System (ADS)

    Liyanagedera, Chamika M.; Sengupta, Abhronil; Jaiswal, Akhilesh; Roy, Kaushik

    2017-12-01

    Stochastic spiking neural networks based on nanoelectronic spin devices can be a possible pathway to achieving "brainlike" compact and energy-efficient cognitive intelligence. Such computational models attempt to exploit the intrinsic device stochasticity of nanoelectronic synaptic or neural components to perform learning or inference. However, there has been limited analysis of the scaling of stochastic spin devices and its impact on the operation of such stochastic networks at the system level. This work attempts to explore the design space and analyze the performance of nanomagnet-based stochastic neuromorphic computing architectures for magnets with different barrier heights. We illustrate how the underlying network architecture must be modified to account for the random telegraphic switching behavior displayed by magnets with low barrier heights as they are scaled into the superparamagnetic regime. We perform a device-to-system-level analysis of a deep neural-network architecture for a digit-recognition problem on the MNIST data set.

  20. Stochastic downscaling of numerically simulated spatial rain and cloud fields using a transient multifractal approach

    NASA Astrophysics Data System (ADS)

    Nogueira, M.; Barros, A. P.; Miranda, P. M.

    2012-04-01

    Atmospheric fields can be extremely variable over wide ranges of spatial scales, with a scale ratio of 10⁹ to 10¹⁰ between the largest (planetary) and smallest (viscous dissipation) scales. Furthermore, atmospheric fields with strong variability over wide ranges of scale most likely should not be artificially split into large and small scales, as in reality there is no scale separation between resolved and unresolved motions. Usually the effects of the unresolved scales are modeled by a deterministic bulk formula representing an ensemble of incoherent subgrid processes acting on the resolved flow. This is a pragmatic approach to the problem, not a complete solution to it. These models are expected to underrepresent the small-scale spatial variability of both dynamical and scalar fields due to implicit and explicit numerical diffusion as well as physically based subgrid-scale turbulent mixing, resulting in smoother and less intermittent fields than observed. Thus, a fundamental change in the way we formulate our models is required. Stochastic approaches, equipped with a possible realization of subgrid processes and potentially coupled to the resolved scales over the range of significant scale interactions, provide one alternative to address the problem. Stochastic multifractal models, based on the cascade phenomenology of the atmosphere and its governing equations in particular, are the focus of this research. Previous results have shown that rain and cloud fields resulting from both idealized and realistic numerical simulations display multifractal behavior in the resolved scales. This result is observed even in the absence of scaling in the initial conditions or terrain forcing, suggesting that multiscaling is a general property of the nonlinear solutions of the Navier-Stokes equations governing atmospheric dynamics. 
    Our results also show that the corresponding multiscaling parameters for rain and cloud fields exhibit complex nonlinear behavior depending on large-scale parameters such as terrain forcing and mean atmospheric conditions at each location, particularly mean wind speed and moist stability. A particularly robust behavior is the transition of the multiscaling parameters between stable and unstable cases, which has a clear physical correspondence to the transition from a stratiform to an organized (banded) convective regime. Thus multifractal diagnostics of moist processes are fundamentally transient and should provide a physically robust basis for the downscaling and sub-grid scale parameterization of moist processes. Here, we investigate the possibility of using a simplified, computationally efficient multifractal downscaling methodology based on turbulent cascades to produce statistically consistent fields at higher resolutions than those resolved by the model. Specifically, we are interested in producing rainfall and cloud fields at the spatial resolutions necessary for effective forecasting of flash floods and earth flows. The results are examined by comparing downscaled fields against observations, and tendency error budgets are used to diagnose the evolution of transient errors in the numerical model prediction that can be attributed to aliasing.
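
    The cascade idea behind such downscaling can be sketched in a few lines: a coarse field is disaggregated by repeatedly splitting each cell 2x2 and multiplying by independent random weights of unit mean, which preserves the coarse-scale mean on average while injecting small-scale intermittency. This is a generic multiplicative cascade, not the authors' calibrated methodology; the lognormal generator, its width, and the number of cascade levels are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

def cascade_downscale(field, levels=3, sigma=0.5):
    """Refine a nonnegative field by a multiplicative random cascade."""
    for _ in range(levels):
        field = np.kron(field, np.ones((2, 2)))   # split each cell 2x2
        # lognormal weights with E[w] = 1 (mean -sigma^2/2 in log space)
        w = rng.lognormal(-sigma**2 / 2, sigma, field.shape)
        field = field * w
    return field

coarse = np.full((4, 4), 1.0)      # uniform coarse rain field (arb. units)
fine = cascade_downscale(coarse)   # 4x4 -> 32x32 after three 2x refinements
print(fine.shape)
```

Calibrating sigma (more generally, the full weight generator) against the observed multiscaling parameters at each location is the step that makes such a cascade a usable downscaling tool.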

  1. Genetic structure among coastal tailed frog populations of Mount St. Helens is moderated by post-disturbance management

    Treesearch

    Stephen F. Spear; Charles M. Crisafulli; Andrew Storfer

    2012-01-01

    Catastrophic disturbances often provide “natural laboratories” that allow for greater understanding of ecological processes and response of natural populations. The 1980 eruption of the Mount St. Helens volcano in Washington, USA, provided a unique opportunity to test biotic effects of a large-scale stochastic disturbance, as well as the influence of post-disturbance...

  2. Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj

    2012-07-03

    We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these.
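
    The sequential baseline that these parallel variants generalize can be illustrated with cyclic coordinate descent for an ℓ1-regularized least-squares (lasso) problem. This is a minimal sketch of the generic technique only; the function names and closed-form soft-thresholding update below are illustrative assumptions, not code from the framework described in the record.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iters=200):
    """Cyclic coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1.

    Each pass updates one coordinate at a time via the exact 1-D minimizer,
    keeping the residual r = y - Xw incrementally up to date.
    """
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)      # per-coordinate curvature ||X_j||^2
    r = y - X @ w
    for _ in range(n_iters):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            r += X[:, j] * w[j]        # remove coordinate j's contribution
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * w[j]        # add the updated contribution back
    return w
```

    Stochastic CD differs only in drawing `j` at random each step; the parallel variants in the record distribute the inner loop across threads.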

  3. Anomalous Ion Heating, Intrinsic and Induced Rotation in the Pegasus Toroidal Experiment

    NASA Astrophysics Data System (ADS)

    Burke, M. G.; Barr, J. L.; Bongard, M. W.; Fonck, R. J.; Hinson, E. T.; Perry, J. M.; Redd, A. J.; Thome, K. E.

    2014-10-01

    Pegasus plasmas are initiated through either standard, MHD stable, inductive current drive or non-solenoidal local helicity injection (LHI) current drive with strong reconnection activity, providing a rich environment to study ion dynamics. During LHI discharges, a large amount of anomalous impurity ion heating has been observed, with Ti ~ 800 eV but Te < 100 eV. The ion heating is hypothesized to be a result of large-scale magnetic reconnection activity, as the amount of heating scales with increasing fluctuation amplitude of the dominant, edge localized, n = 1 MHD mode. Chordal Ti spatial profiles indicate centrally peaked temperatures, suggesting a region of good confinement near the plasma core surrounded by a stochastic region. LHI plasmas are observed to rotate, perhaps due to an inward radial current generated by the stochastization of the plasma edge by the injected current streams. H-mode plasmas are initiated using a combination of high-field side fueling and Ohmic current drive. This regime shows a significant increase in rotation shear compared to L-mode plasmas. In addition, these plasmas have been observed to rotate in the counter-Ip direction without any external momentum sources. The intrinsic rotation direction is consistent with predictions from the saturated Ohmic confinement regime. Work supported by US DOE Grant DE-FG02-96ER54375.

  4. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    NASA Astrophysics Data System (ADS)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reactions rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration and especially for computing small by value sensitivity indices. It is a crucial element since even small indices may be important to be estimated in order to achieve a more accurate distribution of inputs influence and a more reliable interpretation of the mathematical model results.
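
    The kind of computation involved in estimating first-order Sobol indices can be sketched with a pick-freeze Monte Carlo scheme. This is a hedged illustration, not the Unified Danish Eulerian Model setup: plain pseudo-random sampling stands in here for the Sobol and Fibonacci-lattice sequences studied in the record, and the additive test function is an assumption chosen because its indices are known analytically.

```python
import numpy as np

def sobol_first_order(f, d, n=50_000, rng=None):
    """Monte Carlo estimate of first-order Sobol indices S_i of f on [0,1]^d.

    Pick-freeze estimator: draw two independent blocks A and B, plus d hybrid
    blocks AB_i in which column i of A is replaced by column i of B; then
    S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f).
    """
    rng = rng if rng is not None else np.random.default_rng()
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]          # "freeze" input i at the B value
        S[i] = np.mean(fB * (f(AB) - fA)) / var
    return S
```

    For an additive function f(x) = Σ a_i x_i with independent uniform inputs, the analytic indices are S_i = a_i² / Σ a_j², which makes the estimator easy to check.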

  5. Integrating Sediment Connectivity into Water Resources Management Through a Graph Theoretic, Stochastic Modeling Framework.

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Castelletti, A.; Bizzi, S.

    2014-12-01

    Understanding sediment transport processes at the river basin scale, their temporal spectra and spatial patterns is key to identifying and minimizing morphologic risks associated with channel adjustment processes. This work contributes a stochastic framework for modeling bed-load connectivity based on recent advances in the field (e.g., Bizzi & Lerner, 2013; Czuba & Foufoula-Georgiou, 2014). It presents river managers with novel indicators of reach-scale vulnerability to channel adjustment in large river networks with sparse hydrologic and sediment observations. The framework comprises four steps. First, based on a distributed hydrological model and remotely sensed information, the framework identifies a representative grain size class for each reach. Second, sediment residence time distributions are calculated for each reach in a Monte Carlo approach applying standard sediment transport equations driven by local hydraulic conditions. Third, a network analysis defines the up- and downstream connectivity for various travel times, resulting in characteristic up/downstream connectivity signatures for each reach; channel vulnerability indicators quantify the imbalance between up- and downstream connectivity for each travel time domain, representing process-dependent latency of morphologic response. Last, based on the stochastic core of the model, a sensitivity analysis identifies drivers of change and major sources of uncertainty in order to target key detrimental processes and to guide effective gathering of additional data. The application, limitations, and integration into a decision analytic framework are demonstrated for a major part of the Red River Basin in Northern Vietnam (179,000 km2). Here, a plethora of anthropic alterations ranging from large reservoir construction to land-use changes results in major downstream deterioration and calls for deriving concerted sediment management strategies to mitigate current and limit future morphologic alterations.

  6. Large scale Brownian dynamics of confined suspensions of rigid particles

    NASA Astrophysics Data System (ADS)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. 
Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose height above the wall is set by a combination of thermal noise and active flows. We find the existence of two populations of active particles, slower ones closer to the bottom and faster ones above them, and demonstrate that our method provides quantitative accuracy even with relatively coarse resolutions of the particle geometry.
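
    The random finite difference (RFD) idea for the stochastic drift term can be illustrated in one dimension, where the divergence term kT dM/dx is estimated without analytic derivatives or resistance solves. This is a toy sketch under stated assumptions: the scalar position-dependent mobility and function names are illustrative, not the rigid multiblob implementation of the record.

```python
import numpy as np

def bd_step_rfd(x, mobility, force, kT, dt, delta, rng):
    """One Euler-Maruyama Brownian-dynamics step for a 1-D particle with
    configuration-dependent mobility M(x):

        dx = [M(x) F(x) + kT M'(x)] dt + sqrt(2 kT M(x)) dW.

    The stochastic drift kT*M'(x) is estimated by a random finite difference:
    (kT/delta) * (M(x + delta*w/2) - M(x - delta*w/2)) * w with w ~ N(0,1)
    has expectation kT*M'(x) as delta -> 0, avoiding analytic derivatives.
    """
    M = mobility(x)
    w = rng.normal()
    drift = (kT / delta) * (mobility(x + 0.5 * delta * w)
                            - mobility(x - 0.5 * delta * w)) * w
    dW = rng.normal() * np.sqrt(dt)
    return x + (M * force(x) + drift) * dt + np.sqrt(2.0 * kT * M) * dW
```

    Averaged over the random direction w, the RFD term reproduces the required divergence of the mobility, which is the same mechanism the record's traction and slip schemes exploit in many dimensions.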

  7. The importance of stochasticity and internal variability in geomorphic erosion systems

    NASA Astrophysics Data System (ADS)

    Kim, J.; Ivanov, V. Y.; Fatichi, S.

    2016-12-01

    Understanding soil erosion is essential for a range of studies, but the predictive skill of prognostic models and the reliability of national-scale assessments have been repeatedly questioned. Indeed, data from multiple environments indicate that fluvial soil loss is highly non-unique and its frequency distributions exhibit heavy tails. We reveal that these features are attributable to the high sensitivity of erosion response to micro-scale variations of soil erodibility - `geomorphic internal variability'. The latter acts as an intermediary between forcing and erosion dynamics, augmenting the conventionally emphasized effects of `external variability' (climate, topography, land use, management). Furthermore, we observe a reduction of erosion non-uniqueness at larger temporal scales that correlates with environmental stochasticity. Our analysis shows that this effect can be attributed to the larger likelihood of alternating characteristic regimes of sediment dynamics. The corollary of this study is that the inherently large uncertainties, and the fallacy of representing soil loss by central tendencies, must be conceded in soil loss assessments. Acknowledgement: This research was supported by a grant (16AWMP-B083066-03) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government, and by the faculty research fund of Sejong University in 2016.

  8. A stochastic perturbation method to generate inflow turbulence in large-eddy simulation models: Application to neutrally stratified atmospheric boundary layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muñoz-Esparza, D.; Kosović, B.; Beeck, J. van

    2015-03-15

    Despite the variety of existing methods, efficient generation of turbulent inflow conditions for large-eddy simulation (LES) models remains a challenging and active research area. Herein, we extend our previous research on the cell perturbation method, which uses a novel stochastic approach based upon finite-amplitude perturbations of the potential temperature field applied within a region near the inflow boundaries of the LES domain [Muñoz-Esparza et al., "Bridging the transition from mesoscale to microscale turbulence in numerical weather prediction models," Boundary-Layer Meteorol., 153, 409-440 (2014)]. The objective was twofold: (i) to identify the governing parameters of the method and their optimum values and (ii) to generalize the results over a broad range of atmospheric large-scale forcing conditions, U_g = 5-25 m s^-1, where U_g is the geostrophic wind. We identified the perturbation Eckert number, Ec = U_g^2/(ρ c_p θ̃_pm), to be the parameter governing the flow transition to turbulence in neutrally stratified boundary layers. Here, θ̃_pm is the maximum perturbation amplitude applied, c_p is the specific heat capacity at constant pressure, and ρ is the density. The optimum Eckert number, Ec ≈ 0.16, was found for nonlinear perturbations, which instigate the formation of hairpin-like vortices that most rapidly transition to a developed turbulent state. Larger Ec numbers (linear small-amplitude perturbations) result in streaky structures requiring larger fetches to reach the quasi-equilibrium solution, while smaller Ec numbers lead to buoyancy-dominated perturbations in which hairpin-like vortices have difficulty emerging. Cell perturbations with wavelengths within the inertial range of three-dimensional turbulence achieved identical quasi-equilibrium values of resolved turbulent kinetic energy, q, and Reynolds shear stress. In contrast, large-scale perturbations acting at the production range exhibited reduced levels of Reynolds shear stress, due to the formation of coherent streamwise structures, while q was maintained, requiring larger fetches for the turbulent solution to stabilize. Additionally, the cell perturbation method was compared to a synthetic turbulence generator. The proposed stochastic approach provided at least the same efficiency in developing realistic turbulence, while accelerating the formation of the large scales associated with production of turbulent kinetic energy. Also, it is computationally inexpensive and does not require any turbulence information.

  9. Large-Scale Gene Relocations following an Ancient Genome Triplication Associated with the Diversification of Core Eudicots.

    PubMed

    Wang, Yupeng; Ficklin, Stephen P; Wang, Xiyin; Feltus, F Alex; Paterson, Andrew H

    2016-01-01

    Different modes of gene duplication, including whole-genome duplication (WGD) and tandem, proximal and dispersed duplications, are widespread in angiosperm genomes. Small-scale, stochastic gene relocations and transposed gene duplications are widely accepted to be the primary mechanisms for the creation of dispersed duplicates. However, here we show that most surviving ancient dispersed duplicates in core eudicots originated from large-scale gene relocations within a narrow window of time following a genome triplication (γ) event that occurred in the stem lineage of core eudicots. We refer to these surviving ancient dispersed duplicates as relocated γ duplicates. In Arabidopsis thaliana, relocated γ, WGD and single-gene duplicates have distinct features with regard to gene functions, essentiality, and protein interactions. Relative to γ duplicates, relocated γ duplicates have higher non-synonymous substitution rates, but comparable levels of expression and regulation divergence. Thus, relocated γ duplicates should be distinguished from WGD and single-gene duplicates in evolutionary investigations. Our results suggest large-scale gene relocations following the γ event were associated with the diversification of core eudicots.

  10. Large-Scale Gene Relocations following an Ancient Genome Triplication Associated with the Diversification of Core Eudicots

    PubMed Central

    Wang, Yupeng; Ficklin, Stephen P.; Wang, Xiyin; Feltus, F. Alex; Paterson, Andrew H.

    2016-01-01

    Different modes of gene duplication, including whole-genome duplication (WGD) and tandem, proximal and dispersed duplications, are widespread in angiosperm genomes. Small-scale, stochastic gene relocations and transposed gene duplications are widely accepted to be the primary mechanisms for the creation of dispersed duplicates. However, here we show that most surviving ancient dispersed duplicates in core eudicots originated from large-scale gene relocations within a narrow window of time following a genome triplication (γ) event that occurred in the stem lineage of core eudicots. We refer to these surviving ancient dispersed duplicates as relocated γ duplicates. In Arabidopsis thaliana, relocated γ, WGD and single-gene duplicates have distinct features with regard to gene functions, essentiality, and protein interactions. Relative to γ duplicates, relocated γ duplicates have higher non-synonymous substitution rates, but comparable levels of expression and regulation divergence. Thus, relocated γ duplicates should be distinguished from WGD and single-gene duplicates in evolutionary investigations. Our results suggest large-scale gene relocations following the γ event were associated with the diversification of core eudicots. PMID:27195960

  11. Rare events in finite and infinite dimensions

    NASA Astrophysics Data System (ADS)

    Reznikoff, Maria G.

    Thermal noise introduces stochasticity into deterministic equations and makes possible events which are never seen in the zero-temperature setting. The driving force behind the thesis work is a desire to bring analysis and probability to bear on a class of relevant and intriguing physical problems, and in so doing, to allow applications to drive the development of new mathematical theory. The unifying theme is the study of rare events under the influence of small, random perturbations, and the manifold mathematical problems which ensue. In the first part, we apply large deviation theory and prefactor estimates to a coherent rotation micromagnetic model in order to analyze thermally activated magnetic switching. We consider recent physical experiments and the mathematical questions "asked" by them. A stochastic resonance type phenomenon is discovered, leading to the definition of finite temperature astroids. Non-Arrhenius behavior is discussed. The analysis is extended to ramped astroids. In addition, we discover that for low damping and ultrashort pulses, deterministic effects can override thermal effects, in accord with very recent ultrashort pulse experiments. Even more interesting, perhaps, is the study of large deviations in the infinite-dimensional context, i.e., in spatially extended systems. Inspired by recent numerical investigations, we study the stochastically perturbed Allen-Cahn and Cahn-Hilliard equations. For the Allen-Cahn equation, we study the action minimization problem (a deterministic variational problem) and prove the action scaling in four parameter regimes, via upper and lower bounds. The sharp interface limit is studied. We formally derive a reduced action functional which lends insight into the connection between action minimization and curvature flow. For the Cahn-Hilliard equation, we prove upper and lower bounds for the scaling of the energy barrier in the nucleation and growth regime. 
Finally, we consider rare events in large or infinite domains, in one spatial dimension. We introduce a natural reference measure through which to analyze the invariant measure of stochastically perturbed, nonlinear partial differential equations. Also, for noisy reaction diffusion equations with an asymmetric potential, we discover how to rescale space and time in order to map the dynamics in the zero temperature limit to the Poisson Model, a simple version of the Johnson-Mehl-Avrami-Kolmogorov model for nucleation and growth.
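
    The large-deviation scaling underlying this line of work can be stated compactly. This is the standard Freidlin-Wentzell/Kramers form, included here as background in a common convention; it is not the thesis's own notation, and the prefactor C is left unspecified.

```latex
% Freidlin--Wentzell scaling for  dX = b(X)\,dt + \sqrt{2\varepsilon}\,dW :
\mathbb{P}\bigl(X \approx \varphi \text{ on } [0,T]\bigr)
  \asymp \exp\!\Bigl(-\tfrac{1}{\varepsilon}\, S_T[\varphi]\Bigr),
\qquad
S_T[\varphi] = \frac{1}{4}\int_0^T \bigl\|\dot\varphi(t) - b(\varphi(t))\bigr\|^2 \, dt .

% For a gradient drift b = -\nabla V, the minimal action to escape a well of
% depth \Delta V equals \Delta V, giving the Arrhenius law for the mean exit
% time, sharpened by an Eyring--Kramers prefactor C:
\mathbb{E}[\tau] \sim C \exp\!\bigl(\Delta V / \varepsilon\bigr).
```

    The "non-Arrhenius behavior" and action-scaling results mentioned above concern regimes where this exponential law is modified or where the minimizing path structure changes.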

  12. On the reach of perturbative methods for dark matter density fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldauf, Tobias; Zaldarriaga, Matias; Schaan, Emmanuel, E-mail: baldauf@ias.edu, E-mail: eschaan@astro.princeton.edu, E-mail: matiasz@ias.edu

    We study the mapping from Lagrangian to Eulerian space in the context of the Effective Field Theory (EFT) of Large Scale Structure. We compute Lagrangian displacements with Lagrangian Perturbation Theory (LPT) and perform the full non-perturbative transformation from displacement to density. When expanded up to a given order, this transformation reproduces the standard Eulerian Perturbation Theory (SPT) at the same order. However, the full transformation from displacement to density also includes higher order terms. These terms explicitly resum long wavelength motions, thus making the resulting density field better correlated with the true non-linear density field. As a result, the regime of validity of this approach is expected to extend that of the Eulerian EFT, and match that of the IR-resummed Eulerian EFT. This approach thus effectively enables a test of the IR-resummed EFT at the field level. We estimate the size of stochastic, non-perturbative contributions to the matter density power spectrum. We find that in our highest order calculation, at redshift z = 0 the power spectrum of the density field is reproduced with an accuracy of 1% (10%) up to k = 0.25 h Mpc^-1 (k = 0.46 h Mpc^-1). We believe that the dominant source of the remaining error is the stochastic contribution. Unfortunately, on these scales the stochastic term does not yet scale as k^4 as it does in the very low k regime. Thus, modeling this contribution might be challenging.

  13. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). This stochastic nature can arise from multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost of computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques. 
The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
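
    The multilevel Monte Carlo idea the tool builds on can be sketched on a toy problem. The telescoping estimator and the coupled coarse/fine sampler below are the generic MLMC construction, illustrated here with Euler-Maruyama paths of geometric Brownian motion as an assumed stand-in; the sample counts and SDE parameters are illustrative, not the pore-scale solvers of the record.

```python
import numpy as np

def mlmc_estimate(sampler, L, n_per_level, rng):
    """Multilevel Monte Carlo estimator of E[P_L] via the telescoping sum
    E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}].
    sampler(l, n, rng) must return n coupled samples of P_l - P_{l-1}
    (and of P_0 when l == 0); the coupling keeps correction variances small,
    so the fine levels need very few samples."""
    return sum(np.mean(sampler(l, n_per_level[l], rng)) for l in range(L + 1))

def gbm_level_sampler(l, n, rng, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Coupled Euler-Maruyama samples of X_T for dX = mu*X dt + sigma*X dW,
    with 2**l time steps on level l. Returns P_l - P_{l-1} (P_0 on level 0);
    coarse and fine paths share the same Brownian increments."""
    nf = 2 ** l
    dt = T / nf
    dW = rng.normal(scale=np.sqrt(dt), size=(n, nf))
    xf = np.full(n, x0)
    for k in range(nf):
        xf = xf * (1.0 + mu * dt + sigma * dW[:, k])
    if l == 0:
        return xf
    dWc = dW[:, 0::2] + dW[:, 1::2]    # pairwise-summed increments
    dtc = 2.0 * dt
    xc = np.full(n, x0)
    for k in range(nf // 2):
        xc = xc * (1.0 + mu * dtc + sigma * dWc[:, k])
    return xf - xc
```

    Because the correction terms shrink with level, most of the work is spent on the cheap coarse level, which is the source of the cost reduction claimed in the record.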

  14. Stochastic Convection Parameterizations: The Eddy-Diffusivity/Mass-Flux (EDMF) Approach (Invited)

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2013-12-01

    In this presentation it is argued that moist convection parameterizations need to be stochastic in order to be realistic - even in deterministic atmospheric prediction systems. A new unified convection and boundary layer parameterization (EDMF) that optimally combines the Eddy-Diffusivity (ED) approach for smaller-scale boundary layer mixing with the Mass-Flux (MF) approach for larger-scale plumes is discussed. It is argued that for realistic simulations stochastic methods have to be employed in this new unified EDMF. Positive results from the implementation of the EDMF approach in atmospheric models are presented.

  15. A damage analysis for brittle materials using stochastic micro-structural information

    NASA Astrophysics Data System (ADS)

    Lin, Shih-Po; Chen, Jiun-Shyan; Liang, Shixue

    2016-03-01

    In this work, a micro-crack informed stochastic damage analysis is performed to consider the failures of material with stochastic microstructure. The derivation of the damage evolution law is based on the Helmholtz free energy equivalence between cracked microstructure and homogenized continuum. The damage model is constructed under the stochastic representative volume element (SRVE) framework. The characteristics of SRVE used in the construction of the stochastic damage model have been investigated based on the principle of the minimum potential energy. The mesh dependency issue has been addressed by introducing a scaling law into the damage evolution equation. The proposed methods are then validated through the comparison between numerical simulations and experimental observations of a high strength concrete. It is observed that the standard deviation of porosity in the microstructures has stronger effect on the damage states and the peak stresses than its effect on the Young's and shear moduli in the macro-scale responses.

  16. Identification of hydraulic conductivity structure in sand and gravel aquifers: Cape Cod data set

    USGS Publications Warehouse

    Eggleston, J.R.; Rojstaczer, S.A.; Peirce, J.J.

    1996-01-01

    This study evaluates commonly used geostatistical methods to assess reproduction of hydraulic conductivity (K) structure and sensitivity under limiting amounts of data. Extensive conductivity measurements from the Cape Cod sand and gravel aquifer are used to evaluate two geostatistical estimation methods, conditional mean as an estimate and ordinary kriging, and two stochastic simulation methods, simulated annealing and sequential Gaussian simulation. Our results indicate that for relatively homogeneous sand and gravel aquifers such as the Cape Cod aquifer, neither estimation methods nor stochastic simulation methods give highly accurate point predictions of hydraulic conductivity, despite the high density of collected data. Although the stochastic simulation methods yielded higher errors than the estimation methods, they yielded better reproduction of the measured ln(K) distribution and better reproduction of local contrasts in ln(K). The inability of kriging to reproduce high ln(K) values, as reaffirmed by this study, provides a strong motivation for choosing stochastic simulation methods to generate conductivity fields when performing fine-scale contaminant transport modeling. Results also indicate that estimation error is relatively insensitive to the number of hydraulic conductivity measurements so long as more than a threshold number of data are used to condition the realizations. This threshold occurs for the Cape Cod site when there are approximately three conductivity measurements per integral volume. The lack of improvement with additional data suggests that although fine-scale hydraulic conductivity structure is evident in the variogram, it is not accurately reproduced by geostatistical estimation methods. If the Cape Cod aquifer's spatial conductivity characteristics are indicative of other sand and gravel deposits, then the results on predictive error versus data collection obtained here have significant practical consequences for site characterization. Heavily sampled sand and gravel aquifers, such as Cape Cod and Borden, may have large amounts of redundant data, while in more common real-world settings, our results suggest that denser data collection will likely improve understanding of permeability structure.
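
    Ordinary kriging, one of the two estimation methods compared, can be sketched in one dimension. This is a minimal sketch under stated assumptions: the exponential covariance model and its sill/range parameters are illustrative choices, not the fitted Cape Cod variogram.

```python
import numpy as np

def ordinary_kriging(x_obs, z_obs, x_pred, sill=1.0, corr_len=10.0):
    """Ordinary kriging in 1-D with an exponential covariance model
    C(h) = sill * exp(-h / corr_len).

    Solves the kriging system augmented with a Lagrange multiplier that
    enforces the unbiasedness constraint sum(weights) = 1."""
    n = len(x_obs)
    h = np.abs(x_obs[:, None] - x_obs[None, :])
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = sill * np.exp(-h / corr_len)   # data-data covariances
    A[n, n] = 0.0                              # multiplier block
    preds = np.empty(len(x_pred))
    for i, xp in enumerate(x_pred):
        b = np.ones(n + 1)
        b[:n] = sill * np.exp(-np.abs(x_obs - xp) / corr_len)
        w = np.linalg.solve(A, b)
        preds[i] = w[:n] @ z_obs               # weighted data average
    return preds
```

    Because kriging is a weighted average of the data, it smooths extremes, which is the behavior the record cites as the reason to prefer stochastic simulation for reproducing high ln(K) values.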

  17. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size.

    PubMed

    Schwalger, Tilo; Deger, Moritz; Gerstner, Wulfram

    2017-04-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.

  18. Stochastic Flux-Freezing in MHD Turbulence and Reconnection in the Heliosheath

    NASA Astrophysics Data System (ADS)

    Eyink, G. L.; Lalescu, C.; Vishniac, E.

    2012-12-01

    Fast reconnection of the sectored magnetic field in the heliosheath created by flapping of the heliospheric current sheet has been conjectured to accelerate anomalous cosmic rays and to create other signatures observed by the Voyager probes. The reconnecting flux structures could have sizes up to ˜100 AU, much larger than the ion cyclotron radius ˜10^3 km. Hence MHD should be valid at those scales. To account for rapid reconnection of such large-scale structures, we note that the high Reynolds numbers in the heliosheath for motions perpendicular to the magnetic field (Re ˜10^{14}) suggest transition to turbulence. The Lazarian-Vishniac theory of turbulent reconnection can account for the fast rates, but it implies a puzzling breakdown of magnetic flux-freezing in high-conductivity MHD plasmas. We address this paradox with a novel stochastic formulation of flux-freezing for resistive MHD and a numerical Lagrangian study with a spacetime database of MHD turbulence. We report the first observation of Richardson diffusion in MHD turbulence, which leads to "spontaneous stochasticity" of the Lagrangian trajectories and a violation of standard flux-freezing by many orders of magnitude. The work supports a prediction by Lazarian-Opher (2009) of extended thick reconnection zones within the heliosheath, perhaps up to an AU across, although the microscale reconnection events within these zones would have thickness of order the ion cyclotron radius and be described by kinetic Vlasov theory.

  19. Stochastic Flux-Freezing in MHD Turbulence and Reconnection in the Heliosheath (Invited)

    NASA Astrophysics Data System (ADS)

    Eyink, G. L.; Lalescu, C. C.; Vishniac, E. T.

    2013-12-01

    Fast reconnection of the sectored magnetic field in the heliosheath created by flapping of the heliospheric current sheet has been conjectured to accelerate anomalous cosmic rays and to create other signatures observed by the Voyager probes. The reconnecting flux structures could have sizes up to ˜100 AU, much larger than the ion cyclotron radius ˜10^3 km. Hence MHD should be valid at those scales. To account for rapid reconnection of such large-scale structures, we note that the high Reynolds numbers in the heliosheath for motions perpendicular to the magnetic field (Re ˜10^{14}) suggest transition to turbulence. The Lazarian-Vishniac theory of turbulent reconnection can account for the fast rates, but it implies a puzzling breakdown of magnetic flux-freezing in high-conductivity MHD plasmas. We address this paradox with a novel stochastic formulation of flux-freezing for resistive MHD and a numerical Lagrangian study with a spacetime database of MHD turbulence. We report the first observation of Richardson diffusion in MHD turbulence, which leads to 'spontaneous stochasticity' of the Lagrangian trajectories and a violation of standard flux-freezing by many orders of magnitude. The work supports a prediction by Lazarian-Opher (2009) of extended thick reconnection zones within the heliosheath, perhaps up to an AU across, although the microscale reconnection events within these zones would have thickness of order the ion cyclotron radius and be described by kinetic Vlasov theory.

  20. Anisotropic Stochastic Vortex Structure Method for Simulating Particle Collision in Turbulent Shear Flows

    NASA Astrophysics Data System (ADS)

    Dizaji, Farzad; Marshall, Jeffrey; Grant, John; Jin, Xing

    2017-11-01

    Accounting for the effect of subgrid-scale turbulence on interacting particles remains a challenge when using Reynolds-Averaged Navier-Stokes (RANS) or Large Eddy Simulation (LES) approaches for simulation of turbulent particulate flows. The standard stochastic Lagrangian method for introducing turbulence into particulate flow computations is not effective when the particles interact via collisions, contact electrification, etc., since this method is not intended to accurately model relative motion between particles. We have recently developed the stochastic vortex structure (SVS) method and demonstrated its use for accurate simulation of particle collision in homogeneous turbulence; the current work presents an extension of the SVS method to turbulent shear flows. The SVS method simulates subgrid-scale turbulence using a set of randomly-positioned, finite-length vortices to generate a synthetic fluctuating velocity field. It has been shown to accurately reproduce the turbulence inertial-range spectrum and the probability density functions for the velocity and acceleration fields. In order to extend SVS to turbulent shear flows, a new inversion method has been developed to orient the vortices in order to generate a specified Reynolds stress field. The extended SVS method is validated in the present study with comparison to direct numerical simulations for a planar turbulent jet flow. This research was supported by the U.S. National Science Foundation under Grant CBET-1332472.
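
    As a rough illustration of the SVS idea (not the authors' implementation), a synthetic fluctuating velocity field can be built by superposing randomly placed vortices with zero-mean circulations. The sketch below is a minimal 2D analogue with Gaussian-core vortices; all function names and parameter values are illustrative assumptions.

```python
import math
import random

def make_vortices(n, box=1.0, gamma_rms=0.1, core=0.05, seed=0):
    """Randomly place n Gaussian-core vortices in a box.

    Positions, zero-mean circulations, and core sizes are the stochastic
    ingredients; a full SVS implementation would also assign finite lengths
    and 3D orientations chosen to match a target spectrum.
    """
    rng = random.Random(seed)
    return [(rng.uniform(0, box), rng.uniform(0, box),
             rng.gauss(0.0, gamma_rms), core) for _ in range(n)]

def velocity(vortices, x, y):
    """Synthetic fluctuating velocity at (x, y): superpose all vortex fields."""
    u = v = 0.0
    for (xv, yv, gamma, core) in vortices:
        dx, dy = x - xv, y - yv
        r2 = dx * dx + dy * dy + 1e-12  # avoid division by zero at the centre
        # Gaussian-core (Lamb-Oseen-like) azimuthal velocity kernel
        k = gamma * (1.0 - math.exp(-r2 / core**2)) / (2.0 * math.pi * r2)
        u += -k * dy
        v += k * dx
    return u, v
```

For a fixed vortex set the field is deterministic, so the randomness sits entirely in the vortex sampling, mirroring how SVS separates the synthetic turbulence from the resolved flow.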

  1. The predictability of consumer visitation patterns

    NASA Astrophysics Data System (ADS)

    Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban

    2013-04-01

    We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population.
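
    A first-order Markov next-location predictor of the kind evaluated above, blended with population-level transition probabilities, can be sketched as follows. The toy visit data and the smoothing weight `alpha` are illustrative choices, not values from the study.

```python
from collections import defaultdict

def fit_transitions(visits):
    """Count first-order transitions between successive merchant visits."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(visits, visits[1:]):
        counts[a][b] += 1
    return counts

def predict_next(personal, population, current, alpha=0.2):
    """Blend a shopper's own transition frequencies with population-level
    ones; alpha is the weight given to the population prior."""
    scores = defaultdict(float)
    for counts, weight in ((personal, 1.0 - alpha), (population, alpha)):
        row = counts.get(current, {})
        total = sum(row.values())
        for merchant, c in row.items():
            scores[merchant] += weight * c / total
    return max(scores, key=scores.get) if scores else None
```

With `alpha = 0` this reduces to the purely individual Markov model; raising `alpha` injects the population-level transitions that the study found improve accuracy in many cases.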

  2. The predictability of consumer visitation patterns

    PubMed Central

    Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban

    2013-01-01

    We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population. PMID:23598917

  3. Cosmic microwave background bispectrum from primordial magnetic fields on large angular scales.

    PubMed

    Seshadri, T R; Subramanian, Kandaswamy

    2009-08-21

    Primordial magnetic fields lead to non-Gaussian signals in the cosmic microwave background (CMB) even at the lowest order, as magnetic stresses and the temperature anisotropy they induce depend quadratically on the magnetic field. In contrast, CMB non-Gaussianity due to inflationary scalar perturbations arises only as a higher-order effect. We propose a novel probe of stochastic primordial magnetic fields that exploits the characteristic CMB non-Gaussianity that they induce. We compute the CMB bispectrum (b(l1l2l3)) induced by such fields on large angular scales. We find a typical value of l1(l1 + 1)l3(l3 + 1)b(l1l2l3) approximately 10(-22), for magnetic fields of strength B0 approximately 3 nG and with a nearly scale invariant magnetic spectrum. Observational limits on the bispectrum allow us to set upper limits on B0 approximately 35 nG.

  4. Stochastic Fermi Energization of Coronal Plasma during Explosive Magnetic Energy Release

    NASA Astrophysics Data System (ADS)

    Pisokas, Theophilos; Vlahos, Loukas; Isliker, Heinz; Tsiolis, Vassilis; Anastasiadis, Anastasios

    2017-02-01

    The aim of this study is to analyze the interaction of charged particles (ions and electrons) with randomly formed particle scatterers (e.g., large-scale local “magnetic fluctuations” or “coherent magnetic irregularities”) using the setup proposed initially by Fermi. These scatterers are formed by the explosive magnetic energy release and propagate with the Alfvén speed along the irregular magnetic fields. They are large-scale local fluctuations (δB/B ≈ 1) randomly distributed inside the unstable magnetic topology and will here be called Alfvénic Scatterers (AS). We constructed a 3D grid on which a small fraction of randomly chosen grid points are acting as AS. In particular, we study how a large number of test particles evolves inside a collection of AS, analyzing the evolution of their energy distribution and their escape-time distribution. We use a well-established method to estimate the transport coefficients directly from the trajectories of the particles. Using the estimated transport coefficients and solving the Fokker-Planck equation numerically, we can recover the energy distribution of the particles. We have shown that the stochastic Fermi energization of mildly relativistic and relativistic plasma can heat and accelerate the tail of the ambient particle distribution as predicted by Parker & Tidman and Ramaty. The temperature of the hot plasma and the tail of the energetic particles depend on the mean free path (λ_sc) of the particles between the scatterers inside the energization volume.
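
    The energization mechanism can be caricatured by a simple second-order Fermi Monte Carlo, in which head-on encounters with moving scatterers (energy gain) are slightly more probable than overtaking ones (energy loss). This is only a toy model of the physics, not the paper's 3D grid setup; the rates and counts below are arbitrary.

```python
import random

def fermi_energize(n_particles=2000, n_scatter=200, va_over_v=0.05, seed=1):
    """Second-order Fermi toy model.

    Each encounter with a scatterer moving at the Alfven speed multiplies the
    particle energy by (1 +/- 2*va/v); head-on collisions are more frequent
    than trailing ones by ~va/v, giving net stochastic heating and a tail.
    """
    rng = random.Random(seed)
    energies = []
    for _ in range(n_particles):
        e = 1.0
        for _ in range(n_scatter):
            head_on = rng.random() < 0.5 * (1.0 + va_over_v)
            e *= (1.0 + 2.0 * va_over_v) if head_on else (1.0 - 2.0 * va_over_v)
        energies.append(e)
    return energies
```

The mean energy grows roughly like (1 + 2(va/v)^2) per scattering, while the multiplicative kicks build the energetic tail on top of the heated bulk.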

  5. Long-term influence of asteroids on planet longitudes and chaotic dynamics of the solar system

    NASA Astrophysics Data System (ADS)

    Woillez, E.; Bouchet, F.

    2017-11-01

    Over timescales much longer than an orbital period, the solar system exhibits large-scale chaotic behavior and can thus be viewed as a stochastic dynamical system. The aim of the present paper is to compare different sources of stochasticity in the solar system. More precisely, we studied the importance of the long-term influence of asteroids on the chaotic dynamics of the solar system. We show that the effect of asteroids on planets is similar to a white noise process, when those effects are considered on a timescale much larger than the correlation time τ_ϕ ≃ 10^4 yr of asteroid trajectories. We computed the timescale τ_e after which the effects of the stochastic evolution of the asteroids lead to a loss of information for the initial conditions of the perturbed Laplace-Lagrange secular dynamics. The order of magnitude of this timescale is precisely determined by theoretical argument, and we find that τ_e ≃ 10^4 Myr. Although comparable to the full main-sequence lifetime of the Sun, this timescale is considerably longer than the Lyapunov time τ_I ≃ 10 Myr of the solar system without asteroids. This shows that the external sources of chaos arise as a small perturbation in the stochastic secular behavior of the solar system, which is rather due to intrinsic chaos.

  6. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2016-04-01

    In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
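
    A minimal sketch of the kind of perturbation described: zero-mean red (AR(1)) noise with a prescribed amplitude and decorrelation time, the two knobs varied in the sensitivity suite, added to the resolved temperature tendency. Function and parameter names are illustrative, not taken from the model code.

```python
import math
import random

def ar1_noise_series(n_steps, dt, amplitude, decorrelation_time, seed=0):
    """Zero-mean AR(1) (red) noise with stationary standard deviation equal
    to `amplitude` and e-folding memory `decorrelation_time`."""
    rng = random.Random(seed)
    phi = math.exp(-dt / decorrelation_time)
    sigma_innov = amplitude * math.sqrt(1.0 - phi * phi)
    x, out = 0.0, []
    for _ in range(n_steps):
        x = phi * x + rng.gauss(0.0, sigma_innov)
        out.append(x)
    return out

def step_temperature(T, deterministic_tendency, noise, dt):
    """One forward-Euler update with the stochastic term added to the tendency."""
    return T + dt * (deterministic_tendency + noise)
```

Because the noise is zero-mean, any systematic change in the simulated mean state (as reported above) must come from nonlinear rectification, not from a direct forcing bias.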

  7. What Shapes the Phylogenetic Structure of Anuran Communities in a Seasonal Environment? The Influence of Determinism at Regional Scale to Stochasticity or Antagonistic Forces at Local Scale

    PubMed Central

    Ferreira, Vanda Lúcia; Strüssmann, Christine; Tomas, Walfrido Moraes

    2015-01-01

    Ecological communities are structured by both deterministic and stochastic processes. We investigated phylogenetic patterns at regional and local scales to understand the influences of seasonal processes in shaping the structure of anuran communities in the southern Pantanal wetland, Brazil. We assessed the phylogenetic structure at different scales, using the Net Relatedness Index (NRI), the Nearest Taxon Index (NTI), and phylobetadiversity indexes, as well as a permutation test, to evaluate the effect of seasonality. The anuran community was represented by a non-random set of species with a high degree of phylogenetic relatedness at the regional scale. However, at the local scale the phylogenetic structure of the community was weakly related with the seasonality of the system, indicating that oriented stochastic processes (e.g. colonization, extinction and ecological drift) and/or antagonistic forces drive the structure of such communities in the southern Pantanal. PMID:26102202

  8. What Shapes the Phylogenetic Structure of Anuran Communities in a Seasonal Environment? The Influence of Determinism at Regional Scale to Stochasticity or Antagonistic Forces at Local Scale.

    PubMed

    Martins, Clarissa de Araújo; Roque, Fabio de Oliveira; Santos, Bráulio A; Ferreira, Vanda Lúcia; Strüssmann, Christine; Tomas, Walfrido Moraes

    2015-01-01

    Ecological communities are structured by both deterministic and stochastic processes. We investigated phylogenetic patterns at regional and local scales to understand the influences of seasonal processes in shaping the structure of anuran communities in the southern Pantanal wetland, Brazil. We assessed the phylogenetic structure at different scales, using the Net Relatedness Index (NRI), the Nearest Taxon Index (NTI), and phylobetadiversity indexes, as well as a permutation test, to evaluate the effect of seasonality. The anuran community was represented by a non-random set of species with a high degree of phylogenetic relatedness at the regional scale. However, at the local scale the phylogenetic structure of the community was weakly related with the seasonality of the system, indicating that oriented stochastic processes (e.g. colonization, extinction and ecological drift) and/or antagonistic forces drive the structure of such communities in the southern Pantanal.

  9. Investment in different sized SMRs: Economic evaluation of stochastic scenarios by INCAS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barenghi, S.; Boarin, S.; Ricotti, M. E.

    2012-07-01

    Small Modular LWR concepts are being developed and proposed to investors worldwide. They capitalize on the operating track record of GEN II LWRs, while introducing innovative design enhancements allowed by smaller size, and additional benefits from the higher degree of modularization and from deployment of multiple units on the same site (i.e., the 'Economy of Multiple' paradigm). Nevertheless, Small Modular Reactors pay for a dis-economy of scale that represents a relevant penalty on a capital-intensive investment. Investors in the nuclear power generation industry face a very high financial risk, due to high capital commitment and exceptionally long pay-back time. Investment risk arises from uncertainty that affects scenario conditions over such a long time horizon. Risk aversion is increased by current adverse conditions of financial markets and general economic downturn, as is the case nowadays. This work investigates both the investment profitability and risk of alternative investments in a single Large Reactor (LR) or in multiple SMRs of different sizes, drawing information from the project's Internal Rate of Return (IRR) stochastic distribution; it considers multiple SMR deployment on a single site with total power installed equivalent to a single LR. Uncertain scenario conditions and stochastic input assumptions are included in the analysis, representing investment uncertainty and risk. Results show that, despite the combination of a much larger number of stochastic variables in SMR fleets, uncertainty of project profitability is not increased, as compared to LR: SMRs have features able to smooth IRR variance and control investment risk. Despite the dis-economy of scale, SMRs represent a limited capital commitment and a scalable investment option that meets investors' interest, even in developed and mature markets that are the traditional marketplace for LRs. (authors)
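
    The kind of stochastic profitability analysis described can be caricatured by a Monte Carlo over uncertain cash flows, reading off the distribution of the Internal Rate of Return. All figures and ranges below are illustrative stand-ins, not INCAS inputs.

```python
import random

def npv(rate, cashflows):
    """Net present value of cashflows[t] received at year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.9, hi=1.0, tol=1e-6):
    """IRR by bisection on NPV; assumes one sign change in the sequence."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def irr_distribution(n_trials=500, seed=0):
    """Stochastic scenarios: overnight cost and yearly net revenue drawn at
    random (illustrative ranges only)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_trials):
        capex = rng.uniform(0.8, 1.2) * 100.0   # construction outlay
        revenue = rng.uniform(0.9, 1.1) * 12.0  # net yearly cash inflow
        cashflows = [-capex] + [revenue] * 30   # 30-year operating life
        out.append(irr(cashflows))
    return out
```

The spread of the resulting IRR sample is the quantity compared across reactor sizes in the study: a narrower distribution at similar mean means lower investment risk.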

  10. Oxygen Distributions-Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network.

    PubMed

    Lagerlöf, Jakob H; Bernhardt, Peter

    2016-01-01

    To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour. These will be used to evaluate two different methods for computing oxygen distribution. A vessel tree structure, and an associated tumour of 127 cm^3, were generated using a stochastic method and Bresenham's line algorithm to develop trees on two different scales and fusing them together. The vessel dimensions were adjusted through convolution and thresholding and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared, to evaluate the methods. The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD≈0.02) than the distributions of different samples using CTM (0.001< RMSD<0.01). The deviations of ITM from CTM increase with lower oxygen values, resulting in ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it leads to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, and evaluation should therefore be made using high resolution and the CTM, applied to the entire tumour.
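
    The diffusion-consumption balance at the heart of the oxygen computation can be sketched in one dimension with Michaelis-Menten kinetics. This toy finite-difference relaxation is only an illustration: the paper uses a 3D Green's function approach, and every parameter value here is arbitrary.

```python
def oxygen_profile(n=50, dx=1.0, diff=2.0, vmax=0.5, km=1.0,
                   p_vessel=100.0, n_iter=20000):
    """1D diffusion-consumption sketch.

    A vessel at x=0 holds the oxygen level fixed; tissue consumes oxygen at
    the Michaelis-Menten rate vmax*P/(km+P); the far boundary is zero-flux.
    Explicit (Jacobi-style) iteration relaxes toward the steady state.
    """
    P = [p_vessel] + [0.0] * (n - 1)
    for _ in range(n_iter):
        new = P[:]
        for i in range(1, n - 1):
            lap = (P[i - 1] - 2 * P[i] + P[i + 1]) / dx**2
            new[i] = P[i] + 0.1 * (diff * lap - vmax * P[i] / (km + P[i]))
        new[-1] = new[-2]   # zero-flux outer boundary
        new[0] = p_vessel   # vessel keeps oxygen fixed
        P = new
    return P
```

The profile decays monotonically away from the vessel and reaches near-zero (hypoxic) values beyond a finite penetration depth, which is why neglecting the contribution of neighbouring vessel trees biases the computed hypoxic fraction.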

  11. Oxygen Distributions—Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network

    PubMed Central

    Bernhardt, Peter

    2016-01-01

    Purpose To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour. These will be used to evaluate two different methods for computing oxygen distribution. Methods A vessel tree structure, and an associated tumour of 127 cm^3, were generated using a stochastic method and Bresenham’s line algorithm to develop trees on two different scales and fusing them together. The vessel dimensions were adjusted through convolution and thresholding and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green’s function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared, to evaluate the methods. Results The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD≈0.02) than the distributions of different samples using CTM (0.001< RMSD<0.01). The deviations of ITM from CTM increase with lower oxygen values, resulting in ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. Conclusions The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it leads to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, and evaluation should therefore be made using high resolution and the CTM, applied to the entire tumour. PMID:27861529

  12. Nonlinear Image Denoising Methodologies

    DTIC Science & Technology

    2002-05-01

    In this thesis, our approach to denoising is first based on a controlled nonlinear stochastic random walk to achieve a scale-space analysis (as in ... stochastic treatment or interpretation of the diffusion. In addition, unless a specific stopping time is known to be adequate, the resulting evolution ...

  13. Spatially explicit and stochastic simulation of forest landscape fire disturbance and succession

    Treesearch

    Hong S. He; David J. Mladenoff

    1999-01-01

    Understanding disturbance and recovery of forest landscapes is a challenge because of complex interactions over a range of temporal and spatial scales. Landscape simulation models offer an approach to studying such systems at broad scales. Fire can be simulated spatially using mechanistic or stochastic approaches. We describe the fire module in a spatially explicit,...
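
    Stochastic fire spread of the general kind referenced can be sketched as a lattice model in which fire jumps to neighbouring cells with a fixed per-step probability. This is a generic illustration, not the fire module described in the paper; grid size and spread probability are arbitrary.

```python
import random

def spread_fire(grid_size=20, p_spread=0.4, seed=0):
    """Minimal stochastic fire-spread sketch on a square lattice.

    Each burning cell ignites each unburned 4-neighbour with probability
    p_spread per step; the fire front advances until no new cells ignite.
    Returns the set of burned cells.
    """
    rng = random.Random(seed)
    burning = {(grid_size // 2, grid_size // 2)}  # ignition at the centre
    burned = set(burning)
    while burning:
        nxt = set()
        for (i, j) in burning:
            for (di, dj) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < grid_size and 0 <= nj < grid_size
                        and (ni, nj) not in burned
                        and rng.random() < p_spread):
                    nxt.add((ni, nj))
                    burned.add((ni, nj))
        burning = nxt
    return burned
```

Varying `p_spread` (a stand-in for fuel and weather) moves the model between small self-extinguishing burns and landscape-spanning fires, the qualitative behaviour a spatial stochastic fire module must capture.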

  14. Hybrid Markov-mass action law model for cell activation by rare binding events: Application to calcium induced vesicular release at neuronal synapses.

    PubMed

    Guerrier, Claire; Holcman, David

    2016-10-18

    Binding of molecules, ions or proteins to small target sites is a generic step of cell activation. This process relies on rare stochastic events where a particle located in a large bulk has to find small and often hidden targets. We present here a hybrid discrete-continuum model that takes into account a stochastic regime governed by rare events and a continuous regime in the bulk. The rare discrete binding events are modeled by a Markov chain for the encounter of small targets by few Brownian particles, for which the arrival time is Poissonian. The large ensemble of particles is described by mass action laws. We use this novel model to predict the time distribution of vesicular release at neuronal synapses. Vesicular release is triggered by the binding of few calcium ions that can originate either from the synaptic bulk or from the entry through calcium channels. We report here that the distribution of release time is bimodal although it is triggered by a single fast action potential. While the first peak follows a stimulation, the second corresponds to the random arrival over much longer time of ions located in the synaptic terminal to small binding vesicular targets. To conclude, the present multiscale stochastic modeling approach allows studying cellular events based on integrating discrete molecular events over several time scales.
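
    The rare-event side of such a hybrid scheme can be sketched with Poissonian arrivals: each free ion reaches the small target after an exponential waiting time, and release fires once a fixed number of ions have bound. The parameters below are arbitrary illustrations, not calibrated synapse values, and the bulk mass-action part is collapsed into the per-ion rate.

```python
import random

def release_times(n_trials=1000, n_ions=50, rate_per_ion=0.02, k_bind=4, seed=0):
    """Monte Carlo of the discrete binding step.

    Each of n_ions reaches the vesicular target after an independent
    exponential waiting time (Poissonian arrival); release occurs at the
    k_bind-th arrival. Returns one release time per trial.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n_trials):
        arrivals = sorted(rng.expovariate(rate_per_ion) for _ in range(n_ions))
        out.append(arrivals[k_bind - 1])  # time of the k-th binding
    return out
```

A second, delayed pool of ions (e.g. those diffusing in from the terminal, as in the abstract) would add a slower arrival rate and reproduce the bimodal release-time distribution described above.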

  15. Final Technical Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knio, Omar M.

    QUEST is a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, University of Southern California, Massachusetts Institute of Technology, University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The Duke effort focused on the development of algorithms and utility software for non-intrusive sparse UQ representations, and on participation in the organization of annual workshops and tutorials to disseminate UQ tools to the community, and to gather input in order to adapt approaches to the needs of SciDAC customers. In particular, fundamental developments were made in (a) multiscale stochastic preconditioners, (b) gradient-based approaches to inverse problems, (c) adaptive pseudo-spectral approximations, (d) stochastic limit cycles, and (e) sensitivity analysis tools for noisy systems. In addition, large-scale demonstrations were performed, namely in the context of ocean general circulation models.

  16. Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios G.

    We developed path-wise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, and in particular of non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature which are capable to handle UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks, with a very large number of parameters, (b) spatially distributed systems such as Kinetic Monte Carlo or Langevin Dynamics, (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to in silico prediction of novel materials with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.

  17. Stochastic architecture for Hopfield neural nets

    NASA Technical Reports Server (NTRS)

    Pavel, Sandy

    1992-01-01

    An expandable stochastic digital architecture for recurrent (Hopfield-like) neural networks is proposed. The main features and basic principles of stochastic processing are presented. The stochastic digital architecture is based on a chip with n fully interconnected neurons and a pipelined, bit-level processing structure. For large applications, a flexible way to interconnect many such chips is provided.
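
    The "stochastic processing" principle such architectures rely on can be illustrated in a few lines: a number in [0, 1] is encoded as a random bit stream whose probability of 1 equals the number, and multiplication then reduces to a bitwise AND of independent streams. This is a textbook stochastic-computing sketch, not the chip's actual design.

```python
import random

def to_stream(p, n_bits, rng):
    """Encode p in [0, 1] as a Bernoulli bit stream with P(bit = 1) = p."""
    return [1 if rng.random() < p else 0 for _ in range(n_bits)]

def stream_multiply(a_bits, b_bits):
    """Bitwise AND of independent streams estimates the product a*b."""
    return [x & y for x, y in zip(a_bits, b_bits)]

def decode(bits):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return sum(bits) / len(bits)
```

The hardware appeal is that an expensive multiplier collapses to a single AND gate, at the cost of precision that improves only as the square root of the stream length.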

  18. Suppression of large edge-localized modes in high-confinement DIII-D plasmas with a stochastic magnetic boundary.

    PubMed

    Evans, T E; Moyer, R A; Thomas, P R; Watkins, J G; Osborne, T H; Boedo, J A; Doyle, E J; Fenstermacher, M E; Finken, K H; Groebner, R J; Groth, M; Harris, J H; La Haye, R J; Lasnier, C J; Masuzaki, S; Ohyabu, N; Pretty, D G; Rhodes, T L; Reimerdes, H; Rudakov, D L; Schaffer, M J; Wang, G; Zeng, L

    2004-06-11

    A stochastic magnetic boundary, produced by an applied edge resonant magnetic perturbation, is used to suppress most large edge-localized modes (ELMs) in high confinement (H-mode) plasmas. The resulting H mode displays rapid, small oscillations with a bursty character modulated by a coherent 130 Hz envelope. The H mode transport barrier and core confinement are unaffected by the stochastic boundary, despite a threefold drop in the toroidal rotation. These results demonstrate that stochastic boundaries are compatible with H modes and may be attractive for ELM control in next-step fusion tokamaks.

  19. Fast stochastic algorithm for simulating evolutionary population dynamics

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
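
    For reference, the direct stochastic simulation algorithm that the new method is benchmarked against can be sketched for a pure birth-death process (mutation omitted; the rates and horizon below are illustrative). Its cost grows with the total event count, which is exactly what makes large populations expensive and motivates the accelerated algorithm.

```python
import random

def gillespie_birth_death(n0=100, birth=1.0, death=0.5, t_end=5.0, seed=0):
    """Direct SSA baseline: draw an exponential waiting time from the total
    propensity, then pick birth or death in proportion to its rate."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while t < t_end and n > 0:
        b, d = birth * n, death * n
        total = b + d
        t += rng.expovariate(total)  # time to the next reaction event
        if t >= t_end:
            break
        n += 1 if rng.random() < b / total else -1
    return n
```

With birth rate exceeding death rate the population grows on average like n0*exp((birth-death)*t), so the number of simulated events, and hence the runtime, grows with it.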

  20. A large deviations principle for stochastic flows of viscous fluids

    NASA Astrophysics Data System (ADS)

    Cipriano, Fernanda; Costa, Tiago

    2018-04-01

    We study the well-posedness of a stochastic differential equation on the two-dimensional torus T^2, driven by an infinite-dimensional Wiener process with drift in the Sobolev space L^2(0, T; H^1(T^2)). The solution corresponds to a stochastic Lagrangian flow in the sense of DiPerna-Lions. By taking into account that the motion of a viscous incompressible fluid on the torus can be described through a suitable stochastic differential equation of the previous type, we study the inviscid limit. By establishing a large deviations principle, we show that, as the viscosity goes to zero, the Lagrangian stochastic Navier-Stokes flow approaches the deterministic Euler Lagrangian flow with an exponential rate function.

  1. Spatial scale affects the relative role of stochasticity versus determinism in soil bacterial communities in wheat fields across the North China Plain.

    PubMed

    Shi, Yu; Li, Yuntao; Xiang, Xingjia; Sun, Ruibo; Yang, Teng; He, Dan; Zhang, Kaoping; Ni, Yingying; Zhu, Yong-Guan; Adams, Jonathan M; Chu, Haiyan

    2018-02-05

    The relative importance of stochasticity versus determinism in soil bacterial communities is unclear, as are the possible influences that alter the balance between these. Here, we investigated the influence of spatial scale on the relative role of stochasticity and determinism in agricultural monocultures consisting only of wheat, thereby minimizing the influence of differences in plant species cover and in cultivation/disturbance regime, extending across a wide range of soils and climates of the North China Plain (NCP). We sampled 243 sites across 1092 km and sequenced the 16S rRNA bacterial gene using MiSeq. We hypothesized that determinism would play a relatively stronger role at the broadest scales, due to the strong influence of climate and soil differences in selecting many distinct OTUs of bacteria adapted to the different environments. In order to test the more general applicability of the hypothesis, we also compared with a natural ecosystem on the Tibetan Plateau. Our results revealed that the relative importance of stochasticity vs. determinism did vary with spatial scale, in the direction predicted. On the North China Plain, stochasticity played a dominant role from 150 to 900 km (separation between pairs of sites) and determinism dominated at more than 900 km (broad scale). On the Tibetan Plateau, determinism played a dominant role from 130 to 1200 km and stochasticity dominated at less than 130 km. Among the identifiable deterministic factors, soil pH showed the strongest influence on soil bacterial community structure and diversity across the North China Plain. Together, 23.9% of variation in soil microbial community composition could be explained, with environmental factors accounting for 19.7% and spatial parameters 4.1%. 
Our findings revealed that (1) stochastic processes are relatively more important on the North China Plain, while deterministic processes are more important on the Tibetan Plateau; (2) soil pH was the major factor in shaping soil bacterial community structure of the North China Plain; and (3) most variation in soil microbial community composition could not be explained with existing environmental and spatial factors. Further studies are needed to dissect the influence of stochastic factors (e.g., mutations or extinctions) on soil microbial community distribution, which might make it easier to predictably manipulate the microbial community to produce better yield and soil sustainability outcomes.

  2. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as a review of portable, parallel programming environments. The second effort was to implement the MSO methodology for a problem using the portable parallel programming environment Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be well-applied towards large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. 
Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed-Civil Transport, and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.

  3. On generic obstructions to recovering correct statistics from climate simulations: Homogenization for deterministic maps and multiplicative noise

    NASA Astrophysics Data System (ADS)

    Gottwald, Georg; Melbourne, Ian

    2013-04-01

    Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results on the convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore, we present rigorous results for chaotic slow-fast maps, occurring as numerical discretizations of continuous time systems. This raises the issue of how to interpret certain stochastic integrals; surprisingly, the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Itô type in the case of maps. It is shown that the limit system of a numerical discretisation is different from that of the associated continuous time system. This has important consequences when interpreting the statistics of long time simulations of multi-scale systems - they may be very different from those of the original continuous time system which we set out to study.
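    The diffusive limit described above can be illustrated numerically. The sketch below is a toy example under assumed ingredients (a logistic fast map, a mean-zero observable, and a tanh-shaped state dependence supplying the multiplicative noise), not the systems analyzed in the record:

```python
import numpy as np

def slow_fast_map(n_steps, eps, seed):
    """Toy slow-fast map: the slow variable x receives O(eps) kicks from a
    mean-zero observable of a chaotic fast map, so over n ~ eps^-2 steps
    it behaves diffusively, approaching an SDE with multiplicative noise."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, rng.uniform(0.01, 0.99)
    for _ in range(n_steps):
        v = y - 0.5                     # mean-zero under the invariant density
        sigma = 1.0 + 0.5 * np.tanh(x)  # state dependence -> multiplicative noise
        x += eps * sigma * v
        y = 4.0 * y * (1.0 - y)         # fully chaotic logistic map
    return x

eps = 0.02
ensemble = np.array([slow_fast_map(round(eps ** -2), eps, seed=s) for s in range(200)])
print(ensemble.mean(), ensemble.std())  # small drift, O(1) diffusive spread
```

    Over n ~ eps^-2 iterations the O(eps) chaotic kicks accumulate into an O(1) spread of the slow variable, the hallmark of the diffusive limit.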

  4. Large-Scale CTRW Analysis of Push-Pull Tracer Tests and Other Transport in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Hansen, S. K.; Berkowitz, B.

    2014-12-01

    Recently, we developed an alternative CTRW formulation which uses a "latching" upscaling scheme to rigorously map continuous or fine-scale stochastic solute motion onto discrete transitions on an arbitrarily coarse lattice (with spacing potentially on the meter scale or more). Among other things, this approach enables model simplification. Under advection, for example, we see that many relevant anomalous transport problems may be mapped into 1D, with latching to a sequence of successive, uniformly spaced planes. In this formulation (which we term RP-CTRW), the spatial transition vector may generally be made deterministic, with CTRW waiting time distributions encapsulating all the stochastic behavior. We demonstrate the excellent performance of this technique alongside Pareto-distributed waiting times in explaining experiments across a variety of scales using only two degrees of freedom. An interesting new application of the RP-CTRW technique is the analysis of radial (push-pull) tracer tests. Given modern computational power, random walk simulations are a natural fit for the inverse problem of inferring subsurface parameters from push-pull test data, and we propose them as an alternative to the classical type curve approach. In particular, we explore the visibility of heterogeneity through non-Fickian behavior in push-pull tests, and illustrate the ability of a radial RP-CTRW technique to encapsulate this behavior using a sparse parameterization which has predictive value.
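    A minimal sketch of the RP-CTRW idea with Pareto waiting times (the plane count, tail exponent, and particle number below are illustrative assumptions, not values from the record):

```python
import numpy as np

rng = np.random.default_rng(42)

def rp_ctrw_arrival_times(n_particles, n_planes, alpha=1.5, t_min=1.0):
    """Arrival times at plane n_planes: the spatial transition is
    deterministic (one plane per step), and all stochasticity sits in
    Pareto(alpha) waiting times t = t_min * (1 - U)^(-1/alpha)."""
    u = rng.random((n_particles, n_planes))
    waits = t_min * (1.0 - u) ** (-1.0 / alpha)
    return waits.sum(axis=1)

arrivals = rp_ctrw_arrival_times(10_000, n_planes=20, alpha=1.5)
# Heavy-tailed waits make the mean arrival time exceed the median,
# unlike a Fickian model where the two nearly coincide.
print(np.median(arrivals), arrivals.mean())
```

    The heavy tail pushes the mean arrival time well above the median, a signature of non-Fickian breakthrough.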

  5. Stochastic dynamic modeling of regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and can be simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even though we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales, we need to consider stochastic interactions between slip and stress in dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be treated as a stochastic external force. The healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, external force, and friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day.
    The slow propagation generated by a combination of fast interactions at the S-wave velocity is analogous to the kinetic theory of gases: thermal diffusion appears much slower than the particle velocity of each molecule. The concept of stochastic triggering originates in the Brownian walk model [Ide, 2008], and the present study introduces stochastic dynamics into dynamic simulations. The stochastic dynamic model has the potential to explain both regular and slow earthquakes more realistically.

  6. Transient ensemble dynamics in time-independent galactic potentials

    NASA Astrophysics Data System (ADS)

    Mahon, M. Elaine; Abernathy, Robert A.; Bradley, Brendan O.; Kandrup, Henry E.

    1995-07-01

    This paper summarizes a numerical investigation of the short-time, possibly transient, behaviour of ensembles of stochastic orbits evolving in fixed non-integrable potentials, with the aim of deriving insights into the structure and evolution of galaxies. The simulations involved three different two-dimensional potentials, quite different in appearance. However, despite these differences, ensembles in all three potentials exhibit similar behaviour. This suggests that the conclusions inferred from the simulations are robust, relying only on basic topological properties, e.g., the existence of KAM tori and cantori. Generic ensembles of initial conditions, corresponding to stochastic orbits, exhibit a rapid coarse-grained approach towards a near-invariant distribution on a time-scale much shorter than t_H, although various irregularities, external and/or internal, can drastically accelerate this process. A principal tool in the analysis is the notion of a local Liapounov exponent, which provides a statistical characterization of the overall instability of stochastic orbits over finite time intervals. In particular, there is a precise sense in which confined stochastic orbits are less unstable, with smaller local Liapounov exponents, than are unconfined stochastic orbits.

  7. Anomalous Fluctuations in Autoregressive Models with Long-Term Memory

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Hidetsugu; Honjo, Haruo

    2015-10-01

    An autoregressive model with a power-law type memory kernel is studied as a stochastic process that exhibits a self-affine-fractal-like behavior for a small time scale. We find numerically that the root-mean-square displacement Δ(m) for the time interval m increases with a power law as m^α with α < 1/2 for small m but saturates at sufficiently large m. The exponent α changes with the power exponent of the memory kernel.
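    The qualitative behavior is easy to reproduce with a sketch (the kernel exponent, truncation, and normalization below are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Truncated power-law memory kernel K(k) ~ k^(-1.5); the coefficients are
# normalized to sum to 0.9 (< 1) so the process stays stationary.
M = 200
k = np.arange(1, M + 1)
kernel = k ** -1.5
kernel *= 0.9 / kernel.sum()

N = 20000
x = np.zeros(N)
noise = rng.standard_normal(N)
for n in range(M, N):
    # kernel[0] multiplies lag 1, kernel[M-1] multiplies lag M
    x[n] = kernel @ x[n - M:n][::-1] + noise[n]

def rms_displacement(series, m):
    d = series[m:] - series[:-m]
    return np.sqrt(np.mean(d * d))

y = x[M:]
d1, d4, d500, d1000 = (rms_displacement(y, m) for m in (1, 4, 500, 1000))
alpha_est = np.log(d4 / d1) / np.log(4.0)
print(alpha_est)     # sub-diffusive exponent, below 1/2 at small lags
print(d1000 / d500)  # ~ 1: saturation of the displacement at large lags
```

    The estimated small-lag exponent falls below 1/2, and Δ(m) levels off at large m, matching the reported saturation of a stationary long-memory process.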

  8. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrier, C.; Holcman, D., E-mail: david.holcman@ens.fr; Mathematical Institute, Oxford OX2 6GG, Newton Institute

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, spanning nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they explore a large portion of the space to bind small targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events representing Brownian particles finding small targets, characterized by a long-time distribution. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on the narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets to trigger vesicular release.
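    The second, coarse-grained approach can be sketched as a Gillespie simulation in which the geometry is reduced to a Poissonian binding rate; the narrow-escape-style rate 4Da/V and all numerical values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def gillespie_binding(n_particles, lam, n_events):
    """Gillespie simulation of Poissonian binding: each free particle
    finds the target at rate lam, so the total propensity is
    lam * (number of particles still free)."""
    t, free, times = 0.0, n_particles, []
    for _ in range(n_events):
        a = lam * free                 # total propensity
        t += rng.exponential(1.0 / a)  # waiting time to the next binding
        free -= 1
        times.append(t)
    return np.array(times)

# Illustrative narrow-escape-style rate: D = 0.1 um^2/ms, absorbing window
# radius a = 0.01 um, domain volume V = 1 um^3  ->  lam = 4*D*a/V per ms.
lam = 4.0 * 0.1 * 0.01 / 1.0
arrivals = gillespie_binding(n_particles=1000, lam=lam, n_events=5)
print(arrivals)    # arrival times of the first five binding events
```

    Collapsing the domain geometry into a single rate is what makes the rare binding events cheap to sample compared with tracking full Brownian trajectories.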

  9. Exploration of a High Luminosity 100 TeV Proton Antiproton Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveros, Sandra J.; Summers, Don; Cremaldi, Lucien

    New physics is being explored with the Large Hadron Collider at CERN and with Intensity Frontier programs at Fermilab and KEK. The energy scale for new physics is known to be in the multi-TeV range, signaling the need for a future collider which well surpasses this energy scale. We explore a 10^34 cm^-2 s^-1 luminosity, 100 TeV pp̄ collider with 7× the energy of the LHC but only 2× as much NbTi superconductor, motivating the choice of 4.5 T single-bore dipoles. The cross section for many high mass states is 10 times higher in pp̄ than in pp collisions. Antiquarks for production can come directly from an antiproton rather than indirectly from gluon splitting. The higher cross sections reduce the synchrotron radiation in superconducting magnets and the number of events per beam crossing, because lower beam currents can produce the same rare event rates. Events are more centrally produced, allowing a more compact detector with less space between quadrupole triplets and a smaller β* for higher luminosity. A Fermilab-like p̄ source would disperse the beam into 12 momentum channels to capture more antiprotons. Because stochastic cooling time scales as the number of particles, 12 cooling ring sets would be used. Each set would include phase rotation to lower momentum spreads, equalize all momentum channels, and stochastically cool. One electron cooling ring would follow the stochastic cooling rings. Finally, antiprotons would be recycled during runs without leaving the collider ring by joining them to new bunches with synchrotron damping.

  10. Development and verification of a real-time stochastic precipitation nowcasting system for urban hydrology in Belgium

    NASA Astrophysics Data System (ADS)

    Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.

    2016-01-01

    The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolutions up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h^-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h^-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 75-90 % of the forecast errors.
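    The core mechanism, perturbing a deterministic extrapolation with spatially correlated noise whose spectrum imposes dynamic scaling, can be sketched as follows (grid size, spectral slope, and perturbation amplitude are illustrative assumptions, not STEPS-BE settings):

```python
import numpy as np

rng = np.random.default_rng(3)

def powerlaw_noise(n, beta=3.0):
    """Gaussian random field with isotropic power spectrum ~ k^-beta,
    mimicking the dynamic scaling of rainfall: large scales carry more
    variance (more persistent) than small scales."""
    f = np.fft.fft2(rng.standard_normal((n, n)))
    kx = np.fft.fftfreq(n)
    kk = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    kk[0, 0] = 1.0                     # avoid division by zero at k = 0
    f *= kk ** (-beta / 2.0)
    f[0, 0] = 0.0                      # zero-mean perturbation field
    field = np.real(np.fft.ifft2(f))
    return field / field.std()

n = 128
extrapolation = np.ones((n, n))        # stand-in for the Lagrangian nowcast
members = np.array([extrapolation + 0.2 * powerlaw_noise(n) for _ in range(20)])
spread = members.std(axis=0)           # ensemble spread per pixel
print(spread.mean())                   # roughly the injected uncertainty level
```

    Each ensemble member shares the deterministic extrapolation and differs only in its correlated noise realization, which is how the ensemble spread encodes forecast uncertainty.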

  11. Development and verification of a real-time stochastic precipitation nowcasting system for urban hydrology in Belgium

    NASA Astrophysics Data System (ADS)

    Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.

    2015-07-01

    The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolutions up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h^-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h^-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 80-90 % of the forecast errors.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prinja, A. K.

    The Karhunen-Loeve stochastic spectral expansion of a random binary mixture of immiscible fluids in planar geometry is used to explore asymptotic limits of radiation transport in such mixtures. Under appropriate scalings of mixing parameters - correlation length, volume fraction, and material cross sections - and employing multiple-scale expansion of the angular flux, previously established atomic mix and diffusion limits are reproduced. When applied to highly contrasting material properties in the small correlation length limit, the methodology yields a nonstandard reflective medium transport equation that merits further investigation. Finally, a hybrid closure is proposed that produces both small and large correlation length limits of the closure condition for the material averaged equations.
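    A minimal numerical sketch of Karhunen-Loeve sampling of a binary mixture (the exponential covariance, correlation length, volume fraction, and truncation below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Karhunen-Loeve sampling of a Gaussian field with exponential covariance
# C(x, x') = exp(-|x - x'|/ell) on [0, 1]; thresholding the field yields a
# binary (two-material) mixture at a prescribed volume fraction.
n, ell, vol_frac = 200, 0.1, 0.3
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

evals, evecs = np.linalg.eigh(C)       # KL modes: eigenpairs of the covariance
evals = np.clip(evals, 0.0, None)      # guard against tiny negative round-off

m = 20                                 # truncate to the m leading modes
xi = rng.standard_normal(m)            # independent standard normal amplitudes
field = (evecs[:, -m:] * np.sqrt(evals[-m:])) @ xi

threshold = np.quantile(field, 1.0 - vol_frac)
material = (field > threshold).astype(int)
print(material.mean())                 # close to vol_frac
```

    The truncated expansion keeps only the slowly varying modes, which is precisely the control over correlation length that the asymptotic scalings above exploit.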

  13. Scaling theory for the quasideterministic limit of continuous bifurcations.

    PubMed

    Kessler, David A; Shnerb, Nadav M

    2012-05-01

    Deterministic rate equations are widely used in the study of stochastic, interacting particle systems. This approach assumes that the inherent noise, associated with the discreteness of the elementary constituents, may be neglected when the number of particles N is large. Accordingly, it fails close to the extinction transition, when the amplitude of stochastic fluctuations is comparable with the size of the population. Here we present a general scaling theory of the transition regime for spatially extended systems. We demonstrate this through a detailed study of two fundamental models for out-of-equilibrium phase transitions: the Susceptible-Infected-Susceptible (SIS) model, which belongs to the directed percolation equivalence class, and the Susceptible-Infected-Recovered (SIR) model, which belongs to the dynamic percolation class. Implementing the Ginzburg criterion, we show that the width of the fluctuation-dominated region scales like N^{-κ}, where N is the number of individuals per site, κ=2/(d_{u}-d), and d_{u} is the upper critical dimension. Other exponents that control the approach to the deterministic limit are shown to be calculable once κ is known. The theory is extended to include the corrections to the front velocity above the transition. It is supported by the results of extensive numerical simulations for systems of various dimensionalities.

  14. From Stochastic Foam to Designed Structure: Balancing Cost and Performance of Cellular Metals

    PubMed Central

    Lehmhus, Dirk; Vesenjak, Matej

    2017-01-01

    Over the past two decades, a large number of metallic foams have been developed. In recent years research on this multi-functional material class has further intensified. However, despite their unique properties only a limited number of large-scale applications have emerged. One important reason for this sluggish uptake is their high cost. Many cellular metals require expensive raw materials, complex manufacturing procedures, or a combination thereof. Some attempts have been made to decrease costs by introducing novel foams based on cheaper components and new manufacturing procedures. However, this has often yielded materials with unreliable properties that inhibit utilization of their full potential. The resulting balance between cost and performance of cellular metals is probed in this editorial, which attempts to consider cost not in absolute figures, but in relation to performance. To approach such a distinction, an alternative classification of cellular metals is suggested which centers on structural aspects and the effort of realizing them. The range thus covered extends from fully stochastic foams to cellular structures designed-to-purpose. PMID:28786935

  15. Coupled Finite Volume and Finite Element Method Analysis of a Complex Large-Span Roof Structure

    NASA Astrophysics Data System (ADS)

    Szafran, J.; Juszczyk, K.; Kamiński, M.

    2017-12-01

    The main goal of this paper is to present a coupled Computational Fluid Dynamics and structural analysis for the precise determination of wind impact on internal forces and deformations of structural elements of a long-span roof structure. The Finite Volume Method (FVM) is used to solve the fluid flow problem modeling the air flow around the structure; its results are in turn applied as boundary tractions in the Finite Element Method (FEM) structural solution for linear elastostatics with small deformations. The first part is carried out with the use of the ANSYS 15.0 computer system, whereas the FEM system Robot supports the stress analysis in particular roof members. A comparison of the wind pressure distribution throughout the roof surface shows some differences with respect to that available in engineering design codes such as Eurocode, which deserves separate further numerical study. Coupling these two separate numerical techniques appears promising in view of future computational models of a stochastic nature in large-scale structural systems based on the stochastic perturbation method.

  16. Spatial scaling patterns and functional redundancies in a changing boreal lake landscape

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Uden, Daniel R.; Johnson, Richard K.

    2015-01-01

    Global transformations extend beyond local habitats; therefore, larger-scale approaches are needed to assess community-level responses and resilience to unfolding environmental changes. Using long-term data (1996–2011), we evaluated spatial patterns and functional redundancies in the littoral invertebrate communities of 85 Swedish lakes, with the objective of assessing their potential resilience to environmental change at regional scales (that is, spatial resilience). Multivariate spatial modeling was used to differentiate groups of invertebrate species exhibiting spatial patterns in composition and abundance (that is, deterministic species) from those lacking spatial patterns (that is, stochastic species). We then determined the functional feeding attributes of the deterministic and stochastic invertebrate species, to infer resilience. Between one and three distinct spatial patterns in invertebrate composition and abundance were identified in approximately one-third of the species; the remainder were stochastic. We observed substantial differences in metrics between deterministic and stochastic species. Functional richness and diversity decreased over time in the deterministic group, suggesting a loss of resilience in regional invertebrate communities. However, taxon richness and redundancy increased monotonically in the stochastic group, indicating the capacity of regional invertebrate communities to adapt to change. Our results suggest that a refined picture of spatial resilience emerges if patterns of both the deterministic and stochastic species are accounted for. Spatially extensive monitoring may help increase our mechanistic understanding of community-level responses and resilience to regional environmental change, insights that are critical for developing management and conservation agendas in this current period of rapid environmental transformation.

  17. Development of incremental dynamical downscaling and analysis system for regional scale climate change projections

    NASA Astrophysics Data System (ADS)

    Wakazuki, Yasutaka; Hara, Masayuki; Fujita, Mikiko; Ma, Xieyao; Kimura, Fujio

    2013-04-01

    Regional scale climate change projections play an important role in assessing the influences of global warming and include statistical downscaling (SD) and dynamical downscaling (DD) approaches. In this study, a DD method is developed based on the pseudo-global-warming (PGW) method of Kimura and Kitoh (2007). In general, DD uses a regional climate model (RCM) with lateral boundary data. In the PGW method, the climatological mean differences estimated by GCMs are added to the objective analysis data (ANAL), and these data are used as the lateral boundary data in the future climate simulations. The ANAL is also used as the lateral boundary condition of the present climate simulation. One merit of the PGW method is that the influence of GCM biases on RCM simulations is reduced. However, the PGW method does not treat climate changes in relative humidity, year-to-year variation, or short-term disturbances. The new downscaling method is named the incremental dynamical downscaling and analysis system (InDDAS); it treats climate changes in relative humidity and year-to-year variations. On the other hand, the uncertainties of climate change projections estimated by many GCMs are large and not negligible. Thus, stochastic regional scale climate change projections are needed for assessments of the influences of global warming. Many RCM runs must be performed to produce stochastic information, but the computational costs are huge because the grid size of RCM runs must be small enough to resolve heavy rainfall phenomena. Therefore, the number of runs needed to produce stochastic information must be reduced. In InDDAS, the climatological differences added to ANAL are statistically pre-analyzed: the climatological differences of many GCMs are divided into a mean climatological difference (MD) and departures from MD.
    The departures are analyzed by principal component analysis, and positive and negative perturbations (positive and negative standard deviations multiplied by the departure patterns, i.e., eigenvectors) with multiple modes are added to MD. Consequently, the most likely future states are calculated with the climatological difference MD. For example, future states in which the temperature increase is large or small are calculated with MD plus the positive and negative perturbations of the first mode.

  18. Environmental stochasticity controls soil erosion variability

    PubMed Central

    Kim, Jongho; Ivanov, Valeriy Y.; Fatichi, Simone

    2016-01-01

    Understanding soil erosion by water is essential for a range of research areas but the predictive skill of prognostic models has been repeatedly questioned because of scale limitations of empirical data and the high variability of soil loss across space and time scales. Improved understanding of the underlying processes and their interactions is needed to infer scaling properties of soil loss and better inform predictive methods. This study uses data from multiple environments to highlight temporal-scale dependency of soil loss: erosion variability decreases at larger scales but the reduction rate varies with environment. The reduction of variability of the geomorphic response is attributed to a ‘compensation effect’: temporal alternation of events that exhibit either source-limited or transport-limited regimes. The rate of reduction is related to environmental stochasticity, and a novel index is derived to reflect the level of variability of intra- and inter-event hydrometeorologic conditions. A higher stochasticity index implies a larger reduction of soil loss variability (enhanced predictability at the aggregated temporal scales) with respect to the mean hydrologic forcing, offering a promising indicator for estimating the degree of uncertainty of erosion assessments. PMID:26925542

  19. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn; Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  20. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size

    PubMed Central

    Gerstner, Wulfram

    2017-01-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. PMID:28422957

  1. Linear regulator design for stochastic systems by a multiple time scales method

    NASA Technical Reports Server (NTRS)

    Teneketzis, D.; Sandell, N. R., Jr.

    1976-01-01

    A hierarchically-structured, suboptimal controller for a linear stochastic system composed of fast and slow subsystems is considered. The controller is optimal in the limit as the separation of time scales of the subsystems becomes infinite. The methodology is illustrated by design of a controller to suppress the phugoid and short period modes of the longitudinal dynamics of the F-8 aircraft.

  2. On the Fluctuating Component of the Sun's Large-Scale Magnetic Field

    NASA Astrophysics Data System (ADS)

    Wang, Y.-M.; Sheeley, N. R., Jr.

    2003-06-01

    The Sun's large-scale magnetic field and its proxies are known to undergo substantial variations on timescales much less than a solar cycle but longer than a rotation period. Examples of such variations include the double activity maximum inferred by Gnevyshev, the large peaks in the interplanetary field strength observed in 1982 and 1991, and the 1.3-1.4 yr periodicities detected over limited time intervals in solar wind speed and geomagnetic activity. We consider the question of the extent to which these variations are stochastic in nature. For this purpose, we simulate the evolution of the Sun's equatorial dipole strength and total open flux under the assumption that the active region sources (BMRs) are distributed randomly in longitude. The results are then interpreted with the help of a simple random walk model including dissipation. We find that the equatorial dipole and open flux generally exhibit multiple peaks during each 11 yr cycle, with the highest peak as likely to occur during the declining phase as at sunspot maximum. The widths of the peaks are determined by the timescale τ~1 yr for the equatorial dipole to decay through the combined action of meridional flow, differential rotation, and supergranular diffusion. The amplitudes of the fluctuations depend on the strengths and longitudinal phase relations of the BMRs, as well as on the relative rates of flux emergence and decay. We conclude that stochastic processes provide a viable explanation for the "Gnevyshev gaps" and for the existence of quasi-periodicities in the range ~1-3 yr.
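    The random walk model with dissipation can be sketched as follows (the emergence rate, kick amplitude, and sinusoidal activity modulation are illustrative assumptions, not the paper's flux-transport inputs):

```python
import numpy as np

rng = np.random.default_rng(5)

# Random-walk-with-dissipation sketch of the equatorial dipole: bipolar
# magnetic regions (BMRs) emerge at random longitudes, i.e. as unit kicks
# with random phases in the complex plane, while the dipole decays on a
# timescale tau ~ 1 yr.
dt, tau, years = 0.01, 1.0, 11.0
n = round(years / dt)
t = np.arange(n) * dt
rate = 50.0 * np.sin(np.pi * t / years) ** 2   # schematic activity cycle (BMRs/yr)

d = np.zeros(n, dtype=complex)
for i in range(1, n):
    n_sources = rng.poisson(rate[i] * dt)      # BMR emergences this step
    kicks = np.exp(2j * np.pi * rng.random(n_sources)).sum()
    d[i] = d[i - 1] * np.exp(-dt / tau) + 0.1 * kicks

amplitude = np.abs(d)
# The amplitude typically shows several peaks of width ~ tau per cycle,
# with the highest peak not necessarily at activity maximum.
```

    Because the random-phase kicks partially cancel while the dipole decays on τ ~ 1 yr, the amplitude fluctuates with multiple peaks per cycle rather than tracking the activity envelope.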

  3. Stochastic inflation lattice simulations - Ultra-large scale structure of the universe

    NASA Technical Reports Server (NTRS)

    Salopek, D. S.

    1991-01-01

    Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, of order a^{-1}, small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential, where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.
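    The lattice Langevin setup for the long-wavelength field can be sketched as follows (units with H = 1, the potential slope, and the lattice size are illustrative assumptions; gravitational back-reaction is omitted):

```python
import numpy as np

rng = np.random.default_rng(9)

# Langevin sketch of stochastic inflation on a lattice: each site carries
# a coarse-grained scalar field phi with slow-roll drift from an
# exponential potential V ~ exp(-lam*phi), plus short-wavelength quantum
# noise of amplitude H/(2*pi) per e-fold.
n_sites, n_efolds, dN, lam = 64, 5.0, 0.01, 0.1
steps = round(n_efolds / dN)

phi = np.zeros(n_sites)
for _ in range(steps):
    drift = lam                        # slow-roll: dphi/dN = -V'/V = lam
    noise = rng.standard_normal(n_sites) / (2.0 * np.pi)
    phi += drift * dN + noise * np.sqrt(dN)

print(phi.mean())   # drifts toward lam * n_efolds, with Gaussian scatter
```

    For this single-field toy model the site-to-site fluctuations stay Gaussian, consistent with the Gaussianity the record reports on observable scales.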

  4. Measurement of large parallel and perpendicular electric fields on electron spatial scales in the terrestrial bow shock.

    PubMed

    Bale, S D; Mozer, F S

    2007-05-18

    Large parallel (

  5. Delensing CMB polarization with external datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kendrick M.; Hanson, Duncan; LoVerde, Marilena

    2012-06-01

    One of the primary scientific targets of current and future CMB polarization experiments is the search for a stochastic background of gravity waves in the early universe. As instrumental sensitivity improves, the limiting factor will eventually be B-mode power generated by gravitational lensing, which can be removed through use of so-called ``delensing'' algorithms. We forecast prospects for delensing using lensing maps which are obtained externally to CMB polarization: either from large-scale structure observations, or from high-resolution maps of CMB temperature. We conclude that the forecasts in either case are not encouraging, and that significantly delensing large-scale CMB polarization requires high-resolution polarization maps with sufficient sensitivity to measure the lensing B-mode. We also present a simple formalism for including delensing in CMB forecasts which is computationally fast and agrees well with Monte Carlos.

  6. Schramm-Loewner (SLE) analysis of quasi two-dimensional turbulent flows

    NASA Astrophysics Data System (ADS)

    Thalabard, Simon

    2012-02-01

    Quasi two-dimensional turbulence can be observed in several cases: for example, in the laboratory using liquid soap films, or as the result of a strong imposed rotation as obtained in three-dimensional large direct numerical simulations. We study and contrast SLE properties of such flows, in the former case in the inverse cascade of energy to large scale, and in the latter in the direct cascade of energy to small scales in the presence of a fully-helical forcing. We thus examine the geometric properties of these quasi 2D regimes in the context of stochastic geometry, as was done for the 2D inverse cascade by Bernard et al. (2006). We show that in both cases the data is compatible with self-similarity and with SLE behaviors, whose different diffusivities can be heuristically determined.

  7. Effective long wavelength scalar dynamics in de Sitter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, Ian; Rigopoulos, Gerasimos, E-mail: ian.moss@newcastle.ac.uk, E-mail: gerasimos.rigopoulos@ncl.ac.uk

    We discuss the effective infrared theory governing a light scalar's long wavelength dynamics in de Sitter spacetime. We show how the separation of scales around the physical curvature radius k/a ∼ H can be performed consistently with a window function and how short wavelengths can be integrated out in the Schwinger-Keldysh path integral formalism. At leading order, and for time scales Δt ≫ H^(−1), this results in the well-known Starobinsky stochastic evolution. However, our approach allows for the computation of quantum UV corrections, generating an effective potential on which the stochastic dynamics takes place. The long wavelength stochastic dynamical equations are now second order in time, incorporating temporal scales Δt ∼ H^(−1) and resulting in a Kramers equation for the probability distribution (more precisely, the Wigner function), in contrast to the more usual Fokker-Planck equation. This feature allows us to non-perturbatively evaluate, within the stochastic formalism, not only expectation values of field correlators, but also the stress-energy tensor of φ.
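    For orientation, the leading-order limit referred to above is the standard Starobinsky Langevin equation and its Fokker-Planck counterpart, written here in textbook form with N the number of e-folds; the paper's contribution is the second-order, Kramers-type generalization of this system.

```latex
\frac{d\varphi}{dN} = -\frac{V'(\varphi)}{3H^2} + \frac{H}{2\pi}\,\xi(N),
\qquad \langle\xi(N)\,\xi(N')\rangle = \delta(N-N'),
```

```latex
\partial_N P(\varphi,N) =
\partial_\varphi\!\left[\frac{V'(\varphi)}{3H^2}\,P\right]
+ \frac{H^2}{8\pi^2}\,\partial_\varphi^2 P .
```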

  8. Proper orthogonal decomposition-based spectral higher-order stochastic estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.

    A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
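    In the first-order limit the method reduces to spectral Linear Stochastic Estimation, i.e. estimating a linear transfer kernel as the ratio of cross- to auto-spectra. A minimal single-input/single-output sketch, with a made-up kernel `h_true` and ensemble averaging over non-overlapping signal segments (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

n, seg = 4096, 256
x = rng.standard_normal(n)                    # "input" signal
h_true = np.array([0.0, 1.0, 0.5, 0.25])      # hypothetical linear kernel
y = np.convolve(x, h_true, mode="full")[:n] + 0.1 * rng.standard_normal(n)

# Linear spectral stochastic estimation: kernel H(f) = S_xy(f) / S_xx(f),
# with the spectra ensemble-averaged over the segments.
X = np.fft.rfft(x.reshape(-1, seg), axis=1)
Y = np.fft.rfft(y.reshape(-1, seg), axis=1)
S_xy = (np.conj(X) * Y).mean(axis=0)
S_xx = (np.conj(X) * X).mean(axis=0).real
H = S_xy / S_xx

h_est = np.fft.irfft(H)[:4]   # estimated impulse response, approximates h_true
```

    The higher-order (HOSE) extension adds bispectral terms to this first-order kernel; the sketch stops at the linear term.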

  9. Stochastic integrated assessment of climate tipping points indicates the need for strict climate policy

    NASA Astrophysics Data System (ADS)

    Lontzek, Thomas S.; Cai, Yongyang; Judd, Kenneth L.; Lenton, Timothy M.

    2015-05-01

    Perhaps the most ‘dangerous’ aspect of future climate change is the possibility that human activities will push parts of the climate system past tipping points, leading to irreversible impacts. The likelihood of such large-scale singular events is expected to increase with global warming, but is fundamentally uncertain. A key question is how should the uncertainty surrounding tipping events affect climate policy? We address this using a stochastic integrated assessment model, based on the widely used deterministic DICE model. The temperature-dependent likelihood of tipping is calibrated using expert opinions, which we find to be internally consistent. The irreversible impacts of tipping events are assumed to accumulate steadily over time (rather than instantaneously), consistent with scientific understanding. Even with conservative assumptions about the rate and impacts of a stochastic tipping event, today’s optimal carbon tax is increased by ~50%. For a plausibly rapid, high-impact tipping event, today’s optimal carbon tax is increased by >200%. The additional carbon tax to delay climate tipping grows at only about half the rate of the baseline carbon tax. This implies that the effective discount rate for the costs of stochastic climate tipping is much lower than the discount rate for deterministic climate damages. Our results support recent suggestions that the costs of carbon emission used to inform policy are being underestimated, and that uncertain future climate damages should be discounted at a low rate.

  10. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2017-04-01

    In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference: Williams PD, Howe NJ, Gregory JM, Smith RS, and Joshi MM (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781.
http://dx.doi.org/10.1175/JCLI-D-15-0746.1
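    The core of such a scheme, zero-mean red noise with a prescribed amplitude and decorrelation time added to the temperature tendency, can be sketched as an AR(1) process. The amplitude, decorrelation time, and tiny grid below are illustrative values, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 1.0      # time step (days); illustrative
tau = 30.0    # noise decorrelation time (days); illustrative
sigma = 0.05  # noise amplitude (K/day); illustrative
n_steps, n_cells = 1000, 4

phi = np.exp(-dt / tau)        # AR(1) memory coefficient
eta = np.zeros(n_cells)        # red-noise perturbation per grid cell
T = np.full(n_cells, 10.0)     # ocean temperature field (deg C)
for _ in range(n_steps):
    # zero-mean AR(1) noise with stationary standard deviation sigma
    eta = phi * eta + np.sqrt(1.0 - phi**2) * sigma * rng.standard_normal(n_cells)
    T += dt * eta              # deterministic tendency omitted for brevity
```

    Varying `sigma` and `tau` is exactly the sensitivity test the abstract describes for the four stochastic experiments.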

  11. Coupled stochastic soil moisture simulation-optimization model of deficit irrigation

    NASA Astrophysics Data System (ADS)

    Alizadeh, Hosein; Mousavi, S. Jamshid

    2013-07-01

    This study presents an explicit stochastic optimization-simulation model of short-term deficit irrigation management for large-scale irrigation districts. The model, which is a nonlinear nonconvex program with an economic objective function, is built on an agrohydrological simulation component. The simulation component integrates (1) an explicit stochastic model of soil moisture dynamics of the crop-root zone considering interaction of stochastic rainfall and irrigation with shallow water table effects, (2) a conceptual root zone salt balance model, and (3) the FAO crop yield model. A Particle Swarm Optimization algorithm, linked to the simulation component, solves the resulting nonconvex program with a significantly better computational performance compared to a Monte Carlo-based implicit stochastic optimization model. The model has been tested first by applying it in single-crop irrigation problems through which the effects of the severity of water deficit on the objective function (net benefit), root-zone water balance, and irrigation water needs have been assessed. Then, the model has been applied in the Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in the southwest of Iran. While the maximum net benefit has been obtained for a stress-avoidance (SA) irrigation policy, the highest water profitability is achieved when only about 60% of the water used in the SA policy is applied. The DAID, with respectively 33% of total cultivated area and 37% of total applied water, has produced only 14% of the total net benefit due to low-valued crops and adverse soil and shallow water table conditions.

  12. A novel stochastic modeling method to simulate cooling loads in residential districts

    DOE PAGES

    An, Jingjing; Yan, Da; Hong, Tianzhen; ...

    2017-09-04

    District cooling systems are widely used in urban residential communities in China. Most of such systems are oversized, which leads to wasted investment, low operational efficiency and, thus, waste of energy. The accurate prediction of district cooling loads that can support the rightsizing of cooling plant equipment remains a challenge. This study develops a novel stochastic modeling method that consists of (1) six prototype house models representing most apartments in a district, (2) occupant behavior models of residential buildings reflecting their spatial and temporal diversity as well as their complexity based on a large-scale residential survey in China, and (3) a stochastic sampling process to represent all apartments and occupants in the district. The stochastic method was applied to a case study using the Designer's Simulation Toolkit (DeST) to simulate the cooling loads of a residential district in Wuhan, China. The simulation results agreed well with the measured data based on five performance metrics representing the aggregated cooling consumption, the peak cooling loads, the spatial load distribution, the temporal load distribution and the load profiles. Two prevalent simulation methods were also employed to simulate the district cooling loads. Here, the results showed that oversimplified assumptions about occupant behavior could lead to significant overestimation of the peak cooling load and the total cooling loads in the district. Future work will aim to simplify the workflow and data requirements of the stochastic method for its application, and to explore its use in predicting district heating loads and in commercial or mixed-use districts.
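    The third ingredient, stochastic sampling of prototypes and occupant archetypes across a district, can be sketched as weighted categorical draws. The prototype shares and behaviour categories below are invented for illustration; the paper derives such statistics from its survey data and DeST models.

```python
import random

random.seed(4)

# Hypothetical shares of the six prototype houses and of a few occupant
# archetypes (illustrative numbers, not the survey-derived values).
prototypes = {"P1": 0.30, "P2": 0.25, "P3": 0.15, "P4": 0.12, "P5": 0.10, "P6": 0.08}
behaviours = {"always-on": 0.10, "evening-only": 0.50, "rare-use": 0.40}

def sample_district(n_apartments):
    """Assign every apartment in the district a prototype and an occupant archetype."""
    p_names, p_weights = zip(*prototypes.items())
    b_names, b_weights = zip(*behaviours.items())
    return [
        (random.choices(p_names, weights=p_weights)[0],
         random.choices(b_names, weights=b_weights)[0])
        for _ in range(n_apartments)
    ]

district = sample_district(5000)
```

    Each sampled (prototype, archetype) pair would then drive one building-simulation run, and the district load is the aggregate over all apartments.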

  13. Slow-fast stochastic diffusion dynamics and quasi-stationarity for diploid populations with varying size.

    PubMed

    Coron, Camille

    2016-01-01

    We are interested in the long-time behavior of a diploid population with sexual reproduction and randomly varying population size, characterized by its genotype composition at one bi-allelic locus. The population is modeled by a 3-dimensional birth-and-death process with competition, weak cooperation and Mendelian reproduction. This stochastic process is indexed by a scaling parameter K that goes to infinity, following a large population assumption. When the individual birth and natural death rates are of order K, the sequence of stochastic processes indexed by K converges toward a new slow-fast dynamics with variable population size. We indeed prove the convergence toward 0 of a fast variable giving the deviation of the population from quasi-Hardy-Weinberg equilibrium, while the sequence of slow variables giving the respective numbers of occurrences of each allele converges toward a 2-dimensional diffusion process that reaches (0,0) almost surely in finite time. The population size and the proportion of a given allele converge toward a Wright-Fisher diffusion with stochastically varying population size and diploid selection. We insist on differences between haploid and diploid populations due to population size stochastic variability. Using a nontrivial change of variables, we study the absorption of this diffusion and its long-time behavior conditioned on non-extinction. In particular we prove that this diffusion starting from any non-trivial state and conditioned on not hitting (0,0) admits a unique quasi-stationary distribution. We give numerical approximations of this quasi-stationary behavior in three biologically relevant cases: neutrality, overdominance, and separate niches.
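    An Euler-Maruyama sketch of a Wright-Fisher-type diffusion with selection, the kind of limiting object described above, is given below. For brevity the population size is held fixed rather than stochastically varying, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(p0=0.5, s=0.1, N=500, dt=0.01, n_steps=2000):
    """Euler-Maruyama for dp = s p(1-p) dt + sqrt(p(1-p)/N) dW."""
    p = p0
    for _ in range(n_steps):
        var = max(p * (1.0 - p), 0.0) / N
        p += s * p * (1.0 - p) * dt + np.sqrt(var * dt) * rng.standard_normal()
        p = min(max(p, 0.0), 1.0)   # 0 and 1 are absorbing boundaries
    return p

p_final = simulate()
```

    Conditioning trajectories on non-absorption and histogramming long-time states is the numerical route to the quasi-stationary distribution the abstract studies.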

  14. A novel stochastic modeling method to simulate cooling loads in residential districts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Jingjing; Yan, Da; Hong, Tianzhen

    District cooling systems are widely used in urban residential communities in China. Most of such systems are oversized, which leads to wasted investment, low operational efficiency and, thus, waste of energy. The accurate prediction of district cooling loads that can support the rightsizing of cooling plant equipment remains a challenge. This study develops a novel stochastic modeling method that consists of (1) six prototype house models representing most apartments in a district, (2) occupant behavior models of residential buildings reflecting their spatial and temporal diversity as well as their complexity based on a large-scale residential survey in China, and (3) a stochastic sampling process to represent all apartments and occupants in the district. The stochastic method was applied to a case study using the Designer's Simulation Toolkit (DeST) to simulate the cooling loads of a residential district in Wuhan, China. The simulation results agreed well with the measured data based on five performance metrics representing the aggregated cooling consumption, the peak cooling loads, the spatial load distribution, the temporal load distribution and the load profiles. Two prevalent simulation methods were also employed to simulate the district cooling loads. Here, the results showed that oversimplified assumptions about occupant behavior could lead to significant overestimation of the peak cooling load and the total cooling loads in the district. Future work will aim to simplify the workflow and data requirements of the stochastic method for its application, and to explore its use in predicting district heating loads and in commercial or mixed-use districts.

  15. Statistical mechanics of neocortical interactions: A scaling paradigm applied to electroencephalography

    NASA Astrophysics Data System (ADS)

    Ingber, Lester

    1991-09-01

    A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions. While not useful to yield insights at the single-neuron level, SMNI has demonstrated its capability in describing large-scale properties of short-term memory and electroencephalographic (EEG) systematics. The necessity of including nonlinear and stochastic structures in this development has been stressed. In this paper, a more stringent test is placed on SMNI: The algebraic and numerical algorithms previously developed in this and similar systems are brought to bear to fit large sets of EEG and evoked-potential data being collected to investigate genetic predispositions to alcoholism and to extract brain ``signatures'' of short-term memory. Using the numerical algorithm of very fast simulated reannealing, it is demonstrated that SMNI can indeed fit these data within experimentally observed ranges of its underlying neuronal-synaptic parameters, and the quantitative modeling results are used to examine physical neocortical mechanisms to discriminate high-risk and low-risk populations genetically predisposed to alcoholism. Since this study is a control to span relatively long time epochs, similar to earlier attempts to establish such correlations, this discrimination is inconclusive because of other neuronal activity which can mask such effects. However, the SMNI model is shown to be consistent with EEG data during selective attention tasks and with neocortical mechanisms describing short-term memory previously published using this approach. This paper explicitly identifies similar nonlinear stochastic mechanisms of interaction at the microscopic-neuronal, mesoscopic-columnar, and macroscopic-regional scales of neocortical interactions. 
These results give strong quantitative support for an accurate intuitive picture, portraying neocortical interactions as having common algebraic or physics mechanisms that scale across quite disparate spatial scales and functional or behavioral phenomena, i.e., describing interactions among neurons, columns of neurons, and regional masses of neurons.

  16. Efficient coarse simulation of a growing avascular tumor

    PubMed Central

    Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.

    2013-01-01

    The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. 
Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
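    Coarse projective integration itself is simple to illustrate: run the microscopic simulator for a short burst, estimate the coarse time derivative from the burst, and extrapolate forward. The "microscopic" model below is a stand-in logistic growth step, not the paper's cellular automaton; burst length and projection step are illustrative.

```python
def micro_step(state, dt=0.01):
    """Stand-in microscopic update: one Euler step of logistic growth."""
    return state + dt * state * (1.0 - state / 100.0)

def coarse_projective_integration(state, burst_steps=10, project_dt=0.5,
                                  n_cycles=20, dt=0.01):
    for _ in range(n_cycles):
        before = state
        for _ in range(burst_steps):        # short burst of the fine simulator
            state = micro_step(state, dt)
        slope = (state - before) / (burst_steps * dt)
        state += project_dt * slope         # projective (extrapolation) step
    return state

x = coarse_projective_integration(1.0)      # approaches the carrying capacity 100
```

    The computational saving grows with the ratio of the projection step to the burst length, which is exactly the trade-off discussed above.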

  17. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    NASA Astrophysics Data System (ADS)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from real world is shown as a proof of concept.
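    A toy version of the two-component construction (discrete wet/dry occurrences times continuous depths, followed by an adjustment for coarse-scale consistency) is sketched below. Note that the simple proportional adjustment used here is a placeholder for the paper's procedure, which is designed not to distort the stochastic structure; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def disaggregate(coarse_total, k=24, p_wet=0.3):
    """Split one coarse-scale total into k fine-scale intermittent values."""
    wet = rng.random(k) < p_wet                    # discrete wet/dry occurrences
    depths = wet * rng.exponential(1.0, k)         # continuous depths on wet steps
    if depths.sum() == 0.0:                        # ensure at least one wet interval
        depths[rng.integers(k)] = 1.0
    return depths * (coarse_total / depths.sum())  # enforce coarse-scale consistency

fine = disaggregate(12.0)   # 24 values summing to the coarse total 12.0
```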

  18. Can power-law scaling and neuronal avalanches arise from stochastic dynamics?

    PubMed

    Touboul, Jonathan; Destexhe, Alain

    2010-02-11

    The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined as negative LFP peaks can be close to power-law distributions. However, this result is not robust to change in detection threshold, or when tested using more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically, and show that, indeed, stochastic processes can produce spurious power-law scaling without the presence of underlying self-organized criticality. However, this power law is only apparent in logarithmic representations, and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis reveal with no ambiguity that the avalanche size is distributed as a power law. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute a proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
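    The cautionary point is easy to reproduce: event sizes drawn from a lognormal distribution, which involves no criticality at all, can still yield a convincingly straight line on log-log axes. A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

# "Event sizes" from a lognormal: a thresholded-noise-like law with no
# underlying criticality (parameters are illustrative).
sizes = rng.lognormal(mean=1.0, sigma=1.2, size=50_000)

hist, edges = np.histogram(sizes, bins=np.logspace(0, 2, 25), density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = hist > 0
logx, logy = np.log(centers[mask]), np.log(hist[mask])

# A straight-line fit on log-log axes looks excellent (r close to -1) ...
slope, intercept = np.polyfit(logx, logy, 1)
r = np.corrcoef(logx, logy)[0, 1]
# ... yet a rigorous goodness-of-fit test (e.g. Kolmogorov-Smirnov against
# a fitted power law) would reject the power-law hypothesis here.
```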

  19. Dynamic structural disorder in supported nanoscale catalysts

    NASA Astrophysics Data System (ADS)

    Rehr, J. J.; Vila, F. D.

    2014-04-01

    We investigate the origin and physical effects of "dynamic structural disorder" (DSD) in supported nano-scale catalysts. DSD refers to the intrinsic fluctuating, inhomogeneous structure of such nano-scale systems. In contrast to bulk materials, nano-scale systems exhibit substantial fluctuations in structure, charge, temperature, and other quantities, as well as large surface effects. The DSD is driven largely by the stochastic librational motion of the center of mass and fluxional bonding at the nanoparticle surface due to thermal coupling with the substrate. Our approach for calculating and understanding DSD is based on a combination of real-time density functional theory/molecular dynamics simulations, transient coupled-oscillator models, and statistical mechanics. This approach treats thermal and dynamic effects over multiple time-scales, and includes bond-stretching and -bending vibrations, and transient tethering to the substrate at longer ps time-scales. Potential effects on the catalytic properties of these clusters are briefly explored. Model calculations of molecule-cluster interactions and molecular dissociation reaction paths are presented in which the reactant molecules are adsorbed on the surface of dynamically sampled clusters. This model suggests that DSD can affect both the prefactors and distribution of energy barriers in reaction rates, and thus can significantly affect catalytic activity at the nano-scale.

  20. A Scalable Approach to Probabilistic Latent Space Inference of Large-Scale Networks

    PubMed Central

    Yin, Junming; Ho, Qirong; Xing, Eric P.

    2014-01-01

    We propose a scalable approach for making inference about latent spaces of large networks. With a succinct representation of networks as a bag of triangular motifs, a parsimonious statistical model, and an efficient stochastic variational inference algorithm, we are able to analyze real networks with over a million vertices and hundreds of latent roles on a single machine in a matter of hours, a setting that is out of reach for many existing methods. When compared to the state-of-the-art probabilistic approaches, our method is several orders of magnitude faster, with competitive or improved accuracy for latent space recovery and link prediction. PMID:25400487

  1. Planetary Rings

    NASA Astrophysics Data System (ADS)

    Esposito, Larry W.

    2011-07-01

    Preface; 1. Introduction: the allure of ringed planets; 2. Studies of planetary rings 1610-2004; 3. Diversity of planetary rings; 4. Individual ring particles and their collisions; 5. Large-scale ring evolution; 6. Moons confine and sculpt rings; 7. Explaining ring phenomena; 8. N-Body simulations; 9. Stochastic models; 10. Age and evolution of rings; 11. Saturn's mysterious F ring; 12. Neptune's partial rings; 13. Jupiter's ring-moon system after Galileo; 14. Ring photometry; 15. Dusty rings; 16. Cassini observations; 17. Summary: the big questions; Glossary; References; Index.

  2. Modeling the Webgraph: How Far We Are

    NASA Astrophysics Data System (ADS)

    Donato, Debora; Laura, Luigi; Leonardi, Stefano; Millozzi, Stefano

    The following sections are included: * Introduction * Preliminaries * WebBase * In-degree and out-degree * PageRank * Bipartite cliques * Strongly connected components * Stochastic models of the webgraph * Models of the webgraph * A multi-layer model * Large scale simulation * Algorithmic techniques for generating and measuring webgraphs * Data representation and multifiles * Generating webgraphs * Traversal with two bits for each node * Semi-external breadth first search * Semi-external depth first search * Computation of the SCCs * Computation of the bow-tie regions * Disjoint bipartite cliques * PageRank * Summary and outlook

  3. Control of stochastic sensitivity in a stabilization problem for gas discharge system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bashkirtseva, Irina

    2015-11-30

    We consider a nonlinear dynamic stochastic system with control. A problem of stochastic sensitivity synthesis of the equilibrium is studied. A mathematical technique for the solution of this problem is discussed. This technique is applied to the problem of the stabilization of the operating mode for the stochastic gas discharge system. We construct a feedback regulator that reduces the stochastic sensitivity of the equilibrium, suppresses large-amplitude oscillations, and provides proper operation of this engineering device.

  4. Stochastic Convection Parameterizations

    NASA Technical Reports Server (NTRS)

    Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios

    2012-01-01

    Keywords: computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts

  5. Single particle momentum and angular distributions in hadron-hadron collisions at ultrahigh energies

    NASA Technical Reports Server (NTRS)

    Chou, T. T.; Chen, N. Y.

    1985-01-01

    The forward-backward charged multiplicity distribution P(n_F, n_B) of events in the 540 GeV antiproton-proton collider has been extensively studied by the UA5 Collaboration. It was pointed out that the distribution with respect to n = n_F + n_B satisfies approximate KNO scaling and that with respect to Z = n_F - n_B is binomial. The geometrical model of hadron-hadron collisions interprets the large multiplicity fluctuation as due to the widely different nature of collisions at different impact parameters b. For a single impact parameter b, the collision in the geometrical model should exhibit stochastic behavior. This separation of the stochastic and nonstochastic (KNO) aspects of multiparticle production processes gives conceptually a lucid and attractive picture of such collisions, leading to the concept of partition temperature T_p and the single particle momentum spectrum to be discussed in detail.
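    The separation described above can be illustrated with a toy event generator: a broad (impact-parameter-driven) law for the total multiplicity n, combined with a stochastic 50/50 forward-backward split so that Z = n_F - n_B is binomial in character. The negative-binomial total used below is purely illustrative, not the UA5 measurement.

```python
import numpy as np

rng = np.random.default_rng(8)

# Broad total-multiplicity law standing in for impact-parameter mixing
# (negative binomial; illustrative parameters).
n = rng.negative_binomial(3, 0.1, size=100_000)

# Stochastic aspect: each particle goes forward or backward with p = 1/2,
# so Z = n_F - n_B is binomial in character for every fixed n.
n_F = rng.binomial(n, 0.5)
Z = 2 * n_F - n              # n_F - n_B with n_B = n - n_F

# For the 50/50 split, Var(Z | n) = n, hence Var(Z) = E[n] overall.
```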

  6. Extreme fluctuations in stochastic network coordination with time delays

    NASA Astrophysics Data System (ADS)

    Hunt, D.; Molnár, F.; Szymanski, B. K.; Korniss, G.

    2015-12-01

    We study the effects of uniform time delays on the extreme fluctuations in stochastic synchronization and coordination problems with linear couplings in complex networks. We obtain the average size of the fluctuations at the nodes from the behavior of the underlying modes of the network. We then obtain the scaling behavior of the extreme fluctuations with system size, as well as the distribution of the extremes on complex networks, and compare them to those on regular one-dimensional lattices. For large complex networks, when the delay is not too close to the critical one, fluctuations at the nodes effectively decouple, and the limit distributions converge to the Fisher-Tippett-Gumbel density. In contrast, fluctuations in low-dimensional spatial graphs are strongly correlated, and the limit distribution of the extremes is the Airy density. Finally, we also explore the effects of nonlinear couplings on the stability and on the extremes of the synchronization landscapes.
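
    The decoupling picture above can be checked in its simplest limit: when node fluctuations are effectively independent Gaussians, the expected extreme grows like sqrt(2 ln N) at leading order and its distribution approaches the Fisher-Tippett-Gumbel form. A minimal Monte Carlo sketch (illustrative only, not the paper's delay-coupled network model):

```python
import math
import random

def extreme_height(n_nodes, rng):
    # extreme fluctuation in one snapshot of N uncorrelated Gaussian nodes
    return max(rng.gauss(0.0, 1.0) for _ in range(n_nodes))

rng = random.Random(42)
n_nodes = 1000
maxima = [extreme_height(n_nodes, rng) for _ in range(2000)]
mean_max = sum(maxima) / len(maxima)
# leading-order prediction sqrt(2 ln N) ~ 3.72 for N = 1000; finite-N
# corrections pull the measured mean somewhat below this value
```

    Fitting the histogram of `maxima` (e.g. by skewness) would show the characteristic Gumbel asymmetry rather than a Gaussian shape.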

  7. Finding Order in Randomness: Single-Molecule Studies Reveal Stochastic RNA Processing | Center for Cancer Research

    Cancer.gov

    Producing a functional eukaryotic messenger RNA (mRNA) requires the coordinated activity of several large protein complexes to initiate transcription, elongate nascent transcripts, splice together exons, and cleave and polyadenylate the 3’ end. Kinetic competition between these various processes has been proposed to regulate mRNA maturation, but this model could lead to multiple, randomly determined, or stochastic, pathways or outcomes. Regulatory checkpoints have been suggested as a means of ensuring quality control. However, current methods have been unable to tease apart the contributions of these processes at a single gene or on a time scale that could provide mechanistic insight. To begin to investigate the kinetic relationship between transcription and splicing, Daniel Larson, Ph.D., of CCR’s Laboratory of Receptor Biology and Gene Expression, and his colleagues employed a single-molecule RNA imaging approach to monitor production and processing of a human β-globin reporter gene in living cells.

  8. Stochastic competitive learning in complex networks.

    PubMed

    Silva, Thiago Christiano; Zhao, Liang

    2012-03-01

    Competitive learning is an important machine learning approach which is widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible, while attempting to reject intruder particles. The particle's walking rule is composed of a stochastic combination of random and preferential movements. The model has been applied to solve community detection and data clustering problems. Computer simulations reveal that the proposed technique achieves high precision in community and cluster detection, as well as low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters by using an evaluator index that monitors the information generated by the competition process itself. We hope this paper will provide an alternative way to study competitive learning.
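
    The walking rule described above, a stochastic mix of random and preferential moves, can be sketched in a few lines. The function names, the visit-count reinforcement, and the 60/40 mixing probability below are illustrative assumptions, not the authors' exact scheme:

```python
import random

def competitive_walk(adj, n_particles, steps, p_pref=0.6, seed=1):
    """Toy competitive learning: particles walk a graph, and each visit
    strengthens the visiting particle's domination of that node."""
    rng = random.Random(seed)
    nodes = list(adj)
    domination = {v: [1.0] * n_particles for v in nodes}  # visit counts
    pos = [rng.choice(nodes) for _ in range(n_particles)]
    for _ in range(steps):
        for k in range(n_particles):
            nbrs = adj[pos[k]]
            if rng.random() < p_pref:
                # preferential move: favour nodes this particle dominates
                weights = [domination[u][k] for u in nbrs]
                pos[k] = rng.choices(nbrs, weights=weights)[0]
            else:
                pos[k] = rng.choice(nbrs)  # purely random move
            domination[pos[k]][k] += 1.0
    # assign each node to the particle that dominates it most
    return {v: max(range(n_particles), key=lambda p: domination[v][p])
            for v in nodes}

# two triangles joined by a single edge -> a two-community toy graph
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = competitive_walk(adj, n_particles=2, steps=2000)
```

    With reinforcement switched on, each particle tends to entrench itself in one of the two triangles, which is the intuition behind using the competition dynamics for community detection.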

  9. Constraining stochastic gravitational wave background from weak lensing of CMB B-modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaikh, Shabbir; Mukherjee, Suvodip; Souradeep, Tarun

    2016-09-01

    A stochastic gravitational wave background (SGWB) will affect the CMB anisotropies via weak lensing. Unlike weak lensing due to large scale structure which only deflects photon trajectories, a SGWB has an additional effect of rotating the polarization vector along the trajectory. We study the relative importance of these two effects, deflection and rotation, specifically in the context of E-mode to B-mode power transfer caused by weak lensing due to SGWB. Using weak lensing distortion of the CMB as a probe, we derive constraints on the spectral energy density (Ω_GW) of the SGWB, sourced at different redshifts, without assuming any particular model for its origin. We present these bounds on Ω_GW for different power-law models characterizing the SGWB, indicating the threshold above which observable imprints of SGWB must be present in CMB.

  10. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE PAGES

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.; ...

    2017-11-26

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
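
    The rate summation concept underlying the derivation can be illustrated with a toy example: because development rate responds nonlinearly to temperature, an insect exposed to fluctuating temperatures completes development at a different pace than one held at the mean temperature. The hump-shaped rate curve below is hypothetical, chosen only to make the effect visible:

```python
def dev_rate(temp_c):
    # hypothetical development rate (fraction of development per day),
    # positive between 5 and 35 degrees C and peaking at 20 degrees C
    return max(0.0, 0.002 * (temp_c - 5.0) * (35.0 - temp_c))

def days_to_develop(daily_temps):
    # rate summation: accumulate daily rates until development completes
    total = 0.0
    for day, t in enumerate(daily_temps, start=1):
        total += dev_rate(t)
        if total >= 1.0:
            return day
    return None

constant = days_to_develop([20.0] * 100)          # held at the mean: 3 days
fluctuating = days_to_develop([10.0, 30.0] * 50)  # same mean, more variance
```

    Here the fluctuating regime develops more slowly (4 days vs 3) because both extremes sit below the peak of the rate curve; this sensitivity of phenology to rate variability is exactly what the integral projection models are built to capture.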

  11. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.

  12. Statistical analysis of Hasegawa-Wakatani turbulence

    NASA Astrophysics Data System (ADS)

    Anderson, Johan; Hnat, Bogdan

    2017-06-01

    Resistive drift wave turbulence is a multipurpose paradigm that can be used to understand transport at the edge of fusion devices. The Hasegawa-Wakatani model captures the essential physics of drift turbulence while retaining the simplicity needed to gain a qualitative understanding of this process. We provide a theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent events in Hasegawa-Wakatani turbulence with enforced equipartition of energy between large-scale zonal flows and small-scale drift turbulence. We find that for a wide range of adiabatic index values, the stochastic component representing the small-scale turbulent eddies of the flow, obtained from the autoregressive integrated moving average model, exhibits super-diffusive statistics, consistent with intermittent transport. The PDFs of large events (above one standard deviation) are well approximated by the Laplace distribution, while small events often exhibit a Gaussian character. Furthermore, zonal flows exert a strong influence, for example via shearing and subsequent viscous dissipation, maintaining a sub-diffusive character of the fluxes.

  13. Macroweather Predictions and Climate Projections using Scaling and Historical Observations

    NASA Astrophysics Data System (ADS)

    Hébert, R.; Lovejoy, S.; Del Rio Amador, L.

    2017-12-01

    There are two fundamental time scales that are pertinent to decadal forecasts and multidecadal projections. The first is the lifetime of planetary scale structures, about 10 days (equal to the deterministic predictability limit), and the second is - in the anthropocene - the scale at which the forced anthropogenic variability exceeds the internal variability (around 16 - 18 years). These two time scales define three regimes of variability: weather, macroweather and climate, which are respectively characterized by increasing, decreasing and then increasing variability with scale. We discuss how macroweather temperature variability can be skilfully predicted to its theoretical stochastic predictability limits by exploiting its long-range memory with the Stochastic Seasonal and Interannual Prediction System (StocSIPS). At multi-decadal timescales, the temperature response to forcing is approximately linear, and this can be exploited to make projections with a Green's function, or Climate Response Function (CRF). To make the problem tractable, we exploit the temporal scaling symmetry and restrict our attention to global mean forcing and temperature response using a scaling CRF characterized by the scaling exponent H and an inner scale of linearity τ. An aerosol linear scaling factor α and a non-linear volcanic damping exponent ν were introduced to account for the large uncertainty in these forcings. We estimate the model and forcing parameters by Bayesian inference using historical data, and these allow us to analytically calculate a median (and likely 66% range) for the transient climate response and for the equilibrium climate sensitivity: 1.6 K ([1.5, 1.8] K) and 2.4 K ([1.9, 3.4] K) respectively. Aerosol forcing typically has large uncertainty, and we find a modern (2005) forcing very likely range (90%) of [-1.0, -0.3] Wm-2 with median at -0.7 Wm-2.
Projecting to 2100, we find that to keep the warming below 1.5 K, future emissions must undergo cuts similar to Representative Concentration Pathway (RCP) 2.6, for which the probability of remaining under 1.5 K is 48%. RCP 4.5 and RCP 8.5-like futures overshoot with very high probability. This underscores that over the next century, the state of the environment will be strongly influenced by past, present and future economic policies.
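
    The Green's-function projection step can be sketched as a discrete convolution of the forcing with a power-law response kernel. The functional form below, G(u) = ((u + tau)/tau)^(H-1) on a yearly grid, is an illustrative simplification of the scaling-CRF idea, not the paper's calibrated model:

```python
def scaling_crf_response(forcing, H, tau):
    """Toy global-mean temperature response T(t) = sum_s G(t - s) F(s)
    with a power-law climate response function G(u) = ((u + tau)/tau)**(H-1).
    Units are arbitrary; H and tau play the roles of the paper's scaling
    exponent and inner scale, but the discrete form is a sketch only."""
    return [sum(forcing[s] * ((t - s + tau) / tau) ** (H - 1.0)
                for s in range(t + 1))
            for t in range(len(forcing))]

# a step forcing yields a response that keeps creeping upward (long memory)
response = scaling_crf_response([1.0] * 30, H=0.5, tau=2.0)
```

    The slow, unbounded creep under constant forcing (for H < 1) is the long-memory behaviour that distinguishes a scaling CRF from a simple exponential relaxation.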

  14. One-Tube-Only Standardized Site-Directed Mutagenesis: An Alternative Approach to Generate Amino Acid Substitution Collections

    PubMed Central

    Mingo, Janire; Erramuzpe, Asier; Luna, Sandra; Aurtenetxe, Olaia; Amo, Laura; Diez, Ibai; Schepens, Jan T. G.; Hendriks, Wiljan J. A. J.; Cortés, Jesús M.; Pulido, Rafael

    2016-01-01

    Site-directed mutagenesis (SDM) is a powerful tool to create defined collections of protein variants for experimental and clinical purposes, but effectiveness is compromised when a large number of mutations is required. We present here a one-tube-only standardized SDM approach that generates comprehensive collections of amino acid substitution variants, including scanning- and single site-multiple mutations. The approach combines unified mutagenic primer design with the mixing of multiple distinct primer pairs and/or plasmid templates to increase the yield of a single inverse-PCR mutagenesis reaction. Also, a user-friendly program for automatic design of standardized primers for Ala-scanning mutagenesis is made available. Experimental results were compared with a modeling approach together with stochastic simulation data. For single site-multiple mutagenesis purposes and for simultaneous mutagenesis in different plasmid backgrounds, combination of primer sets and/or plasmid templates in a single reaction tube yielded the distinct mutations in a stochastic fashion. For scanning mutagenesis, we found that a combination of overlapping primer sets in a single PCR reaction allowed the yield of different individual mutations, although this yield did not necessarily follow a stochastic trend. Double mutants were generated when the overlap of primer pairs was below 60%. Our results illustrate that one-tube-only SDM effectively reduces the number of reactions required in large-scale mutagenesis strategies, facilitating the generation of comprehensive collections of protein variants suitable for functional analysis. PMID:27548698
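
    When distinct mutations come out of a single tube stochastically, a practical follow-up question is how many clones must be screened to recover every variant. Under the idealized assumption that each clone carries one of n variants with equal probability (a simplification: the paper notes scanning yields were not always stochastic), the classical coupon-collector estimate applies:

```python
def expected_clones_for_full_coverage(n_variants):
    # coupon-collector expectation: n * (1 + 1/2 + ... + 1/n) clones on
    # average before every one of the n variants is seen at least once
    return n_variants * sum(1.0 / i for i in range(1, n_variants + 1))

# a 19-variant single-site saturation library needs ~67 clones on average
needed = expected_clones_for_full_coverage(19)
```

    The super-linear growth of this estimate is one reason pooled one-tube reactions pay off mainly when downstream screening is cheap.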

  15. Universality in stochastic exponential growth.

    PubMed

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
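
    A quick way to see the baseline the stochastic Hinshelwood cycle is contrasted against is to simulate geometric Brownian motion, for which the ensemble-mean size grows exactly exponentially, E[S(t)] = S(0)·exp(µt). The sketch below is GBM, not the SHC itself, and the parameter values are arbitrary:

```python
import math
import random

def gbm_sizes(n_cells, mu, sigma, t, seed=0):
    """Ensemble of sizes under geometric Brownian motion: each sample is
    exp((mu - sigma^2/2) t + sigma W(t)) with W(t) ~ N(0, t)."""
    rng = random.Random(seed)
    return [math.exp((mu - 0.5 * sigma ** 2) * t
                     + sigma * rng.gauss(0.0, math.sqrt(t)))
            for _ in range(n_cells)]

sizes = gbm_sizes(5000, mu=0.1, sigma=0.2, t=1.0)
mean_size = sum(sizes) / len(sizes)
# the ensemble mean should track E[S(t)] = exp(mu * t) ~ 1.105
```

    Rescaling `sizes` by `mean_size` collapses the distribution onto a single lognormal curve; the paper's point is that observed single-cell data show an analogous collapse whose detailed fluctuation signatures discriminate the SHC from GBM-like models.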

  16. Universality in Stochastic Exponential Growth

    NASA Astrophysics Data System (ADS)

    Iyer-Biswas, Srividya; Crooks, Gavin E.; Scherer, Norbert F.; Dinner, Aaron R.

    2014-07-01

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.

  17. Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime

    NASA Astrophysics Data System (ADS)

    Calcagni, Gianluca; Ronco, Michele

    2017-10-01

    We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.

  18. The Kolmogorov-Obukhov Statistical Theory of Turbulence

    NASA Astrophysics Data System (ADS)

    Birnir, Björn

    2013-08-01

    In 1941 Kolmogorov and Obukhov postulated the existence of a statistical theory of turbulence, which allows the computation of statistical quantities that can be simulated and measured in a turbulent system. These are quantities such as the moments, the structure functions and the probability density functions (PDFs) of the turbulent velocity field. In this paper we will outline how to construct this statistical theory from the stochastic Navier-Stokes equation. The additive noise in the stochastic Navier-Stokes equation is generic noise given by the central limit theorem and the large deviation principle. The multiplicative noise consists of jumps multiplying the velocity, modeling jumps in the velocity gradient. We first estimate the structure functions of turbulence and establish the Kolmogorov-Obukhov 1962 scaling hypothesis with the She-Leveque intermittency corrections. Then we compute the invariant measure of turbulence, writing the stochastic Navier-Stokes equation as an infinite-dimensional Ito process, and solving the linear Kolmogorov-Hopf functional differential equation for the invariant measure. Finally we project the invariant measure onto the PDF. The PDFs turn out to be the normalized inverse Gaussian (NIG) distributions of Barndorff-Nielsen, and compare well with PDFs from simulations and experiments.

  19. Fixation, transient landscape, and diffusion dilemma in stochastic evolutionary game dynamics

    NASA Astrophysics Data System (ADS)

    Zhou, Da; Qian, Hong

    2011-09-01

    Agent-based stochastic models for finite populations have recently received much attention in the game theory of evolutionary dynamics. Both the ultimate fixation and the pre-fixation transient behavior are important to a full understanding of the dynamics. In this paper, we study the transient dynamics of the well-mixed Moran process by constructing a landscape function. It is shown that the landscape plays a central role as a theoretical “device” that integrates several lines of inquiry: the stable behavior of the replicator dynamics, the long-time fixation, and the continuous diffusion approximation associated with asymptotically large populations. Several issues relating to the transient dynamics are discussed: (i) the multiple-time-scale phenomenon associated with intra- and inter-attractoral dynamics; (ii) a discontinuous transition in the stochastically stationary process akin to the Maxwell construction in equilibrium statistical physics; and (iii) the dilemma the diffusion approximation faces as a continuous approximation of the discrete evolutionary dynamics. It is found that rare events with exponentially small probabilities, corresponding to uphill movements and barrier crossing in a landscape with multiple wells, made possible by strong nonlinear dynamics, play an important role in understanding the origin of the complexity in evolutionary, nonlinear biological systems.
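
    For the well-mixed Moran process the ultimate fixation probability is available in closed form, which makes it a convenient check on any landscape-based analysis. For i mutants of constant relative fitness r in a population of size N, the classical result is:

```python
def moran_fixation_prob(N, i, r):
    """Classical fixation probability of i mutants with relative fitness r
    in a well-mixed Moran population of size N (r = 1 is the neutral case):
    rho = (1 - r**-i) / (1 - r**-N), reducing to i/N when r = 1."""
    if r == 1.0:
        return i / N
    return (1.0 - r ** (-i)) / (1.0 - r ** (-N))

# a single neutral mutant fixes with probability 1/N, while a strongly
# advantageous one (r = 2) fixes with probability close to 1/2
p_neutral = moran_fixation_prob(100, 1, 1.0)
p_advant = moran_fixation_prob(100, 1, 2.0)
```

    The transient questions the paper focuses on (how long fixation takes, and through which wells of the landscape the process passes) are precisely what this closed-form endpoint result does not capture.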

  20. A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Lindsay; Zéphyr, Luckny; Cardell, Judith B.

    The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.

  1. A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, C. Lindsay; Zéphyr, Luckny; Liu, Jialin

    The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.

  2. Evolution of the magnetorotational instability on initially tangled magnetic fields

    NASA Astrophysics Data System (ADS)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.; Subramanian, Kandaswamy

    2017-12-01

    The initial magnetic field of previous magnetorotational instability (MRI) simulations has always included a significant system-scale component, even if stochastic. However, it is of conceptual and practical interest to assess whether the MRI can grow when the initial field is turbulent. The ubiquitous presence of turbulent or random flows in astrophysical plasmas generically leads to a small-scale dynamo (SSD), which would provide initial seed turbulent velocity and magnetic fields in the plasma that becomes an accretion disc. Can the MRI grow from these more realistic initial conditions? To address this, we supply a standard shearing box with isotropically forced SSD generated magnetic and velocity fields as initial conditions and remove the forcing. We find that if the initially supplied fields are too weak or too incoherent, they decay from the initial turbulent cascade faster than they can grow via the MRI. When the initially supplied fields are sufficient to allow MRI growth and sustenance, the saturated stresses, large-scale fields and power spectra match those of the standard zero net flux MRI simulation with an initial large-scale vertical field.

  3. Lagrangian velocity and acceleration correlations of large inertial particles in a closed turbulent flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machicoane, Nathanaël; Volk, Romain

    We investigate the response of large inertial particles to turbulent fluctuations in an inhomogeneous and anisotropic flow. We conduct a Lagrangian study using particles both heavier and lighter than the surrounding fluid, and whose diameters are comparable to the flow integral scale. Both velocity and acceleration correlation functions are analyzed to compute the Lagrangian integral time and the acceleration time scale of such particles. Knowing how size and density affect these time scales is crucial in understanding particle dynamics and may permit stochastic process modeling using two-time models (for instance, Sawford's). As particles are tracked over long times in the quasi-totality of a closed flow, the mean flow influences their behaviour and also biases the velocity time statistics, in particular the velocity correlation functions. By using a method that allows for the computation of turbulent velocity trajectories, we can obtain unbiased Lagrangian integral times. This is particularly useful in accessing the scale separation for such particles and in comparing it to the case of fluid particles in a similar configuration.

  4. Etiology and treatment of hematological neoplasms: stochastic mathematical models.

    PubMed

    Radivoyevitch, Tomas; Li, Huamin; Sachs, Rainer K

    2014-01-01

    Leukemias are driven by stemlike cancer cells (SLCC), whose initiation, growth, response to treatment, and posttreatment behavior are often "stochastic", i.e., differ substantially even among very similar patients for reasons not observable with present techniques. We review the probabilistic mathematical methods used to analyze stochastics and give two specific examples. The first example concerns a treatment protocol, e.g., for acute myeloid leukemia (AML), where intermittent cytotoxic drug dosing (e.g., once each weekday) is used with intent to cure. We argue mathematically that, if independent SLCC are growing stochastically during prolonged treatment, then, other things being equal, front-loading doses are more effective for tumor eradication than back loading. We also argue that the interacting SLCC dynamics during treatment is often best modeled by considering SLCC in microenvironmental niches, with SLCC-SLCC interactions occurring only among SLCC within the same niche, and we present a stochastic dynamics formalism, involving "Poissonization," applicable in such situations. Interactions at a distance due to partial control of total cell numbers are also considered. The second half of this chapter concerns chromosomal aberrations, lesions known to cause some leukemias. A specific example is the induction of a Philadelphia chromosome by ionizing radiation, subsequent development of chronic myeloid leukemia (CML), CML treatment, and treatment outcome. This time evolution involves a coordinated sequence of > 10 steps, each stochastic in its own way, at the subatomic, molecular, macromolecular, cellular, tissue, and population scales, with corresponding time scales ranging from picoseconds to decades. We discuss models of these steps and progress in integrating models across scales.

  5. FASTPM: a new scheme for fast simulations of dark matter and haloes

    NASA Astrophysics Data System (ADS)

    Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick

    2016-12-01

    We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well to a very large number of CPUs. In contrast to the Comoving-Lagrangian (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing code complexity and memory requirements. We compare FASTPM with different numbers of steps (Ns) and force resolution factors (B) against three benchmarks: the halo mass function from a friends-of-friends halo finder; the halo and dark matter power spectra; and the cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, the Ns = 10, B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to the Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than a 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
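
    The kick and drift factors mentioned above belong to the leapfrog (kick-drift-kick) time stepping that PM solvers use; FASTPM's contribution is to modify those factors so that even very large steps reproduce linear (1LPT) growth. A plain, unmodified KDK step on a toy problem looks like this (the modified cosmological factors themselves are not reproduced here):

```python
def leapfrog(x, v, accel, dt, n_steps):
    # kick-drift-kick (KDK) leapfrog: symplectic, time-reversible, and the
    # skeleton of PM N-body time stepping
    for _ in range(n_steps):
        v += 0.5 * dt * accel(x)   # half kick
        x += dt * v                # drift
        v += 0.5 * dt * accel(x)   # half kick
    return x, v

# sanity check on a harmonic oscillator: a symplectic integrator keeps the
# energy error bounded at O(dt^2) over many periods
x, v = leapfrog(1.0, 0.0, lambda x: -x, dt=0.05, n_steps=1000)
energy = 0.5 * v * v + 0.5 * x * x
```

    In an expanding universe the `0.5 * dt` and `dt` prefactors become integrals over the scale factor; choosing them so that the drift reproduces the Zel'dovich displacement exactly is the modification FASTPM makes.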

  6. Stochastic clustering of material surface under high-heat plasma load

    NASA Astrophysics Data System (ADS)

    Budaev, Viacheslav P.

    2017-11-01

    The results of a study of surfaces formed under high-temperature plasma loads on various materials such as tungsten, carbon and stainless steel are presented. High-temperature plasma irradiation leads to inhomogeneous stochastic clustering of the surface with self-similar granularity (fractality) on scales from the nanoscale to the macroscale. Cauliflower-like structures of tungsten and carbon materials are formed under high heat plasma loads in fusion devices. The statistical characteristics of hierarchical granularity and scale invariance are estimated. They differ qualitatively from the roughness of the ordinary Brownian surface, which is possibly due to universal mechanisms of stochastic clustering of material surfaces under the influence of high-temperature plasma.

  7. Large-scale Individual-based Models of Pandemic Influenza Mitigation Strategies

    NASA Astrophysics Data System (ADS)

    Kadau, Kai; Germann, Timothy; Longini, Ira; Macken, Catherine

    2007-03-01

    We have developed a large-scale stochastic simulation model to investigate the spread of a pandemic strain of influenza virus through the U.S. population of 281 million people, to assess the likely effectiveness of various potential intervention strategies including antiviral agents, vaccines, and modified social mobility (including school closure and travel restrictions) [1]. The heterogeneous population structure and mobility are based on Census and Department of Transportation data where available. Our simulations demonstrate that, in a highly mobile population, restricting travel after an outbreak is detected is likely to delay slightly the time course of the outbreak without impacting the eventual number ill. For large basic reproductive numbers R0, we predict that multiple strategies in combination (involving both social and medical interventions) will be required to achieve a substantial reduction in illness rates. [1] T. C. Germann, K. Kadau, I. M. Longini, and C. A. Macken, Proc. Natl. Acad. Sci. (USA) 103, 5935-5940 (2006).
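
    The mechanics of a stochastic epidemic with an intervention-reduced contact rate can be sketched with a chain-binomial SIR model. This is a drastic simplification of the 281-million-agent model (no population structure, no mobility), with all parameter values chosen only for illustration:

```python
import math
import random

def stochastic_sir(n, i0, beta, gamma, days, rng):
    """Minimal chain-binomial SIR: each day, each susceptible is infected
    with probability 1 - exp(-beta * I / N) and each infective recovers
    with probability gamma. Returns the total ever infected."""
    s, i, r = n - i0, i0, 0
    for _ in range(days):
        p_inf = 1.0 - math.exp(-beta * i / n)  # per-susceptible daily risk
        new_i = sum(1 for _ in range(s) if rng.random() < p_inf)
        new_r = sum(1 for _ in range(i) if rng.random() < gamma)
        s, i, r = s - new_i, i + new_i - new_r, r + new_r
    return i + r

rng = random.Random(7)
baseline = stochastic_sir(2000, 20, beta=0.5, gamma=0.25, days=300, rng=rng)
mitigated = stochastic_sir(2000, 20, beta=0.2, gamma=0.25, days=300, rng=rng)
```

    With beta/gamma = 2 the baseline run infects most of the population, while cutting contacts so that beta/gamma < 1 (a crude stand-in for combined interventions) keeps the outbreak small; the paper's point is that achieving such a reduction realistically requires several strategies in combination.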

  8. Projection Effects of Large-scale Structures on Weak-lensing Peak Abundances

    NASA Astrophysics Data System (ADS)

    Yuan, Shuo; Liu, Xiangkun; Pan, Chuzhong; Wang, Qiao; Fan, Zuhui

    2018-04-01

    High peaks in weak lensing (WL) maps originate predominantly from the lensing effects of single massive halos. Their abundance is therefore closely related to the halo mass function, making it a powerful cosmological probe. However, besides individual massive halos, large-scale structures (LSS) along lines of sight also contribute to the peak signals. In this paper, with ray-tracing simulations, we investigate the LSS projection effects. We show that for current surveys with large shape noise, the stochastic LSS effects are subdominant. For future WL surveys with source galaxies having a median redshift z_med ∼ 1 or higher, however, they are significant. For the cosmological constraints derived from observed WL high-peak counts, severe biases can occur if the LSS effects are not properly taken into account. We extend the model of Fan et al. by incorporating the LSS projection effects into the theoretical considerations. By comparing with simulation results, we demonstrate the good performance of the improved model and its applicability in cosmological studies.

  9. Stochasticity and organization of tropical convection: Role of stratiform heating in the simulation of MJO in an aquaplanet coarse resolution GCM using a stochastic multicloud parameterization

    NASA Astrophysics Data System (ADS)

    Khouider, B.; Majda, A.; Deng, Q.; Ravindran, A. M.

    2015-12-01

    Global climate models (GCMs) are large computer codes based on the discretization of the equations of atmospheric and oceanic motions coupled to various processes of transfer of heat, moisture and other constituents between land, atmosphere, and oceans. Because of computing power limitations, typical GCM grid resolution is on the order of 100 km, and the effects on the climate system of many physical processes occurring on smaller scales are represented through various closure recipes known as parameterizations. The parameterization of convective motions and of the many processes associated with cumulus clouds, such as the exchange of latent heat and cloud radiative forcing, is believed to be behind much of the uncertainty in GCMs. Based on a lattice interacting particle system, the stochastic multicloud model (SMCM) provides a novel and efficient representation of the unresolved variability in GCMs due to organized tropical convection and cloud cover. It is widely recognized that stratiform heating contributes significantly to tropical rainfall and to the dynamics of tropical convective systems by inducing a front-to-rear tilt in the heating profile. Stratiform anvils forming in the wake of deep convection play a central role in the dynamics of tropical mesoscale convective systems. Here, aquaplanet simulations with a warm-pool-like surface forcing, based on a coarse-resolution GCM (~170 km grid mesh) coupled with the SMCM, are used to demonstrate the importance of stratiform heating for the organization of convection on planetary and intraseasonal scales. When some key model parameters are set to produce higher stratiform heating fractions, the model produces low-frequency and planetary-scale Madden-Julian oscillation (MJO)-like wave disturbances, while lower to moderate stratiform heating fractions yield mainly synoptic-scale convectively coupled Kelvin-like waves. Because the organization is rooted in the stratiform instability, it is conjectured here that the strength and extent of stratiform downdrafts are key contributors to the scale selection of convective organization, perhaps through mechanisms essentially similar to those of mesoscale convective systems.
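    The lattice Markov-chain idea behind the SMCM can be caricatured for a single grid column cycling among cloud types. The states and transition probabilities below are illustrative placeholders, not the calibrated SMCM rates:

```python
import random

def cloud_state_counts(steps=10000, seed=8):
    """Single-site sketch of a stochastic multicloud-type Markov chain: a
    grid column switches among clear/congestus/deep/stratiform states."""
    # transition[s] = list of (next_state, probability); each row sums to 1
    transition = {
        "clear":      [("clear", 0.90), ("congestus", 0.10)],
        "congestus":  [("congestus", 0.70), ("deep", 0.20), ("clear", 0.10)],
        "deep":       [("deep", 0.60), ("stratiform", 0.30), ("clear", 0.10)],
        "stratiform": [("stratiform", 0.80), ("clear", 0.20)],
    }
    rng = random.Random(seed)
    state = "clear"
    counts = {name: 0 for name in transition}
    for _ in range(steps):
        u, acc = rng.random(), 0.0
        for nxt, p in transition[state]:
            acc += p
            if u < acc:
                state = nxt
                break
        counts[state] += 1
    return counts
```

    Raising the deep-to-stratiform probability in such a chain plays the role of the higher stratiform heating fractions discussed in the abstract.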

  10. On temporal stochastic modeling of precipitation, nesting models across scales

    NASA Astrophysics Data System (ADS)

    Paschalis, Athanasios; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2014-01-01

    We analyze the performance of composite stochastic models of temporal precipitation which can satisfactorily reproduce precipitation properties across a wide range of temporal scales. The rationale is that a combination of stochastic precipitation models, each most appropriate for a specific limited range of temporal scales, leads to better overall performance across a wider range of scales than single models alone. We investigate different model combinations. For the coarse (daily) scale these are models based on alternating renewal processes, Markov chains, and Poisson cluster models, which are then combined with a microcanonical multiplicative random cascade model to disaggregate precipitation to finer (minute) scales. The composite models were tested on data at four sites in different climates. The results show that, compared with single models, the combinations improve the performance in key statistics such as probability distributions of precipitation depth, autocorrelation structure, intermittency, and reproduction of extremes, while remaining reasonably parsimonious. No model combination was found to outperform the others at all sites and for all statistics; however, we provide insight into the capabilities of specific model combinations. The results for the four different climates are similar, which suggests a degree of generality and wider applicability of the approach.
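    The fine-scale half of such a composite model, the microcanonical multiplicative random cascade, conserves the coarse-scale total by construction. A minimal sketch with an illustrative intermittency parameter (not the paper's fitted cascade generator):

```python
import random

def mrc_disaggregate(daily_depth, levels=7, p_dry=0.3, seed=0):
    """Microcanonical multiplicative random cascade: repeatedly split each
    interval's depth into two parts whose weights sum to exactly 1, so the
    daily total is conserved while fine-scale intermittency emerges."""
    rng = random.Random(seed)
    series = [daily_depth]
    for _ in range(levels):
        nxt = []
        for depth in series:
            if depth == 0.0 or rng.random() < p_dry:
                # intermittency: put all mass on one randomly chosen half
                nxt.extend([depth, 0.0] if rng.random() < 0.5 else [0.0, depth])
            else:
                w = rng.uniform(0.0, 1.0)          # microcanonical weight
                nxt.extend([depth * w, depth * (1.0 - w)])
        series = nxt
    return series   # 2**levels sub-interval depths summing to daily_depth
```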

  11. Stochastic density functional theory at finite temperatures

    NASA Astrophysics Data System (ADS)

    Cytter, Yael; Rabani, Eran; Neuhauser, Daniel; Baer, Roi

    2018-03-01

    Simulations in the warm dense matter regime using finite-temperature Kohn-Sham density functional theory (FT-KS-DFT), while frequently used, are computationally expensive due to the partial occupation of a very large number of high-energy KS eigenstates, which are obtained from subspace diagonalization. We have developed a stochastic method for applying FT-KS-DFT that overcomes the bottleneck of calculating the occupied KS orbitals by directly obtaining the density from the KS Hamiltonian. The proposed algorithm scales as O(NT⁻¹) and is compared with the high-temperature limit scaling O(N³T³) of the deterministic approach.

  Stochastic Oscillation in Self-Organized Critical States of Small Systems: Sensitive Resting State in Neural Systems

    NASA Astrophysics Data System (ADS)

    Wang, Sheng-Jun; Ouyang, Guang; Guang, Jing; Zhang, Mingsha; Wong, K. Y. Michael; Zhou, Changsong

    2016-01-01

    Self-organized critical states (SOCs) and stochastic oscillations (SOs) are simultaneously observed in neural systems, which appears to be theoretically contradictory since SOCs are characterized by scale-free avalanche sizes but oscillations indicate typical scales. Here, we show that SOs can emerge in SOCs of small size systems due to temporal correlation between large avalanches at the finite-size cutoff, resulting from the accumulation-release process in SOCs. In contrast, the critical branching process without accumulation-release dynamics cannot exhibit oscillations. The reconciliation of SOCs and SOs is demonstrated both in the sandpile model and robustly in biologically plausible neuronal networks. The oscillations can be suppressed if external inputs eliminate the prominent slow accumulation process, providing a potential explanation of the widely studied Berger effect or event-related desynchronization in neural response. The features of neural oscillations and suppression are confirmed during task processing in monkey eye-movement experiments. Our results suggest that finite-size, columnar neural circuits may play an important role in generating neural oscillations around the critical states, potentially enabling functional advantages of both SOCs and oscillations for sensitive response to transient stimuli.
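    The accumulation-release dynamics underlying the avalanches discussed here can be reproduced with the classic Bak-Tang-Wiesenfeld sandpile; a small-lattice sketch with illustrative sizes and open (dissipative) boundaries:

```python
import random

def sandpile(size=12, grains=2000, seed=3):
    """Bak-Tang-Wiesenfeld sandpile: drop grains at random sites; a site
    holding 4 or more grains topples, sending one grain to each neighbour
    (grains fall off at the open boundary).  Returns the final grid and the
    avalanche size (number of topplings) triggered by each dropped grain."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        x, y = rng.randrange(size), rng.randrange(size)
        grid[x][y] += 1
        topples, stack = 0, [(x, y)]
        while stack:
            i, j = stack.pop()
            if grid[i][j] < 4:
                continue                    # stale entry, already relaxed
            grid[i][j] -= 4
            topples += 1
            if grid[i][j] >= 4:
                stack.append((i, j))        # still unstable after toppling
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        stack.append((ni, nj))
        sizes.append(topples)
    return grid, sizes
```

    On small lattices such as this one, the finite-size cutoff of the avalanche-size distribution is exactly where the paper locates the emergence of stochastic oscillations.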

  12. Comparative study of large scale simulation of underground explosions inalluvium and in fractured granite using stochastic characterization

    NASA Astrophysics Data System (ADS)

    Vorobiev, O.; Ezzedine, S. M.; Antoun, T.; Glenn, L.

    2014-12-01

    This work describes a methodology used for large-scale modeling of wave propagation from underground explosions conducted at the Nevada Test Site (NTS) in two different geological settings: fractured granitic rock mass and alluvium deposits. We show that the discrete nature of rock masses as well as the spatial variability of the fabric of alluvium is very important for understanding ground motions induced by underground explosions. In order to build a credible conceptual model of the subsurface, we integrated the geological, geomechanical and geophysical characterizations conducted during recent tests at the NTS as well as historical data from characterization during the underground nuclear tests conducted at the NTS. Because detailed site characterization is limited, expensive and, in some instances, impossible, we have numerically investigated the effects of the characterization gaps on the overall response of the system. We performed several computational studies to identify the key geologic features: those specific to fractured media, mainly the joints; and those specific to alluvium porous media, mainly the spatial variability of geological alluvium facies characterized by their variances and their integral scales. We have also explored features common to both geological environments, such as saturation and topography, and assessed which characteristics most affect the ground motion in the near-field and in the far-field. Stochastic representations of these features based on the field characterizations have been implemented in the Geodyn and GeodynL hydrocodes. Both codes were used to guide site characterization efforts in order to provide the essential data to the modeling community. We validate our computational results by comparing the measured and computed ground motion at various ranges. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  13. Modelling and mitigating refractive propagation effects in precision pulsar timing observations

    NASA Astrophysics Data System (ADS)

    Shannon, R. M.; Cordes, J. M.

    2017-01-01

    To obtain the most accurate pulse arrival times from radio pulsars, it is necessary to correct or mitigate the effects of the propagation of radio waves through the warm and ionized interstellar medium. We examine both the strength of propagation effects associated with large-scale electron-density variations and the methodology used to estimate infinite frequency arrival times. Using simulations of two-dimensional phase-varying screens, we assess the strength and non-stationarity of timing perturbations associated with large-scale density variations. We identify additional contributions to arrival times that are stochastic in both radio frequency and time and therefore not amenable to correction solely using times of arrival. We attribute this to the frequency dependence of the trajectories of the propagating radio waves. We find that this limits the efficacy of low-frequency (metre-wavelength) observations. Incorporating low-frequency pulsar observations into precision timing campaigns is increasingly problematic for pulsars with larger dispersion measures.
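    The standard infinite-frequency extrapolation that these stochastic refractive terms defeat is simply a linear fit in 1/f². A sketch on synthetic arrival times (the numbers are illustrative, not real pulsar data):

```python
def fit_infinite_frequency(freqs_mhz, toas_s):
    """Least-squares fit of toa = t_inf + k / f**2 (the cold-plasma
    dispersion law); t_inf is the infinite-frequency arrival time."""
    xs = [1.0 / f ** 2 for f in freqs_mhz]
    n = len(xs)
    sx, sy = sum(xs), sum(toas_s)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, toas_s))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    t_inf = (sy - slope * sx) / n
    return t_inf, slope

# synthetic arrival times with t_inf = 0.5 s and dispersion slope 4000 s MHz^2
freqs = [400.0, 800.0, 1400.0]
toas = [0.5 + 4000.0 / f ** 2 for f in freqs]
t_inf, slope = fit_infinite_frequency(freqs, toas)
```

    The paper's point is that frequency-dependent ray trajectories add terms that do not follow this 1/f² law, so such a fit cannot remove them.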

  14. Impact of stochasticity in immigration and reintroduction on colonizing and extirpating populations.

    PubMed

    Rajakaruna, Harshana; Potapov, Alexei; Lewis, Mark

    2013-05-01

    A thorough quantitative understanding of populations at the edge of extinction is needed to manage both invasive and extirpating populations. Immigration can govern the population dynamics when population levels are low. It increases the probability of a population establishing (or reestablishing) before going extinct (EBE). However, the rate of immigration can fluctuate strongly. Here, we investigate how stochasticity in immigration impacts the EBE probability for small populations in variable environments. We use a population model with an Allee effect described by a stochastic differential equation (SDE) and employ the Fokker-Planck diffusion approximation to quantify the EBE probability. We find that the effect of stochasticity in immigration on the EBE probability depends on both the intrinsic growth rate (r) and the mean rate of immigration (p). In general, if r is large and positive (e.g., invasive species introduced to favorable habitats), or if p is greater than the rate of population decline due to the demographic Allee effect (e.g., effective stocking of declining populations), then stochasticity in immigration decreases the EBE probability. If r is large and negative (e.g., endangered populations in unfavorable habitats), or if the rate of decline due to the demographic Allee effect is much greater than p (e.g., weak stocking of declining populations), then stochasticity in immigration increases the EBE probability. However, the mean time to EBE decreases with increasing stochasticity in immigration for both positive and negative large r. The results thus suggest that ecological management of populations involves a tradeoff as to whether to increase or decrease the stochasticity in immigration in order to optimize the desired outcome. Moreover, the control of invasive species spread through stochastic means, for example by stochastic monitoring and treatment of vectors such as ship-ballast water, may be a suitable strategy given the environmental and demographic uncertainties at introduction. Similarly, the recovery of declining and extirpated populations through stochastic stocking, translocation, and reintroduction may also be suitable. Copyright © 2013 Elsevier Inc. All rights reserved.
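    A toy version of the EBE estimate can be obtained by Euler-Maruyama simulation rather than the paper's Fokker-Planck approach. The drift form and all parameter values below are illustrative stand-ins for the paper's model:

```python
import math
import random

def ebe_probability(r=0.5, allee=20.0, cap=100.0, imm_mean=1.0, imm_sd=2.0,
                    n0=5.0, n_est=80.0, dt=0.05, t_max=50.0, runs=300, seed=7):
    """Euler-Maruyama runs of a toy Allee-effect SDE with noisy immigration,
        dN = [r*N*(N/allee - 1)*(1 - N/cap) + imm_mean] dt + imm_sd dW,
    counting the fraction of runs that establish (reach n_est) before
    going extinct (hitting 0)."""
    rng = random.Random(seed)
    established = 0
    steps = int(t_max / dt)
    for _ in range(runs):
        n = n0
        for _ in range(steps):
            growth = r * n * (n / allee - 1.0) * (1.0 - n / cap)
            n += (growth + imm_mean) * dt \
                 + imm_sd * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if n <= 0.0:
                break                        # extinction first
            if n >= n_est:
                established += 1
                break                        # establishment first
        # runs hitting neither boundary by t_max count as not established
    return established / runs
```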

  15. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE PAGES

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...

    2018-01-30

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.
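    The core SPDE-sampling idea, drawing a correlated field by solving a reaction-diffusion equation with a white-noise right-hand side, can be sketched in one dimension with a tridiagonal (Thomas) solve; the mixed-FEM and mesh-embedding machinery of the paper is elided, and the parameters are illustrative:

```python
import random

def sample_grf_1d(n=200, corr_len=10.0, seed=5):
    """Draw a spatially correlated field by solving
        u - corr_len**2 * u'' = white noise
    on a 1-D grid (unit spacing), a Whittle-Matern-type SPDE sampler."""
    rng = random.Random(seed)
    k2 = corr_len ** 2
    a = [-k2] * n                           # sub-diagonal
    b = [1.0 + 2.0 * k2] * n                # main diagonal
    c = [-k2] * n                           # super-diagonal
    d = [rng.gauss(0.0, 1.0) for _ in range(n)]   # white-noise RHS
    # Thomas algorithm: forward elimination ...
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # ... then back substitution
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u
```

    The resulting samples are smooth over roughly `corr_len` grid cells, which is the behaviour the hierarchical 3D sampler scales up.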

  16. Sparse Regression Based Structure Learning of Stochastic Reaction Networks from Single Cell Snapshot Time Series.

    PubMed

    Klimovskaia, Anna; Ganscha, Stefan; Claassen, Manfred

    2016-12-01

    Stochastic chemical reaction networks constitute a model class to quantitatively describe dynamics and cell-to-cell variability in biological systems. The topology of these networks typically is only partially characterized due to experimental limitations. Current approaches for refining network topology are based on the explicit enumeration of alternative topologies and are therefore restricted to small problem instances with almost complete knowledge. We propose the reactionet lasso, a computational procedure that derives a stepwise sparse regression approach on the basis of the Chemical Master Equation, enabling large-scale structure learning for reaction networks by implicitly accounting for billions of topology variants. We have assessed the structure learning capabilities of the reactionet lasso on synthetic data for the complete TRAIL-induced apoptosis signaling cascade comprising 70 reactions. We find that the reactionet lasso is able to efficiently recover the structure of these reaction systems, ab initio, with high sensitivity and specificity. With only < 1% false discoveries, the reactionet lasso is able to recover 45% of all true reactions ab initio among > 6000 possible reactions and over 10²⁰⁰⁰ network topologies. In conjunction with information-rich single cell technologies such as single cell RNA sequencing or mass cytometry, the reactionet lasso will enable large-scale structure learning, particularly in areas with partial network structure knowledge, such as cancer biology, and thereby enable the detection of pathological alterations of reaction networks. We provide software to allow for wide applicability of the reactionet lasso.
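    The core mechanism, sparse regression selecting a few true reaction terms from a large candidate library, can be sketched with plain coordinate-descent lasso on a one-species decay toy problem. This is a generic lasso, not the reactionet lasso's CME-specific formulation, and the data are synthetic:

```python
import math

def lasso_cd(X, y, lam, sweeps=500):
    """Coordinate-descent lasso: minimize 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(sweeps):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            if rho > lam:
                w[j] = (rho - lam) / col_sq[j]
            elif rho < -lam:
                w[j] = (rho + lam) / col_sq[j]
            else:
                w[j] = 0.0                  # term dropped from the model
    return w

# toy structure learning: which candidate terms drive d[A]/dt for A -> 0 ?
ts = [0.1 * i for i in range(50)]
conc = [math.exp(-2.0 * t) for t in ts]        # [A](t) for rate constant 2
deriv = [-2.0 * c for c in conc]               # exact time derivative
library = [[c, c * c, 1.0] for c in conc]      # candidate terms: A, A^2, const
w = lasso_cd(library, deriv, lam=0.05)
```

    The fit should assign nearly all weight to the true linear decay term and shrink the spurious A² and constant terms toward zero.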

  17. Integrating biomass quality variability in stochastic supply chain modeling and optimization for large-scale biofuel production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castillo-Villar, Krystel K.; Eksioglu, Sandra; Taherkhorsandi, Milad

    The production of biofuels using second-generation feedstocks has been recognized as an important alternative source of sustainable energy, and demand is expected to increase due to regulations such as the Renewable Fuel Standard. However, the pathway to biofuel industry maturity faces unique, unaddressed challenges. To address these challenges, this article presents an optimization model which quantifies and controls the impact of biomass quality variability on supply-chain decisions and technology selection. We propose a two-stage stochastic programming model and associated efficient solution procedures for solving large-scale problems to (1) better represent the random nature of biomass quality (defined by moisture and ash contents) in supply chain modeling, and (2) assess the impact of these uncertainties on supply chain design and planning. The proposed model is then applied to a case study in the state of Tennessee. Results show that high moisture and ash contents negatively impact the unit delivery cost, since poor biomass quality requires the addition of quality control activities. Experimental results indicate that supply chain cost could increase by as much as 27%–31% when biomass quality is poor. We assess the impact of biomass quality on the supply chain topology. Our case study indicates that biomass quality impacts supply chain costs; thus, it is important to consider biomass quality in supply chain design and management decisions.
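    The two-stage structure, a first-stage design cost plus the expected second-stage recourse cost over scenarios, reduces in a toy setting to a scenario-weighted newsvendor problem. The scenario numbers below are illustrative, not the Tennessee case-study data:

```python
def two_stage_cost(x, scenarios, build_cost=1.0, shortfall_penalty=4.0):
    """First stage: build capacity x.  Second stage, per scenario: pay a
    recourse penalty for each unit of demand the capacity cannot cover."""
    recourse = sum(prob * shortfall_penalty * max(0.0, demand - x)
                   for prob, demand in scenarios)
    return build_cost * x + recourse

# (probability, demand) scenarios standing in for biomass-quality variability
scenarios = [(0.25, 60.0), (0.50, 100.0), (0.25, 140.0)]
best_x = min(range(201), key=lambda x: two_stage_cost(float(x), scenarios))
```

    Real two-stage stochastic programs replace this brute-force search with decomposition methods, but the expected-recourse objective is the same.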

  18. Scalable approximate policies for Markov decision process models of hospital elective admissions.

    PubMed

    Zhu, George; Lizotte, Dan; Hoey, Jesse

    2014-05-01

    We demonstrate the feasibility of using stochastic simulation methods to solve a large-scale Markov decision process model of on-line patient admission scheduling. The problem of admission scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real-time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that, given an initial start state, generate an action on demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate near-optimal solutions in about 100 s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
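    The sample-based planning idea, estimating action values by rollouts through a generative model instead of enumerating the dynamics, can be sketched with a Monte Carlo one-step planner on a toy bed-occupancy model. The dynamics are illustrative, not the paper's 81-action model:

```python
import random

def mc_plan(state, actions, step, rollouts=200, depth=10, seed=0):
    """Sample-based planner: estimate each action's value by Monte Carlo
    rollouts through a generative model, never enumerating the MDP."""
    rng = random.Random(seed)

    def rollout(s, first_action):
        total, a = 0.0, first_action
        for _ in range(depth):
            s, reward = step(s, a, rng)
            total += reward
            a = rng.choice(actions)         # random default policy
        return total

    values = {a: sum(rollout(state, a) for _ in range(rollouts)) / rollouts
              for a in actions}
    return max(values, key=values.get)

# Toy admissions model: state = free beds; 'admit' earns reward while beds
# remain; a discharge frees one bed with probability 0.3 each step.
def step(beds, action, rng):
    reward = 0.0
    if action == "admit" and beds > 0:
        beds, reward = beds - 1, 1.0
    if rng.random() < 0.3:
        beds = min(beds + 1, 10)
    return beds, reward

best = mc_plan(5, ["admit", "defer"], step)
```

    Only a generative simulator (`step`) is required, which is what lets such planners scale past explicit transition matrices.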

  19. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.

  1. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10⁷ or 10⁸ 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  2. Integrating biomass quality variability in stochastic supply chain modeling and optimization for large-scale biofuel production

    DOE PAGES

    Castillo-Villar, Krystel K.; Eksioglu, Sandra; Taherkhorsandi, Milad

    2017-02-20

    The production of biofuels using second-generation feedstocks has been recognized as an important alternative source of sustainable energy, and demand is expected to increase due to regulations such as the Renewable Fuel Standard. However, the pathway to biofuel industry maturity faces unique, unaddressed challenges. To address these challenges, this article presents an optimization model which quantifies and controls the impact of biomass quality variability on supply-chain decisions and technology selection. We propose a two-stage stochastic programming model and associated efficient solution procedures for solving large-scale problems to (1) better represent the random nature of biomass quality (defined by moisture and ash contents) in supply chain modeling, and (2) assess the impact of these uncertainties on supply chain design and planning. The proposed model is then applied to a case study in the state of Tennessee. Results show that high moisture and ash contents negatively impact the unit delivery cost, since poor biomass quality requires the addition of quality control activities. Experimental results indicate that supply chain cost could increase by as much as 27%–31% when biomass quality is poor. We assess the impact of biomass quality on the supply chain topology. Our case study indicates that biomass quality impacts supply chain costs; thus, it is important to consider biomass quality in supply chain design and management decisions.

  3. Electron thermal confinement in a partially stochastic magnetic structure

    NASA Astrophysics Data System (ADS)

    Morton, L. A.; Young, W. C.; Hegna, C. C.; Parke, E.; Reusch, J. A.; Den Hartog, D. J.

    2018-04-01

    Using a high-repetition-rate Thomson scattering diagnostic, we observe a peak in electron temperature Te coinciding with the location of a large magnetic island in the Madison Symmetric Torus. Magnetohydrodynamic modeling of this quasi-single helicity plasma indicates that smaller adjacent islands overlap with and destroy the large island flux surfaces. The estimated stochastic electron thermal conductivity ( ≈30 m 2/s ) is consistent with the conductivity inferred from the observed Te gradient and ohmic heating power. Island-shaped Te peaks can result from partially stochastic magnetic islands.

  4. Scaling laws and fluctuations in the statistics of word frequencies

    NASA Astrophysics Data System (ADS)

    Gerlach, Martin; Altmann, Eduardo G.

    2014-11-01

    In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps’ law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps’ and Taylor's) by modeling the usage of words using a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of the lexical richness of texts with different lengths.
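    Heaps-law scaling is easy to reproduce by sampling tokens from a Zipfian vocabulary; a sketch with illustrative vocabulary size and exponent (the topic-dependent frequencies behind the Taylor-law result are omitted):

```python
import bisect
import random

def vocabulary_size(n_tokens, n_types=5000, zipf_s=1.0, seed=0):
    """Draw n_tokens words from a Zipf-distributed vocabulary and count the
    distinct word types seen -- the quantity behind Heaps' law."""
    rng = random.Random(seed)
    weights = [1.0 / r ** zipf_s for r in range(1, n_types + 1)]
    total = sum(weights)
    cum, acc = [], 0.0
    for wt in weights:
        acc += wt
        cum.append(acc / total)
    seen = set()
    for _ in range(n_tokens):
        idx = bisect.bisect_left(cum, rng.random())
        seen.add(min(idx, n_types - 1))   # guard against rounding at the top
    return len(seen)

v_small = vocabulary_size(1000)
v_large = vocabulary_size(10000)
```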

  5. Flows, scaling, and the control of moment hierarchies for stochastic chemical reaction networks

    NASA Astrophysics Data System (ADS)

    Smith, Eric; Krishnamurthy, Supriya

    2017-12-01

    Stochastic chemical reaction networks (CRNs) are complex systems that combine the features of concurrent transformation of multiple variables in each elementary reaction event and nonlinear relations between states and their rates of change. Most general results concerning CRNs are limited to restricted cases where a topological characteristic known as deficiency takes a value 0 or 1, implying uniqueness and positivity of steady states and surprising, low-information forms for their associated probability distributions. Here we derive equations of motion for fluctuation moments at all orders for stochastic CRNs at general deficiency. We show, for the standard base case of proportional sampling without replacement (which underlies the mass-action rate law), that the generator of the stochastic process acts on the hierarchy of factorial moments with a finite representation. Whereas simulation of high-order moments for many-particle systems is costly, this representation reduces the solution of moment hierarchies to a complexity comparable to solving a heat equation. At steady states, moment hierarchies for finite CRNs interpolate between low-order and high-order scaling regimes, which may be approximated separately by distributions similar to those for deficiency-zero networks and connected through matched asymptotic expansions. In CRNs with multiple stable or metastable steady states, boundedness of high-order moments provides the starting condition for recursive solution downward to low-order moments, reversing the order usually used to solve moment hierarchies. A basis for a subset of network flows defined by having the same mean-regressing property as the flows in deficiency-zero networks gives the leading contribution to low-order moments in CRNs at general deficiency, in a 1/n expansion in large particle numbers. Our results give a physical picture of the different informational roles of mean-regressing and non-mean-regressing flows and clarify the dynamical meaning of deficiency not only for first-moment conditions but for all orders in fluctuations.
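    The "low-information" deficiency-zero case mentioned above can be checked directly for a birth-death network, whose stationary distribution is Poisson and whose factorial moments are simply powers of the mean; a sketch:

```python
import math

def factorial_moments(k_birth=4.0, k_death=1.0, orders=(1, 2, 3), n_max=200):
    """Stationary distribution of the birth-death CRN (0 -> A at rate k_birth,
    A -> 0 at rate k_death per molecule) via detailed balance, and its
    factorial moments E[n(n-1)...(n-j+1)].  For this deficiency-zero network
    the stationary law is Poisson, so the j-th factorial moment equals
    (k_birth/k_death)**j."""
    lam = k_birth / k_death
    p = [1.0]
    for n in range(1, n_max):
        p.append(p[-1] * lam / n)          # p(n) proportional to lam**n / n!
    z = sum(p)
    p = [pi / z for pi in p]
    return {j: sum(pn * math.prod(n - i for i in range(j))
                   for n, pn in enumerate(p))
            for j in orders}
```

    At general deficiency these closed forms fail, which is exactly where the paper's moment-hierarchy machinery takes over.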

  6. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    NASA Astrophysics Data System (ADS)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

    Handling high-dimensional data sets, such as those arising in turbulent flows or in certain types of multiscale behaviour in the geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behaviour and of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, together with concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis of the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modelling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References: [1] I. Horenko. On identification of nonstationary factor models and its application to atmospherical data analysis. J. Atm. Sci., 67:1559-1574, 2010. [2] P. Metzner, L. Putzig and I. Horenko. Analysis of persistent non-stationary time series and applications. CAMCoS, 7:175-229, 2012. [3] M. Uhlmann. Generation of a temporally well-resolved sequence of snapshots of the flow-field in turbulent plane channel flow. URL: http://www-turbul.ifh.unikarlsruhe.de/uhlmann/reports/produce.pdf, 2000. [4] Th. von Larcher, A. Beck, R. Klein, I. Horenko, P. Metzner, M. Waidmann, D. Igdalov, G. Gassner and C.-D. Munz. Towards a Framework for the Stochastic Modelling of Subgrid Scale Fluxes for Large Eddy Simulation. Meteorol. Z., 24:313-342, 2015.
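    The Tensor-Train format mentioned in this record can be computed by sequential truncated SVDs (the TT-SVD algorithm). The following is a minimal, self-contained sketch in NumPy; the function names and the relative-tolerance truncation rule are illustrative choices, not code from the project described above.

```python
# Minimal TT-SVD sketch: decompose a d-way tensor into a train of
# 3-way cores by sequential truncated SVDs. Illustrative only.
import numpy as np

def tt_svd(tensor, rel_tol=1e-10):
    """Decompose `tensor` into a list of 3-way TT cores."""
    shape = tensor.shape
    d = len(shape)
    cores, rank = [], 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        # Truncate singular values below the relative tolerance.
        keep = max(1, int(np.sum(S > rel_tol * S[0])))
        cores.append(U[:, :keep].reshape(rank, shape[k], keep))
        rank = keep
        mat = (S[:keep, None] * Vt[:keep]).reshape(rank * shape[k + 1], -1)
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back to the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

    For data with low-rank structure the cores are far smaller than the full tensor, which is the compact-storage property the abstract refers to.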

  7. Simulating biological processes: stochastic physics from whole cells to colonies.

    PubMed

    Earnest, Tyler M; Cole, John A; Luthey-Schulten, Zaida

    2018-05-01

    The last few decades have revealed the living cell to be a crowded spatially heterogeneous space teeming with biomolecules whose concentrations and activities are governed by intrinsically random forces. It is from this randomness, however, that a vast array of precisely timed and intricately coordinated biological functions emerge that give rise to the complex forms and behaviors we see in the biosphere around us. This seemingly paradoxical nature of life has drawn the interest of an increasing number of physicists, and recent years have seen stochastic modeling grow into a major subdiscipline within biological physics. Here we review some of the major advances that have shaped our understanding of stochasticity in biology. We begin with some historical context, outlining a string of important experimental results that motivated the development of stochastic modeling. We then embark upon a fairly rigorous treatment of the simulation methods that are currently available for the treatment of stochastic biological models, with an eye toward comparing and contrasting their realms of applicability, and the care that must be taken when parameterizing them. Following that, we describe how stochasticity impacts several key biological functions, including transcription, translation, ribosome biogenesis, chromosome replication, and metabolism, before considering how the functions may be coupled into a comprehensive model of a 'minimal cell'. Finally, we close with our expectation for the future of the field, focusing on how mesoscopic stochastic methods may be augmented with atomic-scale molecular modeling approaches in order to understand life across a range of length and time scales.
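    Among the simulation methods a review of this kind covers, the Gillespie direct method is the standard exact algorithm for well-mixed stochastic kinetics. The sketch below applies it to a generic birth-death model of mRNA copy number; the model and rate constants are illustrative, not taken from the review.

```python
# Gillespie direct-method sketch for a birth-death model of mRNA copy
# number: production at rate k, degradation at rate g*n. Illustrative.
import random

def gillespie_birth_death(k=10.0, g=1.0, n0=0, t_end=50.0, seed=1):
    rng = random.Random(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_end:
        a_birth, a_death = k, g * n
        a_total = a_birth + a_death
        # Time to the next reaction is exponential with rate a_total.
        t += rng.expovariate(a_total)
        # Choose which reaction fires, proportional to its propensity.
        n += 1 if rng.random() < a_birth / a_total else -1
        times.append(t)
        counts.append(n)
    return times, counts
```

    At stationarity the copy number is Poisson with mean k/g, a useful sanity check on any implementation.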

  8. Simulating biological processes: stochastic physics from whole cells to colonies

    NASA Astrophysics Data System (ADS)

    Earnest, Tyler M.; Cole, John A.; Luthey-Schulten, Zaida

    2018-05-01

    The last few decades have revealed the living cell to be a crowded spatially heterogeneous space teeming with biomolecules whose concentrations and activities are governed by intrinsically random forces. It is from this randomness, however, that a vast array of precisely timed and intricately coordinated biological functions emerge that give rise to the complex forms and behaviors we see in the biosphere around us. This seemingly paradoxical nature of life has drawn the interest of an increasing number of physicists, and recent years have seen stochastic modeling grow into a major subdiscipline within biological physics. Here we review some of the major advances that have shaped our understanding of stochasticity in biology. We begin with some historical context, outlining a string of important experimental results that motivated the development of stochastic modeling. We then embark upon a fairly rigorous treatment of the simulation methods that are currently available for the treatment of stochastic biological models, with an eye toward comparing and contrasting their realms of applicability, and the care that must be taken when parameterizing them. Following that, we describe how stochasticity impacts several key biological functions, including transcription, translation, ribosome biogenesis, chromosome replication, and metabolism, before considering how the functions may be coupled into a comprehensive model of a ‘minimal cell’. Finally, we close with our expectation for the future of the field, focusing on how mesoscopic stochastic methods may be augmented with atomic-scale molecular modeling approaches in order to understand life across a range of length and time scales.

  9. Motion estimation under location uncertainty for turbulent fluid flows

    NASA Astrophysics Data System (ADS)

    Cai, Shengze; Mémin, Etienne; Dérian, Pierre; Xu, Chao

    2018-01-01

    In this paper, we propose a novel optical flow formulation for estimating two-dimensional velocity fields from an image sequence depicting the evolution of a passive scalar transported by a fluid flow. This motion estimator relies on a stochastic representation of the flow allowing to incorporate naturally a notion of uncertainty in the flow measurement. In this context, the Eulerian fluid flow velocity field is decomposed into two components: a large-scale motion field and a small-scale uncertainty component. We define the small-scale component as a random field. Subsequently, the data term of the optical flow formulation is based on a stochastic transport equation, derived from the formalism under location uncertainty proposed in Mémin (Geophys Astrophys Fluid Dyn 108(2):119-146, 2014) and Resseguier et al. (Geophys Astrophys Fluid Dyn 111(3):149-176, 2017a). In addition, a specific regularization term built from the assumption of constant kinetic energy involves the very same diffusion tensor as the one appearing in the data transport term. Opposite to the classical motion estimators, this enables us to devise an optical flow method dedicated to fluid flows in which the regularization parameter has now a clear physical interpretation and can be easily estimated. Experimental evaluations are presented on both synthetic and real world image sequences. Results and comparisons indicate very good performance of the proposed formulation for turbulent flow motion estimation.

  10. Spatiotemporal Stochastic Resonance: Theory and Experiment

    NASA Astrophysics Data System (ADS)

    Jung, Peter

    1996-03-01

    The amplification of weak periodic signals in bistable or excitable systems via stochastic resonance has been studied intensively over recent years. We go one step further and ask: can noise enhance spatiotemporal patterns in excitable media, and can this effect be observed in nature? To this end, we look at large, two-dimensional arrays of coupled excitable elements. Due to the coupling, excitation can propagate through the array in the form of nonlinear waves. We observe target waves, rotating spiral waves and other wave forms. If the coupling between the elements is below a critical threshold, any excitational pattern will die out in the absence of noise. Below this threshold, large-scale rotating spiral waves - as they are observed above threshold - can be maintained by a proper level of noise [1]. Furthermore, their geometric features, such as the curvature, can be controlled by the homogeneous noise level [2]. If the noise level is too large, break-up of spiral waves and collisions with spontaneously nucleated waves yield spiral turbulence. Driving our array with a spatiotemporal pattern, e.g. a rotating spiral wave, we show that for weak coupling the excitational response of the array shows stochastic resonance - an effect we have termed spatiotemporal stochastic resonance. In the last part of the talk I'll make contact with calcium waves, observed in astrocyte cultures and hippocampus slices [3]. A. Cornell-Bell and collaborators [3] have pointed out the role of calcium waves for long-range glial signaling. We demonstrate the similarity of calcium waves to nonlinear waves in noisy excitable media. The noise level in the tissue is characterized by spontaneous activity and can be controlled by applying neurotransmitter substances [3]. Noise effects in our model are compared with the effect of neurotransmitters on calcium waves. [1] P. Jung and G. Mayer-Kress, CHAOS 5, 458 (1995). [2] P. Jung and G. Mayer-Kress, Phys. Rev. Lett. 62, 2682 (1995). [3] A. Cornell-Bell, Steven M. Finkbeiner, Mark S. Cooper and Stephen J. Smith, SCIENCE 247, 373 (1990).

  11. Stochastic modeling of experimental chaotic time series.

    PubMed

    Stemler, Thomas; Werner, Johannes P; Benner, Hartmut; Just, Wolfram

    2007-01-26

    Methods developed recently to obtain stochastic models of low-dimensional chaotic systems are tested in electronic circuit experiments. We demonstrate that reliable drift and diffusion coefficients can be obtained even when no excessive time scale separation occurs. Crisis induced intermittent motion can be described in terms of a stochastic model showing tunneling which is dominated by state space dependent diffusion. Analytical solutions of the corresponding Fokker-Planck equation are in excellent agreement with experimental data.
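    Methods of the kind tested in this record typically estimate drift and diffusion coefficients from conditional moments of the increments (the first two Kramers-Moyal coefficients). A minimal sketch, using a simulated Ornstein-Uhlenbeck series in place of experimental circuit data; the binning scheme and thresholds are illustrative assumptions.

```python
# Sketch of drift/diffusion estimation from a time series via binned
# conditional Kramers-Moyal moments, illustrated on a simulated
# Ornstein-Uhlenbeck process dx = -theta*x dt + sigma dW.
import numpy as np

def simulate_ou(theta=1.0, sigma=0.5, dt=1e-3, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    noise = rng.standard_normal(n - 1) * np.sqrt(dt)
    for i in range(n - 1):
        x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]
    return x

def km_coefficients(x, dt, n_bins=20):
    """Binned estimates of drift D1(x) and diffusion D2(x)."""
    dx = np.diff(x)
    bins = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x[:-1], bins) - 1, 0, n_bins - 1)
    centers, d1, d2 = [], [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.sum() < 100:        # skip poorly populated bins
            continue
        centers.append(0.5 * (bins[b] + bins[b + 1]))
        d1.append(dx[mask].mean() / dt)
        d2.append((dx[mask] ** 2).mean() / (2 * dt))
    return np.array(centers), np.array(d1), np.array(d2)
```

    For the OU process the recovered drift should be close to -theta*x and the diffusion close to sigma**2/2, which is the kind of consistency check the paper's circuit experiments rely on.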

  12. A Coupled Approach with Stochastic Rainfall-Runoff Simulation and Hydraulic Modeling for Extreme Flood Estimation on Large Watersheds

    NASA Astrophysics Data System (ADS)

    Paquet, E.

    2015-12-01

    The SCHADEX method aims at estimating the distribution of peak and daily discharges up to extreme quantiles. It couples a precipitation probabilistic model based on weather patterns with a stochastic rainfall-runoff simulation process using a conceptual lumped model. It allows exploring an exhaustive set of hydrological conditions and watershed responses to intense rainfall events. Since 2006, it has been widely applied in France to about one hundred watersheds for dam spillway design, and also abroad (Norway, Canada and central Europe among others). However, its application to large watersheds (above 10 000 km²) faces some significant issues, spatial heterogeneity of rainfall and hydrological processes and flood-peak damping due to hydraulic effects (flood plains, natural or man-made embankments) being the most important. This led to the development of an extreme flood simulation framework for large and heterogeneous watersheds, based on the SCHADEX method. Its main features are: division of the large (or main) watershed into several smaller sub-watersheds, where the spatial homogeneity of the hydro-meteorological processes can reasonably be assumed and where the hydraulic effects can be neglected; identification of pilot watersheds where discharge data are available and where rainfall-runoff models can thus be calibrated, serving as parameter donors to ungauged watersheds; spatially coherent stochastic simulations for all the sub-watersheds at the daily time step; identification of a selection of simulated events for a given return period (according to the distribution of runoff volumes at the scale of the main watershed); generation of the complete hourly hydrographs at each of the sub-watershed outlets; and routing to the main outlet with hydraulic 1D or 2D models. The presentation will be illustrated with the case study of the Isère watershed (9981 km²), a French snow-driven watershed. The main novelties of this method will be underlined, as well as its perspectives and future improvements.

  13. Path integrals and large deviations in stochastic hybrid systems.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2014-04-01

    We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.
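    The escape problem addressed by such action principles can be illustrated in the simplest diffusive (non-hybrid) setting: an overdamped particle in a double-well potential, where first-passage times over the barrier grow exponentially with the inverse noise strength, controlled by the quasipotential. The sketch below is a generic diffusion example with illustrative parameters, not the paper's hybrid model.

```python
# First-passage time of an overdamped particle in the double-well
# potential V(x) = x**4/4 - x**2/2, starting in the left well, with
# noise strength eps. Euler-Maruyama discretization; illustrative only.
import math
import random

def first_passage_time(eps=0.25, dt=1e-3, x0=-1.0, barrier=0.0, seed=0,
                       max_steps=10_000_000):
    rng = random.Random(seed)
    x = x0
    sqrt_step = math.sqrt(2.0 * eps * dt)
    for step in range(1, max_steps + 1):
        x += (x - x ** 3) * dt + sqrt_step * rng.gauss(0.0, 1.0)
        if x >= barrier:            # particle has reached the barrier top
            return step * dt
    return None                     # no escape within the step budget
```

    Averaging this quantity over many noise realizations approximates the mean first passage time that the quasipotential in the abstract is used to compute.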

  14. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spill, Fabian, E-mail: fspill@bu.edu; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139; Guerrero, Pilar

    2015-10-15

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean, behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.

  15. Production and efficiency of large wildland fire suppression effort: A stochastic frontier analysis

    Treesearch

    Hari Katuwal; Dave Calkin; Michael S. Hand

    2016-01-01

    This study examines the production and efficiency of wildland fire suppression effort. We estimate the effectiveness of suppression resource inputs to produce controlled fire lines that contain large wildland fires using stochastic frontier analysis. Determinants of inefficiency are identified and the effects of these determinants on the daily production of...

  16. Extreme reaction times determine fluctuation scaling in human color vision

    NASA Astrophysics Data System (ADS)

    Medina, José M.; Díaz, José A.

    2016-11-01

    In modern mental chronometry, human reaction time defines the time elapsed from stimulus presentation until a response occurs and represents a reference paradigm for investigating stochastic latency mechanisms in color vision. Here we examine the statistical properties of extreme reaction times and whether they support fluctuation scaling in the skewness-kurtosis plane. Reaction times were measured for visual stimuli across the cardinal directions of the color space. For all subjects, the results show that very large reaction times deviate from the right tail of reaction time distributions, suggesting the existence of dragon-king events. The results also indicate that extreme reaction times are correlated and shape fluctuation scaling over a wide range of stimulus conditions. The scaling exponent was higher for achromatic than for isoluminant stimuli, suggesting distinct generative mechanisms. Our findings open a new perspective for studying failure modes in sensory-motor communications and in complex networks.
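    Fluctuation scaling in the skewness-kurtosis plane can be probed numerically by computing per-block sample moments and fitting kurtosis against squared skewness. The block size, the heavy-tailed sample distribution (lognormal) and the linear fit below are illustrative assumptions, not the paper's data or protocol.

```python
# Toy skewness-kurtosis plane analysis: draw many blocks of
# heavy-tailed "reaction times", compute per-block skewness and
# kurtosis, and fit kurtosis ~ a * skewness**2 + b. Illustrative only.
import numpy as np

def skew_kurt(sample):
    """Standardised third and fourth sample moments of one block."""
    z = (sample - sample.mean()) / sample.std()
    return (z ** 3).mean(), (z ** 4).mean()

def fluctuation_scaling(n_blocks=500, block=200, seed=0):
    rng = np.random.default_rng(seed)
    sk, ku = [], []
    for _ in range(n_blocks):
        s, k = skew_kurt(rng.lognormal(mean=0.0, sigma=0.5, size=block))
        sk.append(s)
        ku.append(k)
    # Least-squares fit of kurtosis against squared skewness.
    a, b = np.polyfit(np.square(sk), ku, 1)
    return a, b
```

    A positive fitted slope indicates that blocks with heavier right tails (larger skewness) also show larger kurtosis, the qualitative signature of the scaling the abstract describes.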

  17. Stochastic parameterization of shallow cumulus convection estimated from high-resolution model data

    NASA Astrophysics Data System (ADS)

    Dorrestijn, Jesse; Crommelin, Daan T.; Siebesma, A. Pier.; Jonker, Harm J. J.

    2013-02-01

    In this paper, we report on the development of a methodology for stochastic parameterization of convective transport by shallow cumulus convection in weather and climate models. We construct a parameterization based on Large-Eddy Simulation (LES) data. These simulations resolve the turbulent fluxes of heat and moisture and are based on a typical case of non-precipitating shallow cumulus convection above sea in the trade-wind region. Using clustering, we determine a finite number of turbulent flux pairs for heat and moisture that are representative for the pairs of flux profiles observed in these simulations. In the stochastic parameterization scheme proposed here, the convection scheme jumps randomly between these pre-computed pairs of turbulent flux profiles. The transition probabilities are estimated from the LES data, and they are conditioned on the resolved-scale state in the model column. Hence, the stochastic parameterization is formulated as a data-inferred conditional Markov chain (CMC), where each state of the Markov chain corresponds to a pair of turbulent heat and moisture fluxes. The CMC parameterization is designed to emulate, in a statistical sense, the convective behaviour observed in the LES data. The CMC is tested in single-column model (SCM) experiments. The SCM is able to reproduce the ensemble spread of the temperature and humidity that was observed in the LES data. Furthermore, there is a good similarity between time series of the fractions of the discretized fluxes produced by SCM and observed in LES.
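    The core of a conditional Markov chain (CMC) parameterization of this kind is to estimate transition probabilities between a few discrete flux states, conditioned on a discretized resolved-scale variable, and then to sample the chain at run time. A minimal sketch follows; the training sequence is synthetic, whereas the paper conditions on LES-derived states.

```python
# Data-inferred conditional Markov chain sketch: per-condition
# transition matrices are estimated from a labelled training sequence
# and then sampled. Training data here is synthetic, not LES output.
import numpy as np

def fit_cmc(states, conditions, n_states, n_cond):
    """Estimate P[c, i, j] = Prob(next=j | current=i, condition=c)."""
    counts = np.ones((n_cond, n_states, n_states))   # Laplace smoothing
    for t in range(len(states) - 1):
        counts[conditions[t], states[t], states[t + 1]] += 1
    return counts / counts.sum(axis=2, keepdims=True)

def sample_cmc(P, conditions, s0, rng):
    """Sample a state path driven by a sequence of conditions."""
    s, path = s0, [s0]
    for c in conditions:
        s = rng.choice(P.shape[1], p=P[c, s])
        path.append(s)
    return path
```

    In the scheme described above, each state would index a pre-computed pair of turbulent heat and moisture flux profiles, and the condition would be the discretized resolved-scale model column state.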

  18. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, an assumption which may be invalid and which may undermine the performance of the algorithms. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.

  19. An offline approach for output-only Bayesian identification of stochastic nonlinear systems using unscented Kalman filtering

    NASA Astrophysics Data System (ADS)

    Erazo, Kalil; Nagarajaiah, Satish

    2017-06-01

    In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing state augmentation as required by existing online methods. In applications expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) an increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.

  20. Optimal Control of Hybrid Systems in Air Traffic Applications

    NASA Astrophysics Data System (ADS)

    Kamgarpour, Maryam

    Growing concerns over the scalability of air traffic operations, air transportation fuel emissions and prices, as well as the advent of communication and sensing technologies motivate improvements to the air traffic management system. To address such improvements, in this thesis a hybrid dynamical model as an abstraction of the air traffic system is considered. Wind and hazardous weather impacts are included using a stochastic model. This thesis focuses on the design of algorithms for verification and control of hybrid and stochastic dynamical systems and the application of these algorithms to air traffic management problems. In the deterministic setting, a numerically efficient algorithm for optimal control of hybrid systems is proposed based on extensions of classical optimal control techniques. This algorithm is applied to optimize the trajectory of an Airbus 320 aircraft in the presence of wind and storms. In the stochastic setting, the verification problem of reaching a target set while avoiding obstacles (reach-avoid) is formulated as a two-player game to account for external agents' influence on system dynamics. The solution approach is applied to air traffic conflict prediction in the presence of stochastic wind. Due to the uncertainty in forecasts of the hazardous weather, and hence the unsafe regions of airspace for aircraft flight, the reach-avoid framework is extended to account for stochastic target and safe sets. This methodology is used to maximize the probability of the safety of aircraft paths through hazardous weather. Finally, the problem of modeling and optimization of arrival air traffic and runway configuration in dense airspace subject to stochastic weather data is addressed. This problem is formulated as a hybrid optimal control problem and is solved with a hierarchical approach that decouples safety and performance. 
As illustrated with this problem, the large scale of air traffic operations motivates future work on the efficient implementation of the proposed algorithms.

  1. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km²), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into catchment models. A long-term simulation of this combined system makes it possible to derive very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km². 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes several drawbacks reported for traditional approaches to derived flood frequency analysis and is therefore recommended for large-scale flood risk case studies.
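    The multisite idea behind such weather generators can be sketched by driving per-site marginal transformations with spatially correlated Gaussian noise. Everything below (the correlation target, the dry-day threshold, the exponential-like precipitation marginal) is an illustrative toy, not the model used in the study, which additionally handles seasonality, autocorrelation and fitted marginals.

```python
# Toy multisite generator: correlated Gaussian noise is imposed via a
# Cholesky factor of the target spatial correlation, then mapped to a
# skewed precipitation-like marginal with dry days. Illustrative only.
import numpy as np

def multisite_fields(corr, n_days, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)            # imposes spatial correlation
    z = rng.standard_normal((n_days, corr.shape[0])) @ L.T
    # Dry days where the Gaussian falls below a threshold; wet-day
    # amounts from a skewed transform of the positive Gaussian part.
    wet = z > -0.5
    precip = np.where(wet, np.expm1(np.clip(z, 0.0, None)), 0.0)
    return z, precip
```

    The key property, also central to the model in the abstract, is that the synthetic fields reproduce the prescribed between-site correlation while keeping realistic single-site marginals.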

  2. Adiabatic reduction of a model of stochastic gene expression with jump Markov process.

    PubMed

    Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C

    2014-04-01

    This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomena called bursting in molecular biology) and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein) we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomena can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem to have not been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.
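    The reduced bursting dynamics described here, deterministic decay punctuated by compound-Poisson bursts with exponentially distributed sizes, can be simulated directly. The sketch below assumes a constant burst rate so that the exponential waiting time is exact; a state-dependent rate phi(x), as in the auto-regulated model, would require a thinning scheme. Parameter values are illustrative.

```python
# Jump-process sketch: protein level x decays at rate g between bursts
# and receives exponentially distributed bursts of mean size b at a
# constant rate phi. Samples are taken just before each burst, so their
# average estimates the stationary mean phi*b/g. Illustrative only.
import math
import random

def simulate_bursting(phi=2.0, b=1.0, g=1.0, x0=0.0, t_end=200.0, seed=3):
    rng = random.Random(seed)
    t, x = 0.0, x0
    samples = []
    while t < t_end:
        tau = rng.expovariate(phi)      # waiting time to next burst
        t += tau
        x *= math.exp(-g * tau)         # deterministic decay between bursts
        samples.append(x)               # state seen by the arriving burst
        x += rng.expovariate(1.0 / b)   # exponential burst size
    return samples
```

    For constant rate phi the stationary mean is phi*b/g (here 2.0), a quick check that the reduced jump equation is being simulated correctly.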

  3. The steady-state mosaic of disturbance and succession across an old-growth Central Amazon forest landscape.

    PubMed

    Chambers, Jeffrey Q; Negron-Juarez, Robinson I; Marra, Daniel Magnabosco; Di Vittorio, Alan; Tews, Joerg; Roberts, Dar; Ribeiro, Gabriel H P M; Trumbore, Susan E; Higuchi, Niro

    2013-03-05

    Old-growth forest ecosystems comprise a mosaic of patches in different successional stages, with the fraction of the landscape in any particular state relatively constant over large temporal and spatial scales. The size distribution and return frequency of disturbance events, and subsequent recovery processes, determine to a large extent the spatial scale over which this old-growth steady state develops. Here, we characterize this mosaic for a Central Amazon forest by integrating field plot data, remote sensing disturbance probability distribution functions, and individual-based simulation modeling. Results demonstrate that a steady state of patches of varying successional age occurs over a relatively large spatial scale, with important implications for detecting temporal trends on plots that sample a small fraction of the landscape. Long highly significant stochastic runs averaging 1.0 Mg biomass·ha⁻¹·y⁻¹ were often punctuated by episodic disturbance events, resulting in a sawtooth time series of hectare-scale tree biomass. To maximize the detection of temporal trends for this Central Amazon site (e.g., driven by CO2 fertilization), plots larger than 10 ha would provide the greatest sensitivity. A model-based analysis of fractional mortality across all gap sizes demonstrated that 9.1-16.9% of tree mortality was missing from plot-based approaches, underscoring the need to combine plot and remote-sensing methods for estimating net landscape carbon balance. Old-growth tropical forests can exhibit complex large-scale structure driven by disturbance and recovery cycles, with ecosystem and community attributes of hectare-scale plots exhibiting continuous dynamic departures from a steady-state condition.

  4. Dissecting the multi-scale spatial relationship of earthworm assemblages with soil environmental variability.

    PubMed

    Jiménez, Juan J; Decaëns, Thibaud; Lavelle, Patrick; Rossi, Jean-Pierre

    2014-12-05

    Studying the drivers and determinants of species, population and community spatial patterns is central to ecology. The observed structure of community assemblages is the result of deterministic abiotic (environmental constraints) and biotic factors (positive and negative species interactions), as well as stochastic colonization events (historical contingency). We analyzed the role of multi-scale spatial component of soil environmental variability in structuring earthworm assemblages in a gallery forest from the Colombian "Llanos". We aimed to disentangle the spatial scales at which species assemblages are structured and determine whether these scales matched those expressed by soil environmental variables. We also tested the hypothesis of the "single tree effect" by exploring the spatial relationships between root-related variables and soil nutrient and physical variables in structuring earthworm assemblages. Multivariate ordination techniques and spatially explicit tools were used, namely cross-correlograms, Principal Coordinates of Neighbor Matrices (PCNM) and variation partitioning analyses. The relationship between the spatial organization of earthworm assemblages and soil environmental parameters revealed explicitly multi-scale responses. The soil environmental variables that explained nested population structures across the multi-spatial scale gradient differed for earthworms and assemblages at the very-fine- (<10 m) to medium-scale (10-20 m). The root traits were correlated with areas of high soil nutrient contents at a depth of 0-5 cm. Information on the scales of PCNM variables was obtained using variogram modeling. Based on the size of the plot, the PCNM variables were arbitrarily allocated to medium (>30 m), fine (10-20 m) and very fine scales (<10 m). Variation partitioning analysis revealed that the soil environmental variability explained from less than 1% to as much as 48% of the observed earthworm spatial variation. 
A large proportion of the spatial variation did not depend on the soil environmental variability for certain species. This finding could indicate the influence of contagious biotic interactions, stochastic factors, or unmeasured relevant soil environmental variables.

  5. Stochastic effects in a seasonally forced epidemic model

    NASA Astrophysics Data System (ADS)

    Rozhnova, G.; Nunes, A.

    2010-10-01

    The interplay of seasonality, the system's nonlinearities, and intrinsic stochasticity is studied for a seasonally forced susceptible-exposed-infective-recovered stochastic model. The model is explored in the parameter region that corresponds to childhood infectious diseases such as measles. The power spectrum of the stochastic fluctuations around the attractors of the deterministic system that describes the model in the thermodynamic limit is computed analytically and validated by stochastic simulations for large system sizes. Size effects are studied through additional simulations. Other effects such as switching between coexisting attractors induced by stochasticity, often mentioned in the literature as playing an important role in the dynamics of childhood infectious diseases, are also investigated. The main conclusion is that stochastic amplification, rather than these effects, is the key ingredient to understand the observed incidence patterns.
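
    The intrinsic stochasticity discussed above can be reproduced with an event-driven simulation. Below is a minimal sketch of Gillespie's direct method for an (unforced) stochastic SEIR model; the parameter values are illustrative placeholders, not values from the paper.

```python
import numpy as np

def gillespie_seir(beta, sigma, gamma, N, s0, e0, i0, t_max, rng):
    """Direct-method Gillespie simulation of a stochastic SEIR model."""
    s, e, i = s0, e0, i0
    t = 0.0
    times, infected = [t], [i]
    while t < t_max:
        rates = np.array([beta * s * i / N,  # S -> E (infection)
                          sigma * e,         # E -> I (end of latency)
                          gamma * i])        # I -> R (recovery)
        total = rates.sum()
        if total == 0.0:                     # epidemic has died out
            break
        t += rng.exponential(1.0 / total)
        event = rng.choice(3, p=rates / total)
        if event == 0:
            s, e = s - 1, e + 1
        elif event == 1:
            e, i = e - 1, i + 1
        else:
            i -= 1
        times.append(t)
        infected.append(i)
    return np.array(times), np.array(infected)

rng = np.random.default_rng(0)
t, i = gillespie_seir(beta=1.2, sigma=0.2, gamma=0.5, N=1000,
                      s0=990, e0=5, i0=5, t_max=50.0, rng=rng)
```

Seasonal forcing, as in the paper, would amount to making beta a periodic function of t inside the loop.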

  6. Randomized central limit theorems: A unified theory

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles’ aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles’ extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic—scaling all ensemble components by a common deterministic scale. However, there are “random environment” settings in which the underlying scaling schemes are stochastic—scaling the ensemble components by different random scales. Examples of such settings include Holtsmark’s law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)—in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes—and present “randomized counterparts” to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
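
    The deterministic scaling scheme underlying the classical Gaussian CLT is easy to check numerically. The sketch below standardizes sample means of iid uniform variables with the common deterministic scale sqrt(n)/sigma and verifies that the aggregate statistic is approximately standard normal; sample sizes and thresholds are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 5000          # ensemble size, number of ensembles

# iid Uniform(0,1): mean 1/2, variance 1/12
samples = rng.random((m, n))
means = samples.mean(axis=1)

# Deterministic CLT scaling: (mean - mu) * sqrt(n) / sigma -> N(0, 1)
z = (means - 0.5) * np.sqrt(n) / np.sqrt(1.0 / 12.0)
print(z.mean(), z.std(), (np.abs(z) < 1.96).mean())  # approx. 0, 1, 0.95
```

A "random environment" variant would replace the common factor sqrt(n)/sigma with a different random scale per component, which is the regime the paper's RCLTs address.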

  7. How a small noise generates large-amplitude oscillations of volcanic plug and provides high seismicity

    NASA Astrophysics Data System (ADS)

    Alexandrov, Dmitri V.; Bashkirtseva, Irina A.; Ryashko, Lev B.

    2015-04-01

    The non-linear behavior of a dynamic model of the magma-plug system under the action of an N-shaped friction force and stochastic disturbances is studied. It is shown that the deterministic dynamics essentially depends on the mutual arrangement of an equilibrium point and the friction force branches. Variations of this arrangement imply bifurcations, birth and disappearance of stable limit cycles, changes of the stability of equilibria, and system transformations between mono- and bistable regimes. The slope of the right increasing branch of the friction function is responsible for the formation of such regimes. In a bistable zone, the noise generates transitions between small- and large-amplitude stochastic oscillations. In a monostable zone with a single stable equilibrium, a new dynamic phenomenon of noise-induced generation of large-amplitude stochastic oscillations in the plug rate and pressure is revealed. A beat-type dynamics of the plug displacement under the influence of stochastic forcing is studied as well.
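
    Noise-induced transitions between small- and large-amplitude regimes are commonly illustrated with a bistable overdamped toy model. The sketch below integrates dx = (x - x^3) dt + sigma dW by the Euler-Maruyama method; this is a generic double-well system, not the magma-plug model itself, and all values are invented for illustration.

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, t_max, dt, rng):
    """Euler-Maruyama integration of dx = drift(x) dt + sigma dW."""
    steps = int(t_max / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        x[k + 1] = (x[k] + drift(x[k]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

# Double-well drift: stable equilibria at x = +/-1, barrier at x = 0
drift = lambda x: x - x**3
rng = np.random.default_rng(1)
x = euler_maruyama(drift, sigma=0.6, x0=1.0, t_max=200.0, dt=0.01, rng=rng)
# At this noise level the trajectory hops between the two wells
```

Reducing sigma lengthens the Kramers escape time exponentially, so the same trajectory stays trapped near one equilibrium.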

  8. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks.

    PubMed

    Bressloff, Paul C

    2015-01-01

    We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between fast spiking dynamics with time constant τa and slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter ϵ = τa/τ, which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit ϵ → 0). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. Finally, we carry out a loop expansion of the path integral in powers of ϵ, and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation.

  9. Planetary Rings

    NASA Astrophysics Data System (ADS)

    Esposito, Larry

    2014-03-01

    Preface: a personal view of planetary rings; 1. Introduction: the allure of the ringed planets; 2. Studies of planetary rings 1610-2013; 3. Diversity of planetary rings; 4. Individual ring particles and their collisions; 5. Large-scale ring evolution; 6. Moons confine and sculpt rings; 7. Explaining ring phenomena; 8. N-body simulations; 9. Stochastic models; 10. Age and evolution of rings; 11. Saturn's mysterious F ring; 12. Uranus' rings and moons; 13. Neptune's partial rings; 14. Jupiter's ring-moon system after Galileo and New Horizons; 15. Ring photometry; 16. Dusty rings; 17. Concluding remarks; Afterword; Glossary; References; Index.

  10. Multi-Parent Clustering Algorithms from Stochastic Grammar Data Models

    NASA Technical Reports Server (NTRS)

    Mjolsness, Eric; Castano, Rebecca; Gray, Alexander

    1999-01-01

    We introduce a statistical data model and an associated optimization-based clustering algorithm which allows data vectors to belong to zero, one or several "parent" clusters. For each data vector the algorithm makes a discrete decision among these alternatives. Thus, a recursive version of this algorithm would place data clusters in a Directed Acyclic Graph rather than a tree. We test the algorithm with synthetic data generated according to the statistical data model. We also illustrate the algorithm using real data from large-scale gene expression assays.

  11. Flow Topology Transition via Global Bifurcation in Thermally Driven Turbulence

    NASA Astrophysics Data System (ADS)

    Xie, Yi-Chao; Ding, Guang-Yu; Xia, Ke-Qing

    2018-05-01

    We report an experimental observation of a flow topology transition via global bifurcation in turbulent Rayleigh-Bénard convection. This transition corresponds to a spontaneous symmetry breaking as the flow becomes more turbulent. Simultaneous measurements of the large-scale flow (LSF) structure and the heat transport show that the LSF bifurcates from a high heat transport efficiency quadrupole state to a less symmetric dipole state with a lower heat transport efficiency. In the transition zone, the system switches spontaneously and stochastically between the two long-lived metastable states.

  12. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.

    PubMed

    Dhar, Amrit; Minin, Vladimir N

    2017-05-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
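
    The two mapping summaries named above, substitution counts and dwelling times, are straightforward to sample on a single branch. The sketch below simulates unconditional substitution histories under a Jukes-Cantor model and estimates the mean and variance of the count by simulation (the kind of quantity the paper computes without simulation); the rate matrix and branch length are illustrative, and a full stochastic mapping would also condition on observed tip states.

```python
import numpy as np

def simulate_branch(Q, state0, t_branch, rng):
    """Simulate a CTMC substitution history along one branch.
    Returns (n_subs, dwell), where dwell[i] is the time spent in state i."""
    n = Q.shape[0]
    state, t = state0, 0.0
    n_subs = 0
    dwell = np.zeros(n)
    while True:
        rate = -Q[state, state]               # total exit rate
        wait = rng.exponential(1.0 / rate)
        if t + wait >= t_branch:              # branch ends before next jump
            dwell[state] += t_branch - t
            return n_subs, dwell
        dwell[state] += wait
        t += wait
        probs = Q[state].copy()               # jump to a different state
        probs[state] = 0.0
        probs /= probs.sum()
        state = rng.choice(n, p=probs)
        n_subs += 1

# Jukes-Cantor rate matrix with total rate 1 (illustrative)
Q = np.full((4, 4), 1.0 / 3.0)
np.fill_diagonal(Q, -1.0)
rng = np.random.default_rng(42)
samples = [simulate_branch(Q, 0, 2.0, rng) for _ in range(5000)]
counts = np.array([s[0] for s in samples])
# Under JC with total rate 1, counts on a branch of length t are Poisson(t),
# so for t = 2 both the mean and the variance should be close to 2.
print(counts.mean(), counts.var())
```

The dwelling times for each sample sum exactly to the branch length, which is a useful sanity check on any such simulator.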

  13. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time

    PubMed Central

    Dhar, Amrit

    2017-01-01

    Abstract Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780

  14. Stochastic inflation in phase space: is slow roll a stochastic attractor?

    NASA Astrophysics Data System (ADS)

    Grain, Julien; Vennin, Vincent

    2017-05-01

    An appealing feature of inflationary cosmology is the presence of a phase-space attractor, "slow roll", which washes out the dependence on initial field velocities. We investigate the robustness of this property under backreaction from quantum fluctuations using the stochastic inflation formalism in the phase-space approach. A Hamiltonian formulation of stochastic inflation is presented, where it is shown that the coarse-graining procedure—where wavelengths smaller than the Hubble radius are integrated out—preserves the canonical structure of free fields. This means that different sets of canonical variables give rise to the same probability distribution, which clarifies the literature with respect to this issue. The role played by the quantum-to-classical transition is also analysed and is shown to constrain the coarse-graining scale. In the case of free fields, we find that quantum diffusion is aligned in phase space with the slow-roll direction. This implies that the classical slow-roll attractor is immune to stochastic effects and thus generalises to a stochastic attractor regardless of initial conditions, with a relaxation time at least as short as in the classical system. For non-test fields or for test fields with non-linear self interactions however, quantum diffusion and the classical slow-roll flow are misaligned. We derive a condition on the coarse-graining scale so that observational corrections from this misalignment are negligible at leading order in slow roll.
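
    In the coarse-grained single-field description, the field obeys a Langevin equation in e-fold time N, dφ = -(V'/3H²) dN + (H/2π) dW. A minimal Euler-Maruyama sketch for a quadratic test potential follows; the mass, initial condition, and step sizes are invented for illustration (units M_Pl = 1).

```python
import numpy as np

def stochastic_slow_roll(phi0, dv, hubble, n_efolds, dN, rng):
    """Euler-Maruyama integration of the stochastic slow-roll Langevin
    equation dphi = -(V'/3H^2) dN + (H/2pi) dW, in e-fold time N."""
    steps = int(n_efolds / dN)
    phi = np.empty(steps + 1)
    phi[0] = phi0
    for k in range(steps):
        h = hubble(phi[k])
        drift = -dv(phi[k]) / (3.0 * h * h)
        phi[k + 1] = (phi[k] + drift * dN
                      + h / (2.0 * np.pi) * np.sqrt(dN) * rng.standard_normal())
    return phi

# Quadratic potential V = m^2 phi^2 / 2, with H^2 = V / 3 in slow roll
m = 1e-5
dV = lambda p: m * m * p
H = lambda p: np.sqrt(m * m * p * p / 6.0)
rng = np.random.default_rng(7)
phi = stochastic_slow_roll(15.0, dV, H, n_efolds=10.0, dN=0.01, rng=rng)
# For this potential the drift reduces to dphi/dN = -2/phi, so the field
# rolls slowly downhill while quantum diffusion adds tiny kicks
```

The paper's phase-space formulation would evolve the field and its momentum as separate stochastic variables instead of imposing the slow-roll drift by hand.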

  15. Efficient Storage Scheme of Covariance Matrix during Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Mao, D.; Yeh, T. J.

    2013-12-01

    During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty as observed data are incorporated. For large-scale problems, its storage and update consume too much memory and too many computational resources. In this study, we propose a new efficient scheme for storage and update. The Compressed Sparse Column (CSC) format is utilized to store the covariance matrix, and users can choose how many entries to store based on correlation scales, since entries beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated. The off-diagonal terms are calculated and updated from shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g. 0.95, every iteration to represent the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to try several values. The new scheme is first tested with 1D examples, and the estimated results and uncertainty are compared with those of the traditional full-storage method. In the end, a large-scale numerical model is utilized to validate the new scheme.
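
    The storage idea can be sketched with SciPy's sparse matrices. The example below builds a 1D exponential covariance, truncates entries beyond a few correlation scales, stores the result in CSC format, and mimics one iteration by shortening the correlation scale; the grid, cutoff, and shortening coefficient are illustrative choices, not values from the study.

```python
import numpy as np
from scipy.sparse import csc_matrix

def sparse_exp_covariance(x, variance, corr_scale, cutoff=3.0):
    """Exponential covariance C_ij = var * exp(-|x_i - x_j| / L),
    truncated beyond `cutoff` correlation scales and stored in CSC format."""
    d = np.abs(x[:, None] - x[None, :])
    c = variance * np.exp(-d / corr_scale)
    c[d > cutoff * corr_scale] = 0.0      # drop weakly correlated entries
    return csc_matrix(c)

x = np.linspace(0.0, 100.0, 201)          # 1D grid with 0.5 spacing
C = sparse_exp_covariance(x, variance=1.0, corr_scale=5.0)
# Mimic one iteration: shorten the correlation scale by the coefficient 0.95
C_next = sparse_exp_covariance(x, variance=1.0, corr_scale=5.0 * 0.95)
print(C.nnz, C_next.nnz)                  # fewer stored entries each iteration
```

For large grids one would assemble the truncated entries directly in sparse form rather than densifying first, but the storage pattern is the same.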

  16. Modeling Stochastic Energy and Water Consumption to Manage Residential Water Uses

    NASA Astrophysics Data System (ADS)

    Abdallah, A. M.; Rosenberg, D. E.; Water; Energy Conservation

    2011-12-01

    Water-energy linkages have received growing attention from water and energy utilities as they recognize that collaborative efforts can implement more effective conservation and efficiency improvement programs at lower cost with less effort. To date, limited energy-water household data has allowed only deterministic analysis for average, representative households and required coarse assumptions - such as treating the water heater (the primary energy use in a home apart from heating and cooling) as a single end use. Here, we use recently available disaggregated hot and cold water household end-use data to estimate water and energy consumption for toilet, shower, faucet, dishwasher, laundry machine, leaks, and other household uses, and savings from appliance retrofits. The disaggregated hot water and bulk water end-use data was previously collected by the USEPA for 96 single-family households in Seattle, WA, Oakland, CA, and Tampa, FL between 2000 and 2003, for two weeks before and four weeks after each household was retrofitted with water efficient appliances. Using the disaggregated data, we developed a stochastic model that represents factors that influence water use for each appliance: behavioral (use frequency and duration), demographical (household size), and technological (use volume or flowrate). We also include stochastic factors that govern energy to heat hot water: hot water fraction (percentage of hot water volume to total water volume used in a certain end-use event), heater water intake and dispense temperatures, and energy source for the heater (gas, electric, etc). From the empirical household end-use data, we derive stochastic probability distributions for each water and energy factor, where each distribution represents the range and likelihood of values that the factor may take. 
The uncertainty of the stochastic water and energy factors is propagated using Monte Carlo simulations to calculate the composite probability distribution for water and energy use, potential savings, and payback periods to install efficient water end-use appliances and fixtures. Stochastic model results show the distributions among households for (i) water end-use, (ii) energy consumed to use water, and (iii) financial payback periods. Compared to deterministic analysis, stochastic modeling results show that hot water fractions for appliances follow normal distributions with high standard deviation and reveal pronounced variations among households that significantly affect energy savings and payback period estimates. These distributions provide an important tool to select and size water conservation programs to simultaneously meet both water and energy conservation goals. They also provide a way to identify and target a small fraction of customers with potential to save large water volumes and energy from appliance retrofits. Future work will embed this household scale stochastic model in city-scale models to identify win-win water management opportunities where households save money by conserving water and energy while cities avoid costs, downsize, or delay infrastructure development.
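
    The uncertainty-propagation step can be sketched as a short Monte Carlo calculation of per-event water-heating energy, E = m c_p ΔT applied to the hot-water share of each event. The distributions below are invented placeholders for illustration, not the EPA data or the paper's fitted distributions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Per-event stochastic factors (illustrative distributions)
volume_l   = rng.lognormal(mean=np.log(60.0), sigma=0.3, size=n)  # event volume, L
hot_frac   = np.clip(rng.normal(0.65, 0.15, size=n), 0.0, 1.0)    # hot-water fraction
t_intake_c = rng.normal(15.0, 3.0, size=n)                        # inlet temperature, C
t_heater_c = 55.0                                                 # heater setpoint, C

# Energy to heat the hot-water share of each event: E = m * c_p * dT
c_p = 4.186e3  # J/(kg K); 1 L of water is about 1 kg
energy_kwh = volume_l * hot_frac * c_p * (t_heater_c - t_intake_c) / 3.6e6

print(np.percentile(energy_kwh, [10, 50, 90]))  # composite distribution summary
```

Replacing a factor's distribution (e.g. a lower post-retrofit volume) and re-running gives the distribution of savings rather than a single deterministic estimate.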

  17. Stochastic does not equal ad hoc. [theories of lunar origin

    NASA Technical Reports Server (NTRS)

    Hartmann, W. K.

    1984-01-01

    Some classes of influential events in solar system history are class-predictable but not event-predictable. Theories of lunar origin should not ignore class-predictable stochastic events. Impacts and close encounters with large objects during planet formation are class-predictable. These stochastic events, such as large impacts that triggered ejection of Earth-mantle material into a circum-Earth cloud, should not be rejected as ad hoc. A way to deal with such events scientifically is to investigate their consequences; if it can be shown that they might produce the Moon, they become viable concepts in theories of lunar origin.

  18. Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  19. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  20. Introduction to Econophysics

    NASA Astrophysics Data System (ADS)

    Mantegna, Rosario N.; Stanley, H. Eugene

    2007-08-01

    Preface; 1. Introduction; 2. Efficient market hypothesis; 3. Random walk; 4. Lévy stochastic processes and limit theorems; 5. Scales in financial data; 6. Stationarity and time correlation; 7. Time correlation in financial time series; 8. Stochastic models of price dynamics; 9. Scaling and its breakdown; 10. ARCH and GARCH processes; 11. Financial markets and turbulence; 12. Correlation and anti-correlation between stocks; 13. Taxonomy of a stock portfolio; 14. Options in idealized markets; 15. Options in real markets; Appendix A: notation guide; Appendix B: martingales; References; Index.

  1. The development of magnetic field line wander in gyrokinetic plasma turbulence: dependence on amplitude of turbulence

    NASA Astrophysics Data System (ADS)

    Bourouaine, Sofiane; Howes, Gregory G.

    2017-06-01

    The dynamics of a turbulent plasma not only manifests the transport of energy from large to small scales, but also can lead to a tangling of the magnetic field that threads through the plasma. The resulting magnetic field line wander can have a large impact on a number of other important processes, such as the propagation of energetic particles through the turbulent plasma. Here we explore the saturation of the turbulent cascade, the development of stochasticity due to turbulent tangling of the magnetic field lines and the separation of field lines through the turbulent dynamics using nonlinear gyrokinetic simulations of weakly collisional plasma turbulence, relevant to many turbulent space and astrophysical plasma environments. We determine the characteristic time for the saturation of the turbulent perpendicular magnetic energy spectrum. We find that the turbulent magnetic field becomes completely stochastic on a timescale comparable to this saturation time for strong turbulence, and on a longer timescale for weak turbulence. However, when the nonlinearity parameter of the turbulence, a dimensionless measure of the amplitude of the turbulence, reaches a threshold value (within the regime of weak turbulence), the magnetic field stochasticity does not fully develop, at least within the simulated evolution time interval. Finally, we quantify the mean square displacement of magnetic field lines in the turbulent magnetic field with a functional form ⟨(δr)²⟩ = A(z/L∥)^p (where L∥ is the correlation length parallel to the background magnetic field B₀ and z is the distance along the B₀ direction), providing functional forms of the amplitude coefficient A and power-law exponent p as a function of the nonlinearity parameter.
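
    Given measured mean-square displacements, the amplitude A and exponent p of a power law ⟨(δr)²⟩ = A(z/L∥)^p can be recovered by least squares in log-log space. A sketch on synthetic data follows; A, p, and the noise level are invented for illustration and are not values from the paper.

```python
import numpy as np

# Synthetic mean-square field-line displacement obeying <dr^2> = A (z/L_par)^p
L_par = 1.0
A_true, p_true = 0.04, 1.5
z = np.linspace(0.1, 10.0, 50)
rng = np.random.default_rng(5)
msd = A_true * (z / L_par) ** p_true * rng.lognormal(0.0, 0.05, size=z.size)

# Least-squares fit in log-log space: log msd = log A + p * log(z/L_par)
p_fit, logA_fit = np.polyfit(np.log(z / L_par), np.log(msd), 1)
A_fit = np.exp(logA_fit)
```

Repeating this fit across simulations with different nonlinearity parameters would yield A and p as functions of the turbulence amplitude, as the paper reports.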

  2. Bubonic plague: a metapopulation model of a zoonosis.

    PubMed Central

    Keeling, M J; Gilligan, C A

    2000-01-01

    Bubonic plague (Yersinia pestis) is generally thought of as a historical disease; however, it is still responsible for around 1000-3000 deaths each year worldwide. This paper expands the analysis of a model for bubonic plague that encompasses the disease dynamics in rat, flea and human populations. Some key variables of the deterministic model, including the force of infection to humans, are shown to be robust to changes in the basic parameters, although variation in the flea searching efficiency, and the movement rates of rats and fleas will be considered throughout the paper. The stochastic behaviour of the corresponding metapopulation model is discussed, with attention focused on the dynamics of rats and the force of infection at the local spatial scale. Short-lived local epidemics in rats govern the invasion of the disease and produce an irregular pattern of human cases similar to those observed. However, the endemic behaviour in a few rat subpopulations allows the disease to persist for many years. This spatial stochastic model is also used to identify the criteria for the spread to human populations in terms of the rat density. Finally, the full stochastic model is reduced to the form of a probabilistic cellular automaton, which allows the analysis of a large number of replicated epidemics in large populations. This simplified model enables us to analyse the spatial properties of rat epidemics and the effects of movement rates, and also to test whether the emergent metapopulation behaviour is a property of the local dynamics rather than the precise details of the model. PMID:11413636

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duncan, Andrew, E-mail: a.duncan@imperial.ac.uk; Erban, Radek, E-mail: erban@maths.ox.ac.uk; Zygalakis, Konstantinos, E-mail: k.zygalakis@ed.ac.uk

    Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the “fast” reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low reactant reactions the underlying behaviour is purely discrete, while purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.
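
    The two limiting regimes bridged by the jump-diffusion model can be sketched for a simple birth-death process, X → X+1 at rate k₊ and X → X−1 at rate k₋X: the Gillespie SSA treats the state as discrete, while the chemical Langevin equation treats it as a diffusion. Parameter values are illustrative; the paper's hybrid scheme would switch or blend between these two descriptions by copy number.

```python
import numpy as np

def gillespie_birth_death(k_plus, k_minus, x0, t_max, rng):
    """Gillespie SSA for X -> X+1 at rate k_plus, X -> X-1 at rate k_minus*X."""
    x, t = x0, 0.0
    while t < t_max:
        a1, a2 = k_plus, k_minus * x
        total = a1 + a2
        t += rng.exponential(1.0 / total)
        if t >= t_max:
            break
        x += 1 if rng.random() < a1 / total else -1
    return x

def langevin_birth_death(k_plus, k_minus, x0, t_max, dt, rng):
    """Chemical Langevin (Euler-Maruyama) approximation of the same process."""
    x = float(x0)
    for _ in range(int(t_max / dt)):
        a1, a2 = k_plus, k_minus * max(x, 0.0)
        x += (a1 - a2) * dt + np.sqrt((a1 + a2) * dt) * rng.standard_normal()
    return x

rng = np.random.default_rng(11)
ssa = [gillespie_birth_death(50.0, 1.0, 0, 20.0, rng) for _ in range(200)]
cle = [langevin_birth_death(50.0, 1.0, 0, 20.0, 0.01, rng) for _ in range(200)]
print(np.mean(ssa), np.mean(cle))  # both near the steady state k_plus/k_minus = 50
```

With abundant copy numbers the two ensembles agree; the discrepancy (and the case for the hybrid model) appears when the mean copy number approaches zero.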

  4. Uncertainty Reduction for Stochastic Processes on Complex Networks

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Castellano, Claudio

    2018-05-01

    Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.

  5. Phase-Space Transport of Stochastic Chaos in Population Dynamics of Virus Spread

    NASA Astrophysics Data System (ADS)

    Billings, Lora; Bollt, Erik M.; Schwartz, Ira B.

    2002-06-01

    A general way to classify stochastic chaos is presented and applied to population dynamics models. A stochastic dynamical theory is used to develop an algorithmic tool to measure the transport across basin boundaries and predict the most probable regions of transport created by noise. The results of this tool are illustrated on a model of virus spread in a large population, where transport regions reveal how noise completes the necessary manifold intersections for the creation of emerging stochastic chaos.

  6. On square-wave-driven stochastic resonance for energy harvesting in a bistable system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Dongxu, E-mail: sudx@iis.u-tokyo.ac.jp; Zheng, Rencheng; Nakano, Kimihiko

    Stochastic resonance is a physical phenomenon through which the throughput of energy within an oscillator excited by a stochastic source can be boosted by adding a small modulating excitation. This study investigates the feasibility of implementing square-wave-driven stochastic resonance to enhance energy harvesting. The motivating hypothesis was that such stochastic resonance can be efficiently realized in a bistable mechanism. However, the condition for the occurrence of stochastic resonance is conventionally defined by the Kramers rate; this definition is inadequate because of the necessity, and difficulty, of estimating the white noise density. A bistable mechanism has been designed using an explicit analytical model, which suggests a new approach for achieving stochastic resonance. Experimental tests confirm that the addition of a small-scale force to the bistable system excited by a random signal leads to a corresponding amplification of the response, which we now term square-wave-driven stochastic resonance. The study therefore indicates that this approach may be a promising way to improve the performance of an energy harvester under certain forms of random excitation.
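The mechanism described above can be illustrated with a toy model (not the authors' experimental system): an overdamped double-well dx = (x - x³ + A·sq(t)) dt + σ dW integrated with Euler-Maruyama, where the square wave sq(t) plays the role of the small modulating excitation. All parameter values here are illustrative assumptions.

```python
import math
import random

def simulate_bistable(A=0.2, sigma=0.3, period=20.0, dt=1e-3, t_end=200.0, seed=1):
    """Euler-Maruyama integration of dx = (x - x**3 + A*sq(t)) dt + sigma dW:
    a toy overdamped double-well driven by white noise plus a weak square wave.
    All parameter values are illustrative, not taken from the study."""
    rng = random.Random(seed)
    x = 1.0                       # start in the right-hand well (x = +1)
    xs = []
    n_steps = int(t_end / dt)
    for k in range(n_steps):
        t = k * dt
        sq = A if (t % period) < period / 2 else -A   # square-wave modulation
        x += (x - x**3 + sq) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = simulate_bistable()
# inter-well hopping would show up as excursions to negative x
print(min(xs), max(xs))
```

With the modulation amplitude A well below the deterministic switching threshold, any inter-well transitions are noise-activated; the square wave merely biases when they occur, which is the resonance effect exploited for harvesting.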

  7. Design of a High Luminosity 100 TeV Proton-Antiproton Collider

    NASA Astrophysics Data System (ADS)

    Oliveros Tautiva, Sandra Jimena

    Currently new physics is being explored with the Large Hadron Collider at CERN and with Intensity Frontier programs at Fermilab and KEK. The energy scale for new physics is known to be in the multi-TeV range, signaling the need for a future collider which well surpasses this energy scale. A 10^34 cm^-2 s^-1 luminosity 100 TeV proton-antiproton collider is explored with 7× the energy of the LHC. The dipoles are 4.5 T to reduce cost. A proton-antiproton collider is selected as a future machine for several reasons. The cross section for many high mass states is 10 times higher in p̄p than pp collisions. Antiquarks for production can come directly from an antiproton rather than indirectly from gluon splitting. The higher cross sections reduce the synchrotron radiation in superconducting magnets and the number of events per bunch crossing, because lower beam currents can produce the same rare event rates. Events are also more centrally produced, allowing a more compact detector with less space between quadrupole triplets and a smaller β* for higher luminosity. To adjust to antiproton beam losses (burn rate), a Fermilab-like antiproton source would be adapted to disperse the beam into 12 different momentum channels, using electrostatic septa, to increase antiproton momentum capture 12 times. At Fermilab, antiprotons were stochastically cooled in one Debuncher and one Accumulator ring. Because the stochastic cooling time scales as the number of particles, two options of 12 independent cooling systems are presented. One electron cooling ring might follow the stochastic cooling rings for antiproton stacking. Finally antiprotons in the collider ring would be recycled during runs without leaving the collider ring, by joining them to new bunches with snap bunch coalescence and synchrotron damping. These basic ideas are explored in this work on a future 100 TeV proton-antiproton collider and the main parameters are presented.

  8. Design of a High Luminosity 100 TeV Proton Antiproton Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveros Tautiva, Sandra Jimena

    2017-04-01

    Currently new physics is being explored with the Large Hadron Collider at CERN and with Intensity Frontier programs at Fermilab and KEK. The energy scale for new physics is known to be in the multi-TeV range, signaling the need for a future collider which well surpasses this energy scale. A 10^34 cm^-2 s^-1 luminosity 100 TeV proton-antiproton collider is explored with 7× the energy of the LHC. The dipoles are 4.5 T to reduce cost. A proton-antiproton collider is selected as a future machine for several reasons. The cross section for many high mass states is 10 times higher in p̄p than pp collisions. Antiquarks for production can come directly from an antiproton rather than indirectly from gluon splitting. The higher cross sections reduce the synchrotron radiation in superconducting magnets and the number of events per bunch crossing, because lower beam currents can produce the same rare event rates. Events are also more centrally produced, allowing a more compact detector with less space between quadrupole triplets and a smaller β* for higher luminosity. To adjust to antiproton beam losses (burn rate), a Fermilab-like antiproton source would be adapted to disperse the beam into 12 different momentum channels, using electrostatic septa, to increase antiproton momentum capture 12 times. At Fermilab, antiprotons were stochastically cooled in one Debuncher and one Accumulator ring. Because the stochastic cooling time scales as the number of particles, two options of 12 independent cooling systems are presented. One electron cooling ring might follow the stochastic cooling rings for antiproton stacking. Finally antiprotons in the collider ring would be recycled during runs without leaving the collider ring, by joining them to new bunches with snap bunch coalescence and synchrotron damping. These basic ideas are explored in this work on a future 100 TeV proton-antiproton collider and the main parameters are presented.

  9. The U.S. Shale Oil and Gas Resource - a Multi-Scale Analysis of Productivity

    NASA Astrophysics Data System (ADS)

    O'sullivan, F.

    2014-12-01

    Over the past decade, the large-scale production of natural gas, and more recently oil, from U.S. shale formations has had a transformative impact on the energy industry. The emergence of shale oil and gas as recoverable resources has altered perceptions regarding both the future abundance and cost of hydrocarbons, and has shifted the balance of global energy geopolitics. However, despite the excitement, shale is a resource in its nascency, and many challenges surrounding its exploitation remain. One of the most significant of these is the dramatic variation in resource productivity across multiple length scales, which is a feature of all of today's shale plays. This paper will describe the results of work that has looked to characterize the spatial and temporal variations in the productivity of the contemporary shale resource. Analysis will be presented that shows there is a strong stochastic element to observed shale well productivity in all the major plays. It will be shown that the nature of this stochasticity is consistent regardless of the specific play being considered. A characterization of this stochasticity will be proposed. As a parallel to the discussion of productivity, the paper will also address the issue of "learning" in shale development. It will be shown that "creaming" trends are observable and that although "absolute" well productivity levels have increased, "specific" productivity levels (i.e. considering well and stimulation size) have actually fallen markedly in many plays. The paper will also show that among individual operators' well ensembles, normalized well-to-well performance distributions are almost identical, and have remained consistent year-to-year. This result suggests little if any systematic learning regarding the effective management of well-to-well performance variability has taken place. The paper will conclude with an articulation of how the productivity characteristics of the shale resource are impacting the resource's economic profile, and the implications of this in terms of the commercial risks associated with shale production activities.

  10. Morphodynamics of Migration Surveyed at Large Spatial and Temporal Scales

    NASA Astrophysics Data System (ADS)

    Aalto, R.; Schwendel, A.; Nicholas, A. P.

    2012-04-01

    The controls on river migration are diverse and often complex. One way forward is to select study rivers that meet certain simplifying conditions: near-pristine (no anthropogenic complications), large size and rapid mobility (resulting in significant change viewable in Landsat imagery), limited geological complexity (no bedrock), steady hydrology (relatively little variation in discharge and sediment load), and simplified base level control (no tides or other substantial perturbations). Such systems could then be measured at appropriate spatial and temporal scales to extract the reach-scale dynamics while averaging out the more stochastic behaviour of individual meander bends. Such an approach requires both special rivers and novel techniques, which we have investigated and present here. The two explored examples are the near-pristine Beni River basin in northern Bolivia (800 km channel length) and the similarly natural Fly-Strickland River basin in Papua New Guinea (400 km channel length) - large, tropical sand-bedded rivers that meet the above criteria. First, we conducted a GIS analysis of migration using image collections that include 1950s military aerial reconnaissance -- this allowed us to characterize mobility decades before the first Landsat satellite was launched. Following this approach, we characterized migration rate, sinuosity, and other parameters at the reach scale of 10km and the temporal scale of 50+ years, with clear patterns of rate and morphology emerging as a function of location within the systems. We conducted extensive fieldwork to explore potential controls on these patterns, with the focus of this talk being the results from DGPS surveys of river and valley slope. The length scale of these rivers, the density of the forested floodplains, and the hostility of the environments precluded the use of standard RTK-DGPS methods. 
Instead, we employed three novel techniques for long baseline (100s of km) DGPS surveys: OmniStar HP/XP GLONASS kinematic RT-DGPS (sub-decimetre), filtered static RT-DGPS using OmniStar VBS (sub-metre), and post-processed DGPS using newly available Precise Point Positioning methods (sub-metre). We compare these novel DGPS techniques simultaneously (recent 2011 and 2010 surveys) and over time (our 2004, 2001, and 1999 surveys), presenting an assessment of their utility for long baseline surveys of large rivers. Additionally, we present a comparison to water surface profiles developed from the raw version of the 2001 SRTM DEM, with the water elevations determined from MINIMUM 1-arc-second values (not the average 3-arc-second values previously released) - this is the first evaluation of such 'minimum' data of which we are aware. The field surveys ultimately produced quality elevation profiles that allow us to characterize and investigate the strong relationships of both reach-scale migration rate and sinuosity to water surface slope - empirical results realized over time and length scales that serve to average out stochastic noise at the bend scale.

  11. Pan-European stochastic flood event set

    NASA Astrophysics Data System (ADS)

    Kadlec, Martin; Pinto, Joaquim G.; He, Yi; Punčochář, Petr; Kelemen, Fanni D.; Manful, Desmond; Palán, Ladislav

    2017-04-01

    Impact Forecasting (IF), the model development center of Aon Benfield, has been developing a large suite of catastrophe flood models on a probabilistic basis for individual countries in Europe. Such natural catastrophes do not follow national boundaries: for example, the major flood in 2016 was responsible for Europe's largest insured loss of USD3.4bn and affected Germany, France, Belgium, Austria and parts of several other countries. Reflecting such needs, IF initiated a pan-European flood event set development which combines cross-country exposures with country based loss distributions to provide more insightful data to re/insurers. Because the observed discharge data are not available across the whole of Europe in sufficient quantity and quality for detailed loss evaluation, a top-down approach was chosen. This approach is based on simulating precipitation from a GCM/RCM model chain followed by a calculation of discharges using rainfall-runoff modelling. IF set up this project in a close collaboration with Karlsruhe Institute of Technology (KIT) regarding the precipitation estimates and with University of East Anglia (UEA) in terms of the rainfall-runoff modelling. KIT's main objective is to provide high resolution daily historical and stochastic time series of key meteorological variables. A purely dynamical downscaling approach with the regional climate model COSMO-CLM (CCLM) is used to generate the historical time series, using re-analysis data as boundary conditions. The resulting time series are validated against the gridded observational dataset E-OBS, and different bias-correction methods are employed. The generation of the stochastic time series requires transfer functions between large-scale atmospheric variables and regional temperature and precipitation fields. 
These transfer functions are developed for the historical time series using reanalysis data as predictors and bias-corrected CCLM simulated precipitation and temperature as predictands. Finally, the transfer functions are applied to a large ensemble of GCM simulations with forcing corresponding to present day climate conditions to generate highly resolved stochastic time series of precipitation and temperature for several thousand years. These time series form the input for the rainfall-runoff model developed by the UEA team. It is a spatially distributed model adapted from the HBV model and will be calibrated for individual basins using historical discharge data. The calibrated model will be driven by the precipitation time series generated by the KIT team to simulate discharges at a daily time step. The uncertainties in the simulated discharges will be analysed using multiple model parameter sets. A number of statistical methods will be used to assess return periods, changes in the magnitudes, changes in the characteristics of floods such as time base and time to peak, and spatial correlations of large flood events. The Pan-European flood stochastic event set will permit a better view of flood risk for market applications.

  12. Cosmic ray flux anisotropies caused by astrospheres

    NASA Astrophysics Data System (ADS)

    Scherer, K.; Strauss, R. D.; Ferreira, S. E. S.; Fichtner, H.

    2016-09-01

    Huge astrospheres or stellar wind bubbles influence the propagation of cosmic rays at energies up to the TeV range and can act as small-scale sinks decreasing the cosmic ray flux. We model such a sink (in 2D) by a sphere of radius 10 pc embedded within a sphere of a radius of 1 kpc. The cosmic ray flux is calculated by means of backward stochastic differential equations from an observer located at r0 to the outer boundary. It turns out that such small-scale sinks can influence the cosmic ray flux at the observer's location by a few permille (i.e. a few 0.1%), which is in the range of the observations by IceCube, Milagro and other large area telescopes.
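The backward-trajectory idea can be caricatured with pure isotropic diffusion (the paper's model contains the full transport physics): pseudo-trajectories are stepped from the observer until they are absorbed by the sink or reach the outer boundary, and the absorbed fraction estimates the flux deficit. The geometry below is scaled down for speed and all values are illustrative assumptions.

```python
import math
import random

def sink_hitting_fraction(obs=(3.0, 0.0), r_sink=1.0, r_outer=10.0,
                          kappa=1.0, dt=0.02, n_traj=200, seed=2):
    """Backward pseudo-trajectories for a pure-diffusion caricature of the
    transport model: step x -> x + sqrt(2*kappa*dt)*xi from the observer
    until the trajectory is absorbed by the sink or exits the outer boundary.
    The absorbed fraction estimates the relative flux deficit at the observer.
    Geometry is scaled down (units arbitrary) so the sketch runs quickly."""
    rng = random.Random(seed)
    step = math.sqrt(2.0 * kappa * dt)
    absorbed = 0
    for _ in range(n_traj):
        x, y = obs
        while True:
            x += step * rng.gauss(0.0, 1.0)
            y += step * rng.gauss(0.0, 1.0)
            r = math.hypot(x, y)
            if r <= r_sink:          # trajectory ends in the sink
                absorbed += 1
                break
            if r >= r_outer:         # trajectory reaches the outer boundary
                break
    return absorbed / n_traj
```

For this 2D annulus the exact hitting probability from radius a is ln(R/a)/ln(R/r_sink) ≈ 0.52 for these numbers, which the Monte Carlo estimate should approach as n_traj grows.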

  13. Extreme value statistics and finite-size scaling at the ecological extinction/laminar-turbulence transition

    NASA Astrophysics Data System (ADS)

    Shih, Hong-Yan; Goldenfeld, Nigel

    Experiments on transitional turbulence in pipe flow seem to show that turbulence is a transient metastable state since the measured mean lifetime of turbulence puffs does not diverge asymptotically at a critical Reynolds number. Yet measurements reveal that the lifetime scales with Reynolds number in a super-exponential way reminiscent of extreme value statistics, and simulations and experiments in Couette and channel flow exhibit directed percolation type scaling phenomena near a well-defined transition. This universality class arises from the interplay between small-scale turbulence and a large-scale collective zonal flow, which exhibit predator-prey behavior. Why is asymptotically divergent behavior not observed? Using directed percolation and a stochastic individual level model of predator-prey dynamics related to transitional turbulence, we investigate the relation between extreme value statistics and power law critical behavior, and show that the paradox is resolved by carefully defining what is measured in the experiments. We theoretically derive the super-exponential scaling law, and using finite-size scaling, show how the same data can give both super-exponential behavior and power-law critical scaling.

  14. Cutting planes for the multistage stochastic unit commitment problem

    DOE PAGES

    Jiang, Ruiwei; Guan, Yongpei; Watson, Jean -Paul

    2016-04-20

    As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.
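To make the minimum-up/down-time polytope concrete: the turn on/off inequalities encode that a unit which starts up must remain on for at least its minimum up time, and symmetrically after a shutdown. A small feasibility checker for a binary schedule (our own illustrative names and conventions, not the paper's formulation; startups too close to the end of the horizon are treated leniently):

```python
def respects_min_times(u, min_up, min_down):
    """Check that a binary on/off schedule u[0..T-1] satisfies
    minimum-up-time and minimum-down-time constraints.
    A startup at t (u[t-1]=0 -> u[t]=1) requires u[t..t+min_up-1] == 1;
    a shutdown at t requires min_down consecutive off periods.
    Runs truncated by the horizon end are accepted (a modeling choice)."""
    T = len(u)
    for t in range(1, T):
        if u[t - 1] == 0 and u[t] == 1:          # startup at t
            run = u[t:t + min_up]
            if len(run) == min_up and any(v == 0 for v in run):
                return False
        if u[t - 1] == 1 and u[t] == 0:          # shutdown at t
            run = u[t:t + min_down]
            if len(run) == min_down and any(v == 1 for v in run):
                return False
    return True

print(respects_min_times([0, 1, 1, 1, 0, 0, 1, 1, 1], 3, 2))  # → True
print(respects_min_times([0, 1, 0, 0, 1, 1], 3, 2))           # → False
```

In the stochastic setting the same logic applies along every scenario path of the scenario tree, which is what makes the convex-hull result for the turn on/off inequalities nontrivial.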

  15. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    PubMed Central

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860

  16. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.

    PubMed

    Vestergaard, Christian L; Génois, Mathieu

    2015-10-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.
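The idea of the temporal Gillespie algorithm can be sketched for SIS dynamics on a sequence of contact snapshots: draw a unit-rate exponential "normalized waiting time" once, and consume it against the time-varying total event rate instead of rejection sampling. The sketch below is a simplified reading of the algorithm (rates constant within each snapshot), with illustrative names; the paper's pseudocode and C++ implementation are the authoritative versions.

```python
import random

def temporal_gillespie_sis(snapshots, dt, beta, mu, infected0, seed=3):
    """SIS contagion on a temporal network given as a list of edge-list
    snapshots, each active for a duration dt.  A normalized waiting time
    tau ~ Exp(1) is consumed against the total event rate, which changes
    as the contact network and the set of infected nodes change."""
    rng = random.Random(seed)
    infected = set(infected0)
    tau = rng.expovariate(1.0)               # normalized waiting time
    for edges in snapshots:
        remaining = dt
        while True:
            # susceptible-infected contacts active in this snapshot
            si = [(i, j) for i, j in edges if (i in infected) != (j in infected)]
            rate = beta * len(si) + mu * len(infected)
            if rate * remaining < tau:       # no event before snapshot ends
                tau -= rate * remaining
                break
            remaining -= tau / rate          # advance to the event time
            if rng.random() < beta * len(si) / rate:
                i, j = rng.choice(si)        # infection along an S-I contact
                infected.add(j if i in infected else i)
            else:                            # recovery of a random infected node
                infected.discard(rng.choice(sorted(infected)))
            tau = rng.expovariate(1.0)       # fresh waiting time for next event
    return infected
```

The speed-up over rejection sampling comes from never proposing events that cannot occur: each exponential draw is spent exactly once, regardless of how the contact network churns between snapshots.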

  17. Cutting planes for the multistage stochastic unit commitment problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Ruiwei; Guan, Yongpei; Watson, Jean -Paul

    As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.

  18. Transversal Fluctuations of the ASEP, Stochastic Six Vertex Model, and Hall-Littlewood Gibbsian Line Ensembles

    NASA Astrophysics Data System (ADS)

    Corwin, Ivan; Dimitrov, Evgeni

    2018-05-01

    We consider the ASEP and the stochastic six vertex model started with step initial data. After a long time, T, it is known that the one-point height function fluctuations for these systems are of order T^{1/3}. We prove the KPZ prediction of T^{2/3} scaling in space. Namely, we prove tightness (and Brownian absolute continuity of all subsequential limits) as T goes to infinity of the height function with spatial coordinate scaled by T^{2/3} and fluctuations scaled by T^{1/3}. The starting point for proving these results is a connection discovered recently by Borodin-Bufetov-Wheeler between the stochastic six vertex height function and the Hall-Littlewood process (a certain measure on plane partitions). Interpreting this process as a line ensemble with a Gibbsian resampling invariance, we show that the one-point tightness of the top curve can be propagated to the tightness of the entire curve.

  19. The first new application of the mathematical theory of stochastic processes to lunar and planetary science: topography profile diagrams of Mars

    NASA Astrophysics Data System (ADS)

    Salamuniccar, G.

    The Mathematical Statistics Theory (MST) and the Mathematical Theory of Stochastic Processes (MTSP) are different branches of the more general Mathematical Probability Theory (MPT), which represents different aspects of physical processes that we can analyze using mathematics. Each model of a stochastic process according to MTSP can provide one or more interpretations in the MST domain. A large body of work on impact crater statistics according to MST was done many years ago, e.g., Cratering Chronology Diagrams (CCDs) shown in log/log scale, giving the cumulative crater frequency [N km^-2] as a function of age [years] for a particular crater diameter. However, all this is only one possible representation in the MST domain of the bombardment of a planetary surface modeled as a stochastic process according to MTSP. The idea that other representations in the MST domain of the same stochastic process from MTSP are possible was recently presented [G. Salamunićcar, Adv. Space Res., in press]. The importance of the approach is that each such interpretation can provide a large amount of new information. Topography Profile Diagrams (TPDs) are one example, which with MOLA data provide us with a large amount of new information regarding the history of Mars. A TPD consists of [34thLPS #1403]: (1) a Topography-Profile Curve (TPC) that represents the planetary topography, (2) a Density-of-Craters Curve (DCC) that represents the density of craters, (3) a Filtered-DCC (FDCC) that is the DCC filtered by a low-pass filter included to reduce noise, and (4) a Level-of-Substance-Over-Time Curve (LSOTC). While the definition of the TPC uniquely determines how it is computed, the same is not true of the DCC and FDCC. While the DCC depends on algorithms for computing crater altitude from the topography, center coordinates and radius of an impact crater [34thLPS #1409], the FDCC depends on the architecture of the custom-designed low-pass filter used to filter the DCC [34thLPS #1415]. However, all variations of the DCC and FDCC, including different input crater data-sets, confirmed a correlation between the density of craters and topographic altitude over 70-80% of the planet's surface. Under the assumption that an ocean primarily caused the noted correlation, the LSOTC additionally offers, for the first time, a mathematical approach for computing how the level of the ocean changed over time [6thMars #3187]. Accordingly, the conclusion is that TPDs are the first new practical application of MTSP to Lunar and Planetary Science (LPS).

  20. Data-driven spectral filters for decomposing the streamwise turbulent kinetic energy in turbulent boundary layers

    NASA Astrophysics Data System (ADS)

    Baars, Woutijn J.; Hutchins, Nicholas; Marusic, Ivan

    2017-11-01

    Organization in wall-bounded turbulence is evidenced by the classification of distinctly different flow structures, including large-scale motions such as hairpin packets and very large-scale motions or superstructures. In conjunction with less organized turbulence, these flow structures all contribute to the streamwise turbulent kinetic energy. Since different classes of structures comprise dissimilar scalings of their overlapping imprints in the streamwise velocity spectra, their coexistence complicates the interpretation of the wall-normal trend in this energy and its Reynolds number dependence. Via coherence analyses of two-point data in boundary layers we derive spectral filters for stochastically decomposing the streamwise spectra into sub-components, representing different types of statistical flow structures. It is also explored how the decomposition reflects the spectral break-down following the modeling attempts of Perry et al. 1986 and Marusic & Perry 1995. In the process we reveal a universal wall-scaling for a portion of the outer-region turbulence that is coherent with the near-wall region for Re_τ from O(10^3) to O(10^6), which is described as a wall-attached self-similar structure embedded within the logarithmic region.

  1. Accurate representation of organized convection in CFSv2 via a stochastic lattice model

    NASA Astrophysics Data System (ADS)

    Goswami, B. B.; Khouider, B.; Krishna, R. P. M. M.; Mukhopadhyay, P.; Majda, A.

    2016-12-01

    General circulation models (GCMs) show limitations of various sorts in their representation of the synoptic and intra-seasonal variability associated with tropical convective systems, the success of superparameterization and cloud-system-permitting global models notwithstanding. This systematic deficiency is believed to be due to the inadequate treatment of organized convection by the underlying cumulus parameterizations, which have the quasi-equilibrium assumption as a common denominator. By its nature, this assumption neglects the continuous interactions across scales between convection and the large-scale dynamics. By design, the stochastic multicloud model (SMCM) mimics the interactions between the three cloud types, congestus, deep, and stratiform, that are observed to play a central role across multiple scales in the dynamics and physical structure of tropical convective systems. It is based on a stochastic lattice model, overlaid over each GCM grid box, in which an order parameter takes the values 0, 1, 2, or 3 at each lattice site according to whether the site is clear sky or occupied by a congestus, deep, or stratiform cloud, respectively. As such the SMCM mimics the unresolved variability due to cumulus convection and the interactions across multiple scales of organized convective systems, following the philosophy of superparameterization. Here, we discuss the implementation of the SMCM in the NCEP Climate Forecast System model (CFS), version 2, through the use of a simple parameterization of adiabatic heating and moisture sink due to cumulus clouds based on their observed vertical profiles (a.k.a. Q1 and Q2). Much like the success of superparameterization but without the burden of high computational cost, a 20 year run showed tremendous improvements in the ability of the CFS-SMCM model to represent synoptic and intraseasonal variability associated with organized convection, as well as a few minor improvements in the simulated climatology, when compared to the control CFSv2 model which is based on the widely used simplified Arakawa-Schubert parameterization. This extraordinary improvement comes despite the fact that CFSv2 is one of the best GCMs in terms of its representation of intra-seasonal oscillations in the tropical atmosphere.
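The lattice picture can be caricatured as follows: each site of a small lattice overlaid on a grid box holds an order parameter in {0, 1, 2, 3} (clear sky, congestus, deep, stratiform) and flips between states stochastically. In the real SMCM the transition rates depend on large-scale predictors such as CAPE and mid-tropospheric dryness; the fixed constants below are purely illustrative.

```python
import random

# order-parameter values for each lattice site
CLEAR, CONGESTUS, DEEP, STRATIFORM = 0, 1, 2, 3

# illustrative per-sweep transition probabilities (NOT the SMCM's rates,
# which depend on large-scale predictors such as CAPE and dryness)
TRANSITIONS = {
    CLEAR:      [(CONGESTUS, 0.05)],
    CONGESTUS:  [(DEEP, 0.10), (CLEAR, 0.05)],
    DEEP:       [(STRATIFORM, 0.10)],
    STRATIFORM: [(CLEAR, 0.10)],
}

def step(lattice, rng):
    """One discrete-time sweep of the cloud lattice: each site independently
    attempts its allowed transitions with the tabulated probabilities."""
    new = []
    for state in lattice:
        r = rng.random()
        acc = 0.0
        nxt = state
        for target, p in TRANSITIONS[state]:
            acc += p
            if r < acc:
                nxt = target
                break
        new.append(nxt)
    return new

rng = random.Random(0)
lattice = [CLEAR] * 100          # one coarse grid box, 100 lattice sites
for _ in range(500):
    lattice = step(lattice, rng)
cloud_fraction = sum(s != CLEAR for s in lattice) / len(lattice)
```

Area fractions such as cloud_fraction (per cloud type, in the full model) are what the lattice hands back to the GCM to modulate the heating profiles Q1 and Q2.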

  2. Noise analysis of genome-scale protein synthesis using a discrete computational model of translation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Racle, Julien; Hatzimanikatis, Vassily, E-mail: vassily.hatzimanikatis@epfl.ch; Swiss Institute of Bioinformatics

    2015-07-28

    Noise in genetic networks has been the subject of extensive experimental and computational studies. However, very few of these studies have considered noise properties using mechanistic models that account for the discrete movement of ribosomes and RNA polymerases along their corresponding templates (messenger RNA (mRNA) and DNA). The large size of these systems, which scales with the number of genes, mRNA copies, codons per mRNA, and ribosomes, is responsible for some of the challenges. Additionally, one should be able to describe the dynamics of ribosome exchange between the free ribosome pool and those bound to mRNAs, as well as how mRNA species compete for ribosomes. We developed an efficient algorithm for stochastic simulations that addresses these issues and used it to study the contribution and trade-offs of noise to translation properties (rates, time delays, and rate-limiting steps). The algorithm scales linearly with the number of mRNA copies, which allowed us to study the importance of genome-scale competition between mRNAs for the same ribosomes. We determined that noise is minimized under conditions maximizing the specific synthesis rate. Moreover, sensitivity analysis of the stochastic system revealed the importance of the elongation rate in the resultant noise, whereas the translation initiation rate constant was more closely related to the average protein synthesis rate. We observed significant differences between our results and the noise properties of the most commonly used translation models. Overall, our studies demonstrate that the use of full mechanistic models is essential for the study of noise in translation and transcription.

  3. Front propagation and clustering in the stochastic nonlocal Fisher equation

    NASA Astrophysics Data System (ADS)

    Ganan, Yehuda A.; Kessler, David A.

    2018-04-01

    In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction ranges and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also predicts the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
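The deterministic cutoff approximation described above can be sketched numerically: integrate the nonlocal Fisher equation with a top-hat competition kernel and suppress growth below a critical density u_c (grid size, kernel width, and all rates here are illustrative choices, not the authors' parameters):

```python
import numpy as np

def cutoff_nonlocal_fisher(n=400, dx=0.5, dt=0.05, steps=1000,
                           D=1.0, R=5, u_c=1e-3):
    """Deterministic cutoff model for the nonlocal Fisher equation:
    du/dt = D*u_xx + u*(1 - K*u)*H(u - u_c), with K a normalized
    top-hat kernel of half-width R cells and H the step function."""
    u = np.zeros(n)
    u[: n // 10] = 1.0                               # seed population on the left
    kernel = np.ones(2 * R + 1) / (2 * R + 1)
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = (u[1] - u[0]) / dx**2               # no-flux boundaries
        lap[-1] = (u[-2] - u[-1]) / dx**2
        comp = np.convolve(u, kernel, mode="same")   # nonlocal competition term
        growth = u * (1.0 - comp) * (u > u_c)        # growth cut off below u_c
        u = np.clip(u + dt * (D * lap + growth), 0.0, None)
    return u

u = cutoff_nonlocal_fisher()
front = int(np.where(u > 0.5)[0].max())   # rightmost cell above half-density
```

Tracking `front` over time gives the propagation velocity; clustering behind the front appears for kernels whose Fourier transform changes sign, such as this top-hat.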

  4. Front propagation and clustering in the stochastic nonlocal Fisher equation.

    PubMed

    Ganan, Yehuda A; Kessler, David A

    2018-04-01

    In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction ranges and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also predicts the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.

  5. New window into stochastic gravitational wave background.

    PubMed

    Rotti, Aditya; Souradeep, Tarun

    2012-11-30

    A stochastic gravitational wave background (SGWB) would gravitationally lens the cosmic microwave background (CMB) photons. We correct the results provided in existing literature for modifications to the CMB polarization power spectra due to lensing by gravitational waves. Weak lensing by gravitational waves distorts all four CMB power spectra; however, its effect is most striking in the mixing of power between the E mode and B mode of CMB polarization. This suggests the possibility of using measurements of the CMB angular power spectra to constrain the energy density (Ω(GW)) of the SGWB. Using current data sets (QUAD, WMAP, and ACT), we find that the most stringent constraints on the present Ω(GW) come from measurements of the angular power spectra of CMB temperature anisotropies. In the near future, more stringent bounds on Ω(GW) can be expected with improved upper limits on the B modes of CMB polarization. Any detection of B modes of CMB polarization above the expected signal from large scale structure lensing could be a signal for a SGWB.

  6. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate-summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.

  7. Nonparametric Bayesian inference of the microcanonical stochastic block model

    NASA Astrophysics Data System (ADS)

    Peixoto, Tiago P.

    2017-01-01

    A principled approach to characterize the hidden modular structure of networks is to formulate generative models and then infer their parameters from data. When the desired structure is composed of modules or "communities," a suitable choice for this task is the stochastic block model (SBM), where nodes are divided into groups, and the placement of edges is conditioned on the group memberships. Here, we present a nonparametric Bayesian method to infer the modular structure of empirical networks, including the number of modules and their hierarchical organization. We focus on a microcanonical variant of the SBM, where the structure is imposed via hard constraints, i.e., the generated networks are not allowed to violate the patterns imposed by the model. We show how this simple model variation allows simultaneously for two important improvements over more traditional inference approaches: (1) deeper Bayesian hierarchies, with noninformative priors replaced by sequences of priors and hyperpriors, which not only remove limitations that seriously degrade the inference on large networks but also reveal structures at multiple scales; (2) a very efficient inference algorithm that scales well not only for networks with a large number of nodes and edges but also with an unlimited number of modules. We show also how this approach can be used to sample modular hierarchies from the posterior distribution, as well as to perform model selection. We discuss and analyze the differences between sampling from the posterior and simply finding the single parameter estimate that maximizes it. Furthermore, we expose a direct equivalence between our microcanonical approach and alternative derivations based on the canonical SBM.
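The core likelihood idea behind SBM inference can be sketched for a fixed two-block partition: estimate block-to-block edge densities from a candidate partition and score the partition by its Bernoulli profile log-likelihood. This is a deliberate simplification; the paper's microcanonical, nonparametric method additionally infers the number of groups and a full hierarchy of priors, which this toy score omits.

```python
import numpy as np

def sbm_loglik(adj, labels):
    """Bernoulli SBM log-likelihood of a two-block partition, with block
    densities estimated from the partition itself (profile likelihood)."""
    ll = 0.0
    for r in (0, 1):
        for s in (0, 1):
            mask = np.outer(labels == r, labels == s)
            np.fill_diagonal(mask, False)
            e = adj[mask].sum()                 # edges between blocks r and s
            m = mask.sum()                      # possible node pairs
            if 0 < e < m:                       # e = 0 or e = m contribute 0
                p = e / m
                ll += e * np.log(p) + (m - e) * np.log(1 - p)
    return ll

rng = np.random.default_rng(0)
n = 100
labels = np.repeat([0, 1], n // 2)
prob = np.where(np.equal.outer(labels, labels), 0.3, 0.05)  # planted 2-block SBM
adj = (rng.random((n, n)) < prob).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T                               # symmetric, no self-loops

ll_true = sbm_loglik(adj, labels)               # planted partition scores best
ll_rand = sbm_loglik(adj, rng.permutation(labels))
```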

  8. Evidencing `Tight Bound States' in the Hydrogen Atom:. Empirical Manipulation of Large-Scale XD in Violation of QED

    NASA Astrophysics Data System (ADS)

    Amoroso, Richard L.; Vigier, Jean-Pierre

    2013-09-01

    In this work we extend Vigier's recent theory of `tight bound state' (TBS) physics and propose empirical protocols to test not only for their putative existence, but also to show that their existence, if demonstrated, provides the 1st empirical evidence of string theory, because it occurs in the context of large-scale extra dimensionality (LSXD) cast in a unique M-Theoretic vacuum corresponding to the new Holographic Anthropic Multiverse (HAM) cosmological paradigm. Physicists generally consider spacetime as a stochastic foam containing a zero-point field (ZPF) from which virtual particles, restricted by the quantum uncertainty principle (to the Planck time), wink in and out of existence. According to the extended de Broglie-Bohm-Vigier causal stochastic interpretation of quantum theory, spacetime and the matter embedded within it are created, annihilated and recreated as a virtual locus of reality with a continuous quantum evolution (de Broglie matter waves) governed by a pilot wave - a `super quantum potential' extended in HAM cosmology to be synonymous with a `force of coherence' inherent in the Unified Field, UF. We consider this backcloth to be a covariant polarized vacuum of the (generally ignored by contemporary physicists) Dirac type. We discuss open questions of the physics of point particles (fermionic nilpotent singularities). We propose a new set of experiments to test for TBS in a Dirac covariant polarized vacuum LSXD hyperspace suggestive of a recently tested special case of the Lorentz Transformation put forth by Kowalski and Vigier. These protocols reach far beyond the recent battery of atomic spectral violations of QED performed through NIST.

  9. Stochastic backscatter modelling for the prediction of pollutant removal from an urban street canyon: A large-eddy simulation

    NASA Astrophysics Data System (ADS)

    O'Neill, J. J.; Cai, X.-M.; Kinnersley, R.

    2016-10-01

    The large-eddy simulation (LES) approach has recently exhibited its appealing capability of capturing turbulent processes inside street canyons and the urban boundary layer aloft, and its potential for deriving the bulk parameters adopted in low-cost operational urban dispersion models. However, the thin roof-level shear layer may be under-resolved in most LES set-ups and thus sophisticated subgrid-scale (SGS) parameterisations may be required. In this paper, we consider the important case of pollutant removal from an urban street canyon of unit aspect ratio (i.e. building height equal to street width) with the external flow perpendicular to the street. We show that by employing a stochastic SGS model that explicitly accounts for backscatter (energy transfer from unresolved to resolved scales), the pollutant removal process is better simulated compared with the use of a simpler (fully dissipative) but widely-used SGS model. The backscatter induces additional mixing within the shear layer which acts to increase the rate of pollutant removal from the street canyon, giving better agreement with a recent wind-tunnel experiment. The exchange velocity, an important parameter in many operational models that determines the mass transfer between the urban canopy and the external flow, is predicted to be around 15% larger with the backscatter SGS model; consequently, the steady-state mean pollutant concentration within the street canyon is around 15% lower. A database of exchange velocities for various other urban configurations could be generated and used as improved input for operational street canyon models.

  10. Wavelet-based time series bootstrap model for multidecadal streamflow simulation using climate indicators

    NASA Astrophysics Data System (ADS)

    Erkyihun, Solomon Tassew; Rajagopalan, Balaji; Zagona, Edith; Lall, Upmanu; Nowak, Kenneth

    2016-05-01

    A model to generate stochastic streamflow projections conditioned on quasi-oscillatory climate indices such as the Pacific Decadal Oscillation (PDO) and the Atlantic Multi-decadal Oscillation (AMO) is presented. Recognizing that each climate index has underlying band-limited components that contribute most of the energy of the signals, we first pursue a wavelet decomposition of the signals to identify and reconstruct these features from annually resolved historical data and proxy-based paleoreconstructions of each climate index covering the period from 1650 to 2012. A K-Nearest Neighbor block bootstrap approach is then developed to simulate the total signal of each of these climate index series while preserving its time-frequency structure and marginal distributions. Finally, given the simulated climate signal time series, a K-Nearest Neighbor bootstrap is used to simulate annual streamflow series conditional on the joint state space defined by the simulated climate index for each year. We demonstrate this method by applying it to simulation of streamflow at the Lees Ferry gauge on the Colorado River using indices of two large-scale climate forcings, PDO and AMO, which are known to modulate the Colorado River Basin (CRB) hydrology at multidecadal time scales. Skill in stochastic simulation of multidecadal projections of flow using this approach is demonstrated.
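The final conditional resampling step can be illustrated with a minimal K-Nearest Neighbor bootstrap: given a simulated climate-index value, a flow is resampled from the years whose historical index values are closest, using the commonly chosen 1/rank weights. The data below are synthetic and the conditioning is on a single scalar index, whereas the paper conditions on the joint PDO-AMO state and also block-bootstraps the index series themselves:

```python
import numpy as np

def knn_flow_sample(index_sim, index_hist, flow_hist, k=5, rng=None):
    """KNN bootstrap: sample a historical flow whose climate index is
    close to the simulated index value (weights proportional to 1/rank)."""
    rng = rng or np.random.default_rng(0)
    order = np.argsort(np.abs(index_hist - index_sim))[:k]  # k nearest years
    w = 1.0 / np.arange(1, k + 1)
    w /= w.sum()
    return flow_hist[rng.choice(order, p=w)]

rng = np.random.default_rng(42)
index_hist = rng.normal(size=300)                            # synthetic index
flow_hist = 100.0 + 40.0 * index_hist + rng.normal(scale=5.0, size=300)

# Conditional simulation: high index states should map to high flows.
high = np.mean([knn_flow_sample(2.0, index_hist, flow_hist, rng=rng)
                for _ in range(200)])
low = np.mean([knn_flow_sample(-2.0, index_hist, flow_hist, rng=rng)
               for _ in range(200)])
```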

  11. A real-space stochastic density matrix approach for density functional electronic structure.

    PubMed

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.

  12. Seasonal change of topology and resilience of ecological networks in wetlandscapes

    NASA Astrophysics Data System (ADS)

    Bin, Kim; Park, Jeryang

    2017-04-01

    Wetlands distributed in a landscape provide various ecosystem services including habitat for flora and fauna, hydrologic controls, and biogeochemical processes. Hydrologic regime of each wetland at a given landscape varies by hydro-climatic and geological conditions as well as the bathymetry, forming a certain pattern in the wetland area distribution and spatial organization. However, its large-scale pattern also changes over time as this wetland complex is subject to stochastic hydro-climatic forcing in various temporal scales. Consequently, temporal variation in the spatial structure of wetlands inevitably affects the dispersal ability of species depending on those wetlands as habitat. Here, we numerically show (1) the spatiotemporal variation of wetlandscapes by forcing seasonally changing stochastic rainfall and (2) the corresponding ecological networks which either deterministically or stochastically forming the dispersal ranges. We selected four vernal pool regions with distinct climate conditions in California. The results indicate that the spatial structure of wetlands in a landscape by measuring the wetland area frequency distribution changes by seasonal hydro-climatic condition but eventually recovers to the initial state. However, the corresponding ecological networks, which the structure and function change by the change of distances between wetlands, and measured by degree distribution and network efficiency, may not recover to the initial state especially in the regions with high seasonal dryness index. Moreover, we observed that the changes in both the spatial structure of wetlands in a landscape and the corresponding ecological networks exhibit hysteresis over seasons. Our analysis indicates that the hydrologic and ecological resilience of a wetlandcape may be low in a dry region with seasonal hydro-climatic forcing. 
Implications of these results for modelling ecological networks depending on hydrologic systems especially for conservation purposes are discussed.

  13. Allometric Scaling of the Active Hematopoietic Stem Cell Pool across Mammals

    PubMed Central

    Dingli, David; Pacheco, Jorge M.

    2006-01-01

    Background Many biological processes are characterized by allometric relations of the type Y = Y_0 M^b between an observable Y and body mass M, which pervade multiple levels of organization. In what regards the hematopoietic stem cell pool, there is experimental evidence that the size of the hematopoietic stem cell pool is conserved in mammals. However, demands for blood cell formation vary across mammals and thus the size of the active stem cell compartment could vary across species. Methodology/Principal Findings Here we investigate the allometric scaling of the hematopoietic system in a large group of mammalian species using reticulocyte counts as a marker of the active stem cell pool. Our model predicts that the total number of active stem cells, in an adult mammal, scales with body mass with the exponent ¾. Conclusion/Significance The scaling predicted here provides an intuitive justification of the Hayflick hypothesis and supports the current view of a small active stem cell pool supported by a large, quiescent reserve. The present scaling shows excellent agreement with the available (indirect) data for smaller mammals. The small size of the active stem cell pool enhances the role of stochastic effects in the overall dynamics of the hematopoietic system. PMID:17183646
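An allometric law Y = Y_0 M^b is linear in log-log coordinates, so the exponent can be recovered by ordinary least squares; a sketch with synthetic data at the predicted exponent 3/4 (the constants and noise level are illustrative, not fitted to the paper's reticulocyte data):

```python
import numpy as np

# Allometric law Y = Y0 * M**b is linear in logs: log Y = log Y0 + b log M.
rng = np.random.default_rng(0)
mass = np.logspace(0, 6, 50)                     # body masses spanning 6 decades
y0, b_true = 2.0, 0.75                           # illustrative constants
y = y0 * mass**b_true * rng.lognormal(0.0, 0.1, size=50)  # multiplicative noise

b_fit, log_y0_fit = np.polyfit(np.log(mass), np.log(y), 1)
```

Plotting log Y against log M and checking the residuals is the usual diagnostic that a single power law (rather than, say, two regimes) describes the data.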

  14. Stochastic inflation in phase space: is slow roll a stochastic attractor?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grain, Julien; Vennin, Vincent, E-mail: julien.grain@ias.u-psud.fr, E-mail: vincent.vennin@port.ac.uk

    An appealing feature of inflationary cosmology is the presence of a phase-space attractor, ''slow roll'', which washes out the dependence on initial field velocities. We investigate the robustness of this property under backreaction from quantum fluctuations using the stochastic inflation formalism in the phase-space approach. A Hamiltonian formulation of stochastic inflation is presented, where it is shown that the coarse-graining procedure—where wavelengths smaller than the Hubble radius are integrated out—preserves the canonical structure of free fields. This means that different sets of canonical variables give rise to the same probability distribution which clarifies the literature with respect to this issue. The role played by the quantum-to-classical transition is also analysed and is shown to constrain the coarse-graining scale. In the case of free fields, we find that quantum diffusion is aligned in phase space with the slow-roll direction. This implies that the classical slow-roll attractor is immune to stochastic effects and thus generalises to a stochastic attractor regardless of initial conditions, with a relaxation time at least as short as in the classical system. For non-test fields or for test fields with non-linear self interactions however, quantum diffusion and the classical slow-roll flow are misaligned. We derive a condition on the coarse-graining scale so that observational corrections from this misalignment are negligible at leading order in slow roll.

  15. From Global to Cloud Resolving Scale: Experiments with a Scale- and Aerosol-Aware Physics Package and Impact on Tracer Transport

    NASA Astrophysics Data System (ADS)

    Grell, G. A.; Freitas, S. R.; Olson, J.; Bela, M.

    2017-12-01

    We will start by providing a summary of the latest cumulus parameterization modeling efforts at NOAA's Earth System Research Laboratory (ESRL), on both regional and global scales. The physics package includes a scale-aware parameterization of subgrid cloudiness feedback to radiation (coupled PBL, microphysics, radiation, shallow and congestus type convection), the stochastic Grell-Freitas (GF) scale- and aerosol-aware convective parameterization, and an aerosol aware microphysics package. GF is based on a stochastic approach originally implemented by Grell and Devenyi (2002) and described in more detail in Grell and Freitas (2014, ACP). It was expanded to include PDF's for vertical mass flux, as well as modifications to improve the diurnal cycle. This physics package will be used on different scales, spanning global to cloud resolving, to look at the impact on scalar transport and numerical weather prediction.

  16. Fast and robust estimation of spectro-temporal receptive fields using stochastic approximations.

    PubMed

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn

    2015-05-15

    The receptive field (RF) represents the signal preferences of sensory neurons and is the primary analysis method for understanding sensory coding. While it is essential to estimate a neuron's RF, finding numerical solutions to increasingly complex RF models can become computationally intensive, in particular for high-dimensional stimuli or when many neurons are involved. Here we propose an optimization scheme based on stochastic approximations that facilitate this task. The basic idea is to derive solutions on a random subset rather than computing the full solution on the available data set. To test this, we applied different optimization schemes based on stochastic gradient descent (SGD) to both the generalized linear model (GLM) and a recently developed classification-based RF estimation approach. Using simulated and recorded responses, we demonstrate that RF parameter optimization based on state-of-the-art SGD algorithms produces robust estimates of the spectro-temporal receptive field (STRF). Results on recordings from the auditory midbrain demonstrate that stochastic approximations preserve both predictive power and tuning properties of STRFs. A correlation of 0.93 with the STRF derived from the full solution may be obtained in less than 10% of the full solution's estimation time. We also present an on-line algorithm that allows simultaneous monitoring of STRF properties of more than 30 neurons on a single computer. The proposed approach may not only prove helpful for large-scale recordings but also provides a more comprehensive characterization of neural tuning in experiments than standard tuning curves.
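The random-subset idea can be sketched with mini-batch SGD on a plain linear receptive-field model (ordinary least squares rather than the paper's GLM or classification-based estimators; the dimensions, learning rate, and toy "STRF" below are illustrative assumptions):

```python
import numpy as np

def sgd_rf(stim, resp, lr=0.01, batch=50, epochs=20, seed=0):
    """Mini-batch SGD for a linear receptive-field model: each step uses a
    random subset of trials rather than the full data set."""
    rng = np.random.default_rng(seed)
    w = np.zeros(stim.shape[1])
    for _ in range(epochs):
        for _ in range(len(stim) // batch):
            idx = rng.integers(0, len(stim), batch)  # random mini-batch
            x, y = stim[idx], resp[idx]
            grad = x.T @ (x @ w - y) / batch         # least-squares gradient
            w -= lr * grad
    return w

rng = np.random.default_rng(1)
w_true = np.sin(np.linspace(0, 3 * np.pi, 40))       # toy "STRF" filter
stim = rng.normal(size=(5000, 40))
resp = stim @ w_true + rng.normal(scale=0.5, size=5000)

w_hat = sgd_rf(stim, resp)
corr = np.corrcoef(w_hat, w_true)[0, 1]              # recovery quality
```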

  17. Calculation of stochastic broadening due to low mn magnetic perturbation in the simple map in action-angle coordinates

    NASA Astrophysics Data System (ADS)

    Hinton, Courtney; Punjabi, Alkesh; Ali, Halima

    2009-11-01

    The simple map is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007)]. Recently, the action-angle coordinates for the simple map were analytically calculated, and the simple map was constructed in action-angle coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)]. Action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [op. cit.]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to the low mn magnetic perturbation with mode numbers m=1 and n=±1. The width of the stochastic layer near the X-point scales as the 0.63 power of the amplitude δ of the low mn perturbation, toroidal flux loss scales as the 1.16 power of δ, and poloidal flux loss scales as the 1.26 power of δ. The scaling of the width deviates from the Boozer-Rechester scaling by 26% [A. Boozer and A. Rechester, Phys. Fluids 21, 682 (1978)]. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793.

  18. Epidemic extinction paths in complex networks

    NASA Astrophysics Data System (ADS)

    Hindes, Jason; Schwartz, Ira B.

    2017-05-01

    We study the extinction of long-lived epidemics on finite complex networks induced by intrinsic noise. Applying analytical techniques to the stochastic susceptible-infected-susceptible model, we predict the distribution of large fluctuations, the most probable or optimal path through a network that leads to a disease-free state from an endemic state, and the average extinction time in general configurations. Our predictions agree with Monte Carlo simulations on several networks, including synthetic weighted and degree-distributed networks with degree correlations, and an empirical high school contact network. In addition, our approach quantifies characteristic scaling patterns for the optimal path and distribution of large fluctuations, both near and away from the epidemic threshold, in networks with heterogeneous eigenvector centrality and degree distributions.
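The noise-induced extinction being analyzed can be reproduced in the simplest setting with a Gillespie simulation of a well-mixed SIS model started at the endemic state. This is a drastic simplification of the paper's network setting (it corresponds to a complete graph), and N, R0, and mu below are illustrative:

```python
import math
import random

def sis_extinction_time(n=30, r0=2.0, mu=1.0, seed=0):
    """Gillespie simulation of well-mixed SIS until extinction.
    Infection propensity beta*I*(N-I)/N with beta = R0*mu, recovery
    propensity mu*I; starts from the endemic fixed point."""
    rng = random.Random(seed)
    beta = r0 * mu
    i = int(n * (1 - 1 / r0))                  # endemic state I* = N(1 - 1/R0)
    t = 0.0
    while i > 0:                               # run until the disease-free state
        a_inf = beta * i * (n - i) / n
        a_rec = mu * i
        a_tot = a_inf + a_rec
        t += -math.log(rng.random()) / a_tot
        if rng.random() * a_tot < a_inf:
            i += 1
        else:
            i -= 1
    return t

# Extinction is a rare fluctuation: times are roughly exponentially
# distributed with a mean that grows exponentially in N.
times = [sis_extinction_time(seed=s) for s in range(20)]
mean_t = sum(times) / len(times)
```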

  19. Epidemic extinction paths in complex networks.

    PubMed

    Hindes, Jason; Schwartz, Ira B

    2017-05-01

    We study the extinction of long-lived epidemics on finite complex networks induced by intrinsic noise. Applying analytical techniques to the stochastic susceptible-infected-susceptible model, we predict the distribution of large fluctuations, the most probable or optimal path through a network that leads to a disease-free state from an endemic state, and the average extinction time in general configurations. Our predictions agree with Monte Carlo simulations on several networks, including synthetic weighted and degree-distributed networks with degree correlations, and an empirical high school contact network. In addition, our approach quantifies characteristic scaling patterns for the optimal path and distribution of large fluctuations, both near and away from the epidemic threshold, in networks with heterogeneous eigenvector centrality and degree distributions.

  20. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100

    2015-01-15

    In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 and less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with the region of high energy dissipation rate.
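In Eswaran-Pope-type schemes, each forced Fourier coefficient follows an Ornstein-Uhlenbeck process whose correlation time is the forcing time scale. A one-dimensional sketch showing how the time-scale parameter sets the autocorrelation of the forcing signal (scalar real-valued process and parameters are illustrative simplifications):

```python
import numpy as np

def ou_forcing(t_f=1.0, sigma=1.0, dt=0.01, steps=200_000, seed=0):
    """Ornstein-Uhlenbeck process: correlation time t_f plays the role of
    the forcing time scale, sigma**2 is the stationary variance.
    Uses the exact one-step update, so any dt is admissible."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-dt / t_f)                         # one-step autocorrelation
    noise = sigma * np.sqrt(1 - rho**2) * rng.normal(size=steps)
    a = np.empty(steps)
    a[0] = 0.0
    for i in range(1, steps):
        a[i] = rho * a[i - 1] + noise[i]
    return a

a = ou_forcing()
var = a.var()                                       # should approach sigma**2
lag = 100                                           # 100 steps = one t_f
acorr = np.corrcoef(a[:-lag], a[lag:])[0, 1]        # should be near exp(-1)
```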

  1. Diffusion approximations to the chemical master equation only have a consistent stochastic thermodynamics at chemical equilibrium

    NASA Astrophysics Data System (ADS)

    Horowitz, Jordan M.

    2015-07-01

    The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.
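The diffusion approximation itself is easy to sketch: an Euler-Maruyama integration of the chemical Langevin equation for a birth-death reaction pair, whose stationary mean should match the master-equation value k/g (parameters are illustrative; a separate noise term per reaction channel follows the standard CLE construction):

```python
import numpy as np

def cle_birth_death(k=100.0, g=1.0, x0=100.0, dt=0.001, steps=50_000, seed=0):
    """Euler-Maruyama for the chemical Langevin equation of birth (rate k)
    and death (rate g*x):
        dx = (k - g*x) dt + sqrt(k) dW1 - sqrt(g*x) dW2.
    Valid when populations are large, as noted in the abstract."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    x[0] = x0
    for i in range(1, steps):
        xi = max(x[i - 1], 0.0)                  # guard the square root
        drift = k - g * xi
        noise = np.sqrt(k) * rng.normal() - np.sqrt(g * xi) * rng.normal()
        x[i] = xi + drift * dt + np.sqrt(dt) * noise
    return x

x = cle_birth_death()
mean_x = x[5000:].mean()   # discard transient; CME stationary mean is k/g = 100
```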

  2. Diffusion approximations to the chemical master equation only have a consistent stochastic thermodynamics at chemical equilibrium.

    PubMed

    Horowitz, Jordan M

    2015-07-28

    The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.

  3. DG-IMEX Stochastic Galerkin Schemes for Linear Transport Equation with Random Inputs and Diffusive Scalings

    DOE PAGES

    Chen, Zheng; Liu, Liu; Mu, Lin

    2017-05-03

    In this paper, we consider the linear transport equation under diffusive scaling and with random inputs. The method is based on the generalized polynomial chaos approach in the stochastic Galerkin framework. Several theoretical aspects will be addressed. Additionally, uniform numerical stability with respect to the Knudsen number ϵ and a uniform-in-ϵ error estimate are established. For temporal and spatial discretizations, we apply the implicit–explicit scheme under the micro–macro decomposition framework and the discontinuous Galerkin method, as proposed in Jang et al. (SIAM J Numer Anal 52:2048–2072, 2014) for the deterministic problem. Lastly, we provide a rigorous proof of the stochastic asymptotic-preserving (sAP) property. Extensive numerical experiments that validate the accuracy and sAP of the method are conducted.
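The stochastic Galerkin mechanics can be sketched on the simplest random-input problem, du/dt = -(k0 + k1·ξ)u with ξ uniform on [-1,1]: projecting onto Legendre polynomials couples neighboring gPC modes through the three-term recurrence ξP_j = ((j+1)P_{j+1} + jP_{j-1})/(2j+1). This is a toy scalar ODE, not the paper's transport equation, and the parameters are illustrative:

```python
import numpy as np

def galerkin_mean(k0=1.0, k1=0.5, t_end=1.0, n_modes=8, dt=1e-4):
    """Stochastic Galerkin (Legendre gPC) for du/dt = -(k0 + k1*xi)*u,
    u(0)=1, xi ~ U(-1,1). Returns the mean of u(t_end), which is the
    coefficient of P_0 in the truncated expansion."""
    m = np.zeros((n_modes, n_modes))      # projection of xi * P_j onto P_i
    for j in range(n_modes):
        if j + 1 < n_modes:
            m[j + 1, j] = (j + 1) / (2 * j + 1)
        if j - 1 >= 0:
            m[j - 1, j] = j / (2 * j + 1)
    a = -(k0 * np.eye(n_modes) + k1 * m)  # coupled linear Galerkin system
    u = np.zeros(n_modes)
    u[0] = 1.0                            # deterministic initial condition
    for _ in range(int(t_end / dt)):
        u = u + dt * (a @ u)              # forward Euler time stepping
    return u[0]

mean_gpc = galerkin_mean()
# Exact mean: E[exp(-(k0 + k1*xi)*t)] = exp(-k0*t) * sinh(k1*t)/(k1*t) at t=1.
mean_exact = np.exp(-1.0) * np.sinh(0.5) / 0.5
```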

  4. Single cell Hi-C reveals cell-to-cell variability in chromosome structure

    PubMed Central

    Schoenfelder, Stefan; Yaffe, Eitan; Dean, Wendy; Laue, Ernest D.; Tanay, Amos; Fraser, Peter

    2013-01-01

    Large-scale chromosome structure and spatial nuclear arrangement have been linked to control of gene expression and DNA replication and repair. Genomic techniques based on chromosome conformation capture assess contacts for millions of loci simultaneously, but do so by averaging chromosome conformations from millions of nuclei. Here we introduce single cell Hi-C, combined with genome-wide statistical analysis and structural modeling of single copy X chromosomes, to show that individual chromosomes maintain domain organisation at the megabase scale, but show variable cell-to-cell chromosome territory structures at larger scales. Despite this structural stochasticity, localisation of active gene domains to boundaries of territories is a hallmark of chromosomal conformation. Single cell Hi-C data bridge current gaps between genomics and microscopy studies of chromosomes, demonstrating how modular organisation underlies dynamic chromosome structure, and how this structure is probabilistically linked with genome activity patterns. PMID:24067610

  5. Hybrid stochastic simulations of intracellular reaction-diffusion systems.

    PubMed

    Kalantzis, Georgios

    2009-06-01

    With the observation that stochasticity is important in biological systems, stochastic treatments of chemical kinetics have begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous-time rate-equation models are computationally efficient but fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency using the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is thus preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca(2+) and NMDA receptors.
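    A minimal sketch of the partitioning idea (names, thresholds and rate constants are illustrative, not the authors' code): propensities above a cutoff would be handed to a deterministic rate-equation solver, while the remainder are simulated with Gillespie's exact algorithm, shown here for a birth-death process:

```python
import random

# Illustrative partitioning rule: channels firing faster than `threshold`
# events per unit time would be treated deterministically.
def partition(propensities, threshold=1000.0):
    return ["deterministic" if a > threshold else "stochastic"
            for a in propensities]

# Gillespie's exact SSA for the birth-death process
# 0 -> X (rate k), X -> 0 (rate g*x); parameter values are made up.
def gillespie_birth_death(k=20.0, g=0.2, x0=0, t_end=200.0, seed=7):
    rng = random.Random(seed)
    t, x, samples = 0.0, x0, []
    while t < t_end:
        a_birth, a_death = k, g * x        # reaction propensities
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)      # exponential waiting time to next event
        x += 1 if rng.random() * a_total < a_birth else -1
        samples.append(x)
    return sum(samples) / len(samples)

mean_x = gillespie_birth_death()  # steady-state mean approaches k/g = 100
labels = partition([20.0, 2000.0])
```

    In the full hybrid scheme, the high-propensity channel would be integrated as an ODE between stochastic events, which is where the computational saving comes from.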

  6. Stochastic von Bertalanffy models, with applications to fish recruitment.

    PubMed

    Lv, Qiming; Pitchford, Jonathan W

    2007-02-21

    We consider three individual-based models describing growth in stochastic environments. Stochastic differential equations (SDEs) with identical von Bertalanffy deterministic parts are formulated, with a stochastic term which decreases, remains constant, or increases with organism size, respectively. Probability density functions for hitting times are evaluated in the context of fish growth and mortality. Solving the hitting time problem analytically or numerically shows that stochasticity can have a large positive impact on fish recruitment probability. It is also demonstrated that the observed mean growth rate of surviving individuals always exceeds the mean population growth rate, which itself exceeds the growth rate of the equivalent deterministic model. The consequences of these results in more general biological situations are discussed.
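    The positive effect of noise on recruitment is easy to reproduce in a sketch (all parameter values invented): an Euler-Maruyama integration of a von Bertalanffy SDE, dL = r(Linf − L) dt + σ dW, run until the length first hits a recruitment threshold set above the asymptotic size. The deterministic model can never recruit, while the stochastic one does with substantial probability:

```python
import math
import random

# Euler-Maruyama sketch of a stochastic von Bertalanffy model
# (parameter values invented):  dL = r*(Linf - L) dt + sigma dW,
# with recruitment when length first reaches L_rec > Linf.
def recruits(sigma, r=1.0, Linf=10.0, L0=1.0, L_rec=11.0,
             dt=0.01, t_end=50.0, seed=0):
    rng = random.Random(seed)
    L, t = L0, 0.0
    while t < t_end:
        L += r * (Linf - L) * dt + sigma * rng.gauss(0.0, math.sqrt(dt))
        if L >= L_rec:
            return True        # hit the recruitment size
        t += dt
    return False

# Deterministic growth (sigma = 0) saturates at Linf and can never reach
# L_rec; noise gives a substantial hitting probability.
p_det = recruits(sigma=0.0)
p_noise = sum(recruits(sigma=2.0, seed=s) for s in range(200)) / 200
```

    This is the additive-noise variant; the record's three models differ in how σ scales with organism size, which changes the hitting-time distribution but not the qualitative conclusion.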

  7. Addressing model error through atmospheric stochastic physical parametrizations: impact on the coupled ECMWF seasonal forecasting system

    PubMed Central

    Weisheimer, Antje; Corti, Susanna; Palmer, Tim; Vitart, Frederic

    2014-01-01

    The finite resolution of general circulation models of the coupled atmosphere–ocean system and the effects of sub-grid-scale variability present a major source of uncertainty in model simulations on all time scales. The European Centre for Medium-Range Weather Forecasts has been at the forefront of developing new approaches to account for these uncertainties. In particular, the stochastically perturbed physical tendency scheme and the stochastically perturbed backscatter algorithm for the atmosphere are now used routinely for global numerical weather prediction. The European Centre also performs long-range predictions of the coupled atmosphere–ocean climate system in operational forecast mode, and the latest seasonal forecasting system—System 4—has the stochastically perturbed tendency and backscatter schemes implemented in a similar way to that for the medium-range weather forecasts. Here, we present results of the impact of these schemes in System 4 by contrasting the operational performance on seasonal time scales during the retrospective forecast period 1981–2010 with comparable simulations that do not account for the representation of model uncertainty. We find that the stochastic tendency perturbation schemes helped to reduce excessively strong convective activity especially over the Maritime Continent and the tropical Western Pacific, leading to reduced biases of the outgoing longwave radiation (OLR), cloud cover, precipitation and near-surface winds. Positive impact was also found for the statistics of the Madden–Julian oscillation (MJO), showing an increase in the frequencies and amplitudes of MJO events. Further, the errors of El Niño southern oscillation forecasts become smaller, whereas increases in ensemble spread lead to a better calibrated system if the stochastic tendency is activated. The backscatter scheme has overall neutral impact. 
Finally, evidence for noise-activated regime transitions has been found in a cluster analysis of mid-latitude circulation regimes over the Pacific–North America region. PMID:24842026

  8. Addressing model error through atmospheric stochastic physical parametrizations: impact on the coupled ECMWF seasonal forecasting system.

    PubMed

    Weisheimer, Antje; Corti, Susanna; Palmer, Tim; Vitart, Frederic

    2014-06-28

    The finite resolution of general circulation models of the coupled atmosphere-ocean system and the effects of sub-grid-scale variability present a major source of uncertainty in model simulations on all time scales. The European Centre for Medium-Range Weather Forecasts has been at the forefront of developing new approaches to account for these uncertainties. In particular, the stochastically perturbed physical tendency scheme and the stochastically perturbed backscatter algorithm for the atmosphere are now used routinely for global numerical weather prediction. The European Centre also performs long-range predictions of the coupled atmosphere-ocean climate system in operational forecast mode, and the latest seasonal forecasting system--System 4--has the stochastically perturbed tendency and backscatter schemes implemented in a similar way to that for the medium-range weather forecasts. Here, we present results of the impact of these schemes in System 4 by contrasting the operational performance on seasonal time scales during the retrospective forecast period 1981-2010 with comparable simulations that do not account for the representation of model uncertainty. We find that the stochastic tendency perturbation schemes helped to reduce excessively strong convective activity especially over the Maritime Continent and the tropical Western Pacific, leading to reduced biases of the outgoing longwave radiation (OLR), cloud cover, precipitation and near-surface winds. Positive impact was also found for the statistics of the Madden-Julian oscillation (MJO), showing an increase in the frequencies and amplitudes of MJO events. Further, the errors of El Niño southern oscillation forecasts become smaller, whereas increases in ensemble spread lead to a better calibrated system if the stochastic tendency is activated. The backscatter scheme has overall neutral impact. 
Finally, evidence for noise-activated regime transitions has been found in a cluster analysis of mid-latitude circulation regimes over the Pacific-North America region.

  9. Stochastic multi-scale models of competition within heterogeneous cellular populations: Simulation methods and mean-field analysis.

    PubMed

    Cruz, Roberto de la; Guerrero, Pilar; Spill, Fabian; Alarcón, Tomás

    2016-10-21

    We propose a modelling framework to analyse the stochastic behaviour of heterogeneous, multi-scale cellular populations. We illustrate our methodology with a particular example in which we study a population with an oxygen-regulated proliferation rate. Our formulation is based on an age-dependent stochastic process. Cells within the population are characterised by their age (i.e. time elapsed since they were born). The age-dependent (oxygen-regulated) birth rate is given by a stochastic model of oxygen-dependent cell cycle progression. Once the birth rate is determined, we formulate an age-dependent birth-and-death process, which dictates the time evolution of the cell population. The population is under a feedback loop which controls its steady state size (carrying capacity): cells consume oxygen which in turn fuels cell proliferation. We show that our stochastic model of cell cycle progression allows for heterogeneity within the cell population induced by stochastic effects. Such heterogeneous behaviour is reflected in variations in the proliferation rate. Within this set-up, we have established three main results. First, we have shown that the age to the G1/S transition, which essentially determines the birth rate, exhibits a remarkably simple scaling behaviour. Besides the fact that this simple behaviour emerges from a rather complex model, this allows for a huge simplification of our numerical methodology. A further result is the observation that heterogeneous populations undergo an internal process of quasi-neutral competition. Finally, we investigated the effects of cell-cycle-phase dependent therapies (such as radiation therapy) on heterogeneous populations. In particular, we have studied the case in which the population contains a quiescent sub-population. Our mean-field analysis and numerical simulations confirm that, if the survival fraction of the therapy is too high, rescue of the quiescent population occurs. 
This gives rise to the emergence of resistance to therapy, since the rescued population is less sensitive to it. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Predicting viscous-range velocity gradient dynamics in large-eddy simulations of turbulence

    NASA Astrophysics Data System (ADS)

    Johnson, Perry; Meneveau, Charles

    2017-11-01

    The details of small-scale turbulence are not directly accessible in large-eddy simulations (LES), posing a modeling challenge because many important micro-physical processes depend strongly on the dynamics of turbulence in the viscous range. Here, we introduce a method for coupling existing stochastic models for the Lagrangian evolution of the velocity gradient tensor with LES to simulate unresolved dynamics. The proposed approach is implemented in LES of turbulent channel flow and detailed comparisons with DNS are carried out. An application to modeling the fate of deformable, small (sub-Kolmogorov) droplets at negligible Stokes number and low volume fraction with one-way coupling is carried out. These results illustrate the ability of the proposed model to predict the influence of small scale turbulence on droplet micro-physics in the context of LES. This research was made possible by a graduate Fellowship from the National Science Foundation and by a Grant from The Gulf of Mexico Research Initiative.

  11. Chaotic gas turbine subject to augmented Lorenz equations.

    PubMed

    Cho, Kenichiro; Miyano, Takaya; Toriyama, Toshiyuki

    2012-09-01

    Inspired by the chaotic waterwheel invented by Malkus and Howard about 40 years ago, we have developed a gas turbine that randomly switches the sense of rotation between clockwise and counterclockwise. The nondimensionalized expressions for the equations of motion of our turbine are represented as a starlike network of many Lorenz subsystems sharing the angular velocity of the turbine rotor as the central node, referred to as augmented Lorenz equations. We show qualitative similarities between the statistical properties of the angular velocity of the turbine rotor and the velocity field of large-scale wind in turbulent Rayleigh-Bénard convection reported by Sreenivasan et al. [Phys. Rev. E 65, 056306 (2002)]. Our equations of motion achieve the random reversal of the turbine rotor through the stochastic resonance of the angular velocity in a double-well potential and the force applied by rapidly oscillating fields. These results suggest that the augmented Lorenz model is applicable as a dynamical model for the random reversal of turbulent large-scale wind through cessation.
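    The waterwheel/turbine analogy can be illustrated with the classic (un-augmented) Lorenz system, in which the variable x plays the role of the rotor's angular velocity and chaotic lobe switches correspond to reversals of the sense of rotation (standard parameter values; this is a minimal sketch, not the paper's full star network of coupled Lorenz subsystems):

```python
# Classic Lorenz system integrated with a simple 4th-order Runge-Kutta step;
# sign changes of x stand in for the turbine's random rotation reversals.
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state, reversals, prev_sign = (1.0, 1.0, 1.0), 0, 1
for _ in range(20_000):              # 200 time units at dt = 0.01
    state = lorenz_step(state, 0.01)
    sign = 1 if state[0] > 0 else -1
    if sign != prev_sign:
        reversals += 1               # one reversal of the sense of rotation
    prev_sign = sign
```

    In the augmented model, many such subsystems share the central angular-velocity node, and the reversal statistics acquire the stochastic-resonance character described above.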

  12. Hybrid approaches for multiple-species stochastic reaction-diffusion models

    NASA Astrophysics Data System (ADS)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
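    A one-dimensional toy version of the coupling (illustrative only, not the authors' scheme in full) makes the conservation property concrete: discrete particles random-walk on the left half of a lattice, a finite-difference diffusion equation evolves the right half, and every interface transfer moves an explicit whole number of particles, so total mass is conserved exactly:

```python
import random

# Left sites hold integer particle counts (stochastic domain); right sites hold
# a continuous concentration (discretised PDE domain).  All numbers invented.
def hybrid_step(counts, u, d=0.2, dt=1.0, rng=None):
    rng = rng or random.Random()
    p = d * dt                     # per-particle jump probability per direction
    new_counts = counts[:]
    # stochastic domain: each particle jumps left or right independently
    for i, c in enumerate(counts):
        for _ in range(c):
            r = rng.random()
            if r < p and i > 0:                    # jump left (reflect at wall)
                new_counts[i] -= 1; new_counts[i - 1] += 1
            elif p <= r < 2 * p:
                if i + 1 < len(counts):            # jump right within domain
                    new_counts[i] -= 1; new_counts[i + 1] += 1
                else:                              # cross interface into PDE
                    new_counts[i] -= 1; u[0] += 1.0
    # deterministic domain: explicit finite-difference diffusion (zero flux)
    lap = [(u[j - 1] if j > 0 else u[j]) - 2 * u[j]
           + (u[j + 1] if j + 1 < len(u) else u[j]) for j in range(len(u))]
    new_u = [u[j] + p * lap[j] for j in range(len(u))]
    # PDE -> stochastic flux, rounded to a whole number of particles
    f = p * u[0]
    n = min(int(f) + (1 if rng.random() < f - int(f) else 0), int(new_u[0]))
    new_u[0] -= n
    new_counts[-1] += n
    return new_counts, new_u

rng = random.Random(3)
counts, u = [50] * 10, [0.0] * 10   # all mass starts in the stochastic half
total_before = sum(counts) + sum(u)
for _ in range(100):
    counts, u = hybrid_step(counts, u, rng=rng)
total_after = sum(counts) + sum(u)  # conserved exactly (up to float rounding)
```

    The full scheme additionally lets the interface position adapt so that it always sits where the mean-field description is accurate, which keeps the flux error small.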

  13. Hybrid approaches for multiple-species stochastic reaction-diffusion models.

    PubMed

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K; Byrne, Helen

    2015-10-15

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  14. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    PubMed Central

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-01-01

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. PMID:26478601

  15. Learning, climate and the evolution of cultural capacity.

    PubMed

    Whitehead, Hal

    2007-03-21

    Patterns of environmental variation influence the utility, and thus evolution, of different learning strategies. I use stochastic, individual-based evolutionary models to assess the relative advantages of 15 different learning strategies (genetic determination, individual learning, vertical social learning, horizontal/oblique social learning, and contingent combinations of these) when competing in variable environments described by 1/f noise. When environmental variation has little effect on fitness, then genetic determinism persists. When environmental variation is large and equal over all time-scales ("white noise") then individual learning is adaptive. Social learning is advantageous in "red noise" environments when variation over long time-scales is large. Climatic variability increases with time-scale, so that short-lived organisms should be able to rely largely on genetic determination. Thermal climates are usually insufficiently red for social learning to be advantageous for species whose fitness is strongly determined by temperature. In contrast, population trajectories of many species, especially large mammals and aquatic carnivores, are sufficiently red to promote social learning in their predators. The ocean environment is generally redder than that on land. Thus, while individual learning should be adaptive for many longer-lived organisms, social learning will often be found in those dependent on the populations of other species, especially if they are marine. This provides a potential explanation for the evolved prevalence of social learning, and culture, in humans and cetaceans.
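    The environmental forcing used in such models can be sketched directly: 1/f^β noise is synthesized spectrally by giving each Fourier mode an amplitude proportional to f^(−β/2) and a random phase (a minimal illustration; the paper's simulations are individual-based and far richer). Red noise (β = 2) retains most of its variance at long time-scales, which is what makes social learning pay off:

```python
import math
import random

# Spectral synthesis of 1/f^beta noise: mode k gets amplitude k**(-beta/2)
# and a uniformly random phase (beta = 0 gives white noise, beta = 2 red).
def colored_noise(n=256, beta=2.0, seed=0):
    rng = random.Random(seed)
    modes = [(k, k ** (-beta / 2.0), rng.uniform(0.0, 2.0 * math.pi))
             for k in range(1, n // 2)]
    return [sum(a * math.cos(2.0 * math.pi * k * t / n + ph)
                for k, a, ph in modes) for t in range(n)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def block_means(xs, b=16):
    return [sum(xs[i:i + b]) / b for i in range(0, len(xs), b)]

red, white = colored_noise(beta=2.0), colored_noise(beta=0.0)
# fraction of variance surviving coarse-graining over 16-step blocks:
ratio_red = variance(block_means(red)) / variance(red)
ratio_white = variance(block_means(white)) / variance(white)
# red noise keeps most of its variance at long time-scales; white noise
# loses almost all of it under averaging
```

    The block-mean comparison mirrors the paper's argument: strategies that average over past experience (social learning) only help when variance survives at time-scales longer than a generation.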

  16. Shifts in Summertime Precipitation Accumulation Distributions over the US

    NASA Astrophysics Data System (ADS)

    Martinez-Villalobos, C.; Neelin, J. D.

    2016-12-01

    Precipitation accumulation, i.e., the amount of precipitation integrated over the course of an event, is a variable with important physical and societal implications. Previous observational studies show that accumulation distributions have a characteristic shape, with an approximately power-law decrease at first, followed by a sharp drop at a characteristic large-event cutoff scale. This cutoff scale is important as it limits the biggest accumulation events. Stochastic prototypes show that the resulting distributions, and importantly the large-event cutoff scale, can be understood as a result of the interplay between moisture loss by precipitation and changes in moisture sinks/sources due to fluctuations in moisture divergence over the course of a precipitation event. The strength of this fluctuating moisture sink/source term is expected to increase under global warming, with both theory and climate model simulations predicting a concomitant increase in the large-event cutoff scale. This cutoff-scale increase has important consequences, as it implies an approximately exponential increase for the largest accumulation events. Given its importance, in this study we characterize and track changes in the distribution of precipitation event accumulations over the contiguous US. Accumulation distributions are calculated using hourly precipitation data from 1700 stations, covering the 1974-2013 period over May-October. The resulting distributions largely follow the aforementioned shape, with individual cutoff scales depending on the local climate. An increase in the large-event cutoff scale over this period is observed over several regions of the US, most notably over the eastern third of the country. In agreement with the increase in the cutoff, almost exponential increases in the highest accumulation percentiles occur over these regions, with increases in the 99.9th percentile in the Northeast of 70%, for example. 
The relationship to changes in daily precipitation that have previously been noted and to changes in the moisture budget over this period are examined.
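    The stochastic-prototype mechanism can be caricatured in a few lines (all parameters invented, and this is only one plausible reading of such prototypes): during an event the column moisture anomaly performs a random walk with a mean sink μ, the event ends on first return to zero, and the accumulation is taken proportional to the event duration. Weakening the effective sink, the warming analogue of a stronger fluctuating source term, moves the cutoff scale out and fattens the tail:

```python
import random

# Prototype sketch: +/-1 moisture random walk with mean sink mu; the event
# accumulation is proportional to the first-return time to zero.  For a
# biased walk the mean event size scales like 1/mu, so a weaker sink
# stretches the large-event cutoff.
def mean_accumulation(mu, n_events=2000, max_steps=100_000, seed=11):
    rng = random.Random(seed)
    total = 0
    for _ in range(n_events):
        moisture, steps = 1, 0
        while moisture > 0 and steps < max_steps:
            moisture += 1 if rng.random() < 0.5 * (1.0 - mu) else -1
            steps += 1
        total += steps
    return total / n_events

acc_strong_sink = mean_accumulation(mu=0.2)   # mean event size ~ 1/mu = 5
acc_weak_sink = mean_accumulation(mu=0.05)    # cutoff moves out: mean ~ 20
```

    The event-size distribution of such a walk is a power law with an exponential cutoff, the same qualitative shape described for the observed accumulation distributions.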

  17. Shifts in Summertime Precipitation Accumulation Distributions over the US

    NASA Astrophysics Data System (ADS)

    Martinez-Villalobos, C.; Neelin, J. D.

    2017-12-01

    Precipitation accumulation, i.e., the amount of precipitation integrated over the course of an event, is a variable with important physical and societal implications. Previous observational studies show that accumulation distributions have a characteristic shape, with an approximately power-law decrease at first, followed by a sharp drop at a characteristic large-event cutoff scale. This cutoff scale is important as it limits the biggest accumulation events. Stochastic prototypes show that the resulting distributions, and importantly the large-event cutoff scale, can be understood as a result of the interplay between moisture loss by precipitation and changes in moisture sinks/sources due to fluctuations in moisture divergence over the course of a precipitation event. The strength of this fluctuating moisture sink/source term is expected to increase under global warming, with both theory and climate model simulations predicting a concomitant increase in the large-event cutoff scale. This cutoff-scale increase has important consequences, as it implies an approximately exponential increase for the largest accumulation events. Given its importance, in this study we characterize and track changes in the distribution of precipitation event accumulations over the contiguous US. Accumulation distributions are calculated using hourly precipitation data from 1700 stations, covering the 1974-2013 period over May-October. The resulting distributions largely follow the aforementioned shape, with individual cutoff scales depending on the local climate. An increase in the large-event cutoff scale over this period is observed over several regions of the US, most notably over the eastern third of the country. In agreement with the increase in the cutoff, almost exponential increases in the highest accumulation percentiles occur over these regions, with increases in the 99.9th percentile in the Northeast of 70%, for example. 
The relationship to changes in daily precipitation that have previously been noted and to changes in the moisture budget over this period are examined.

  18. Dynamics of Topological Excitations in a Model Quantum Spin Ice

    NASA Astrophysics Data System (ADS)

    Huang, Chun-Jiong; Deng, Youjin; Wan, Yuan; Meng, Zi Yang

    2018-04-01

    We study the quantum spin dynamics of a frustrated XXZ model on a pyrochlore lattice by using large-scale quantum Monte Carlo simulation and stochastic analytic continuation. In the low-temperature quantum spin ice regime, we observe signatures of coherent photon and spinon excitations in the dynamic spin structure factor. As the temperature rises to the classical spin ice regime, the photon disappears from the dynamic spin structure factor, whereas the dynamics of the spinon remain coherent in a broad temperature window. Our results provide experimentally relevant, quantitative information for the ongoing pursuit of quantum spin ice materials.

  19. Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeff Linderoth

    2011-11-06

    The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The continuation of the work focused on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.

  20. Grand unification scale primordial black holes: consequences and constraints.

    PubMed

    Anantua, Richard; Easther, Richard; Giblin, John T

    2009-09-11

    A population of very light primordial black holes which evaporate before nucleosynthesis begins is unconstrained unless the decaying black holes leave stable relics. We show that gravitons Hawking radiated from these black holes would source a substantial stochastic background of high-frequency gravitational waves (10^12 Hz or more) in the present Universe. These black holes may lead to a transient period of matter-dominated expansion. In this case the primordial Universe could be temporarily dominated by large clusters of "Hawking stars" and the resulting gravitational wave spectrum is independent of the initial number density of primordial black holes.

  1. Portable parallel portfolio optimization in the Aurora Financial Management System

    NASA Astrophysics Data System (ADS)

    Laure, Erwin; Moritsch, Hans

    2001-07-01

    Financial planning problems are formulated as large-scale, stochastic, multiperiod, tree-structured optimization problems. An efficient technique for solving problems of this kind is the nested Benders decomposition method. In this paper we present a parallel, portable, asynchronous implementation of this technique. To achieve our portability goals we chose the programming language Java for our implementation and used a high-level Java-based framework, called OpusJava, for expressing the parallelism potential as well as synchronization constraints. Our implementation is embedded within a modular decision support tool for portfolio and asset-liability management, the Aurora Financial Management System.
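    In the two-stage scalar case, the nested Benders idea reduces to the classic L-shaped method, which is small enough to sketch (all numbers invented; the master problem is minimised by a grid scan rather than an LP solver, purely for illustration):

```python
# Toy L-shaped (Benders) decomposition for a two-stage newsvendor-style
# stochastic program (all numbers invented for illustration):
#   min_x  c*x + E[ q * max(d_s - x, 0) ],   0 <= x <= 120,
# over equally likely demand scenarios d_s.  The master problem keeps a
# piecewise-linear lower model of the recourse cost built from optimality
# cuts, refined each iteration at the current master solution.
def expected_recourse(x, demands, q):
    return sum(q * max(d - x, 0.0) for d in demands) / len(demands)

def recourse_subgradient(x, demands, q):
    return -q * sum(1 for d in demands if d > x) / len(demands)

def l_shaped(c=1.0, q=3.0, demands=(30.0, 50.0, 80.0, 100.0),
             x_max=120.0, iters=10, grid_n=2400):
    cuts, x = [], 0.0
    for _ in range(iters):
        value = expected_recourse(x, demands, q)
        slope = recourse_subgradient(x, demands, q)
        cuts.append((value - slope * x, slope))   # cut: theta >= a + g*x
        best_x, best_val = x, float("inf")
        for i in range(grid_n + 1):               # scan the master problem
            xc = x_max * i / grid_n
            val = c * xc + max(a + g * xc for a, g in cuts)
            if val < best_val:
                best_x, best_val = xc, val
        x = best_x
    return x

x_star = l_shaped()   # converges to the scenario breakpoint x = 80
```

    The nested, multiperiod version applies the same cut-passing recursively down a scenario tree, which is what the parallel implementation above distributes across processors.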

  2. Stochastic Order Redshift Technique (SORT): a simple, efficient and robust method to improve cosmological redshift measurements

    NASA Astrophysics Data System (ADS)

    Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.

    2018-01-01

    We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (zsort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳4 h-1Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed to exploit the full potential of large extragalactic photometric surveys.
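    The core of the algorithm is compact enough to sketch (a toy one-sub-volume version with synthetic data; the real method repeats this over many overlapping pencil-beam sub-volumes and corrects the reference by the selection function). The recovered redshifts are, by construction, a permutation of the reference sample that preserves the rank order of the noisy measurements, so the reference dN/dz is reproduced exactly:

```python
import random

# Toy SORT step for a single sub-volume (synthetic data, illustrative only).
def sort_recover(noisy_z, reference_z):
    order = sorted(range(len(noisy_z)), key=lambda i: noisy_z[i])
    ref_sorted = sorted(reference_z)
    recovered = [0.0] * len(noisy_z)
    for rank, i in enumerate(order):
        recovered[i] = ref_sorted[rank]  # keep rank order, impose reference dN/dz
    return recovered

rng = random.Random(5)
true_z = sorted(rng.uniform(0.0, 1.0) for _ in range(500))
noisy_z = [z + rng.gauss(0.0, 0.05) for z in true_z]   # uncertain measurements
recovered = sort_recover(noisy_z, true_z)  # here the true dN/dz is the reference
```

    The per-object scatter is not necessarily reduced, but the ensemble redshift distribution now matches the reference exactly; it is this distributional correction, repeated over overlapping sub-volumes, that restores cosmic-web features and unbiased clustering statistics.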

  3. Does internal climate variability overwhelm climate change signals in streamflow? The upper Po and Rhone basin case studies.

    PubMed

    Fatichi, S; Rimkus, S; Burlando, P; Bordoy, R

    2014-09-15

    Projections of climate change effects in streamflow are increasingly required to plan water management strategies. These projections are however largely uncertain due to the spread among climate model realizations, internal climate variability, and difficulties in transferring climate model results at the spatial and temporal scales required by catchment hydrology. A combination of a stochastic downscaling methodology and distributed hydrological modeling was used in the ACQWA project to provide projections of future streamflow (up to year 2050) for the upper Po and Rhone basins, respectively located in northern Italy and south-western Switzerland. Results suggest that internal (stochastic) climate variability is a fundamental source of uncertainty, typically comparable or larger than the projected climate change signal. Therefore, climate change effects in streamflow mean, frequency, and seasonality can be masked by natural climatic fluctuations in large parts of the analyzed regions. An exception to the overwhelming role of stochastic variability is represented by high elevation catchments fed by glaciers where streamflow is expected to be considerably reduced due to glacier retreat, with consequences appreciable in the main downstream rivers in August and September. Simulations also identify regions (west upper Rhone and Toce, Ticino river basins) where a strong precipitation increase in the February to April period projects streamflow beyond the range of natural climate variability during the melting season. This study emphasizes the importance of including internal climate variability in climate change analyses, especially when compared to the limited uncertainty that would be accounted for by few deterministic projections. The presented results could be useful in guiding more specific impact studies, although design or management decisions should be better based on reliability and vulnerability criteria as suggested by recent literature. 
Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Hybrid stochastic simplifications for multiscale gene networks.

    PubMed

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-09-07

    Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
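The hybrid idea described above can be sketched on a toy two-component model (a hypothetical gene-switch system for illustration, not the paper's examples): the slow gene ON/OFF state is kept as a discrete jump process, while the abundant protein is replaced by a chemical-Langevin variable, the continuous limit obtained from a partial Kramers-Moyal expansion.

```python
import math
import random

random.seed(1)

# Hypothetical hybrid model (illustration only, not the paper's system):
#   gene toggles ON/OFF at slow rates k_on, k_off -> kept discrete (jumps)
#   protein: production k_p (when ON), degradation g -> treated continuously
#   via a chemical-Langevin step (the partial Kramers-Moyal / CLT limit).
k_on, k_off, k_p, g = 0.05, 0.05, 50.0, 1.0
dt, T = 0.01, 200.0

def hybrid_path():
    on, x, t = 1, 0.0, 0.0
    while t < T:
        # discrete part: switch with probability rate*dt (Euler for the jump chain)
        rate = k_off if on else k_on
        if random.random() < rate * dt:
            on = 1 - on
        # continuous part: Euler-Maruyama step of the Langevin approximation
        drift = k_p * on - g * x
        diff = math.sqrt(max(k_p * on + g * x, 0.0) * dt)
        x = max(x + drift * dt + diff * random.gauss(0.0, 1.0), 0.0)
        t += dt
    return x

samples = [hybrid_path() for _ in range(50)]
mean_x = sum(samples) / len(samples)
print(round(mean_x, 1))  # fluctuates around k_p/(2g) = 25 for symmetric switching
```

Because the gene dwells in each state much longer than the protein relaxation time, individual endpoints sit near 0 or 50; only the ensemble mean approaches 25, which is the intermittency the abstract refers to.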

  5. Effect of slip-area scaling on the earthquake frequency-magnitude relationship

    NASA Astrophysics Data System (ADS)

    Senatorski, Piotr

    2017-06-01

    The earthquake frequency-magnitude relationship is considered in the maximum entropy principle (MEP) perspective. The MEP suggests sampling with constraints as a simple stochastic model of seismicity. The model is based on von Neumann's acceptance-rejection method, with the b-value as the parameter that breaks symmetry between small and large earthquakes. The Gutenberg-Richter law's b-value forms a link between earthquake statistics and physics. A dependence between the b-value and the rupture-area vs. slip scaling exponent is derived. The relationship enables us to explain observed ranges of b-values for different types of earthquakes. Specifically, the different b-value ranges for tectonic and induced (hydraulic fracturing) seismicity are explained in terms of their different triggering mechanisms: applied stress increase and fault strength reduction, respectively.
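As a minimal sketch of the sampling idea (with hypothetical parameters, not the paper's model), magnitudes following the Gutenberg-Richter density can be drawn by von Neumann acceptance-rejection, where the acceptance probability decays as 10^(-b(m-m0)) and thus encodes the small/large-event asymmetry controlled by the b-value:

```python
import math
import random

random.seed(0)

# Hypothetical illustration: sample magnitudes from the Gutenberg-Richter
# density f(m) ~ 10**(-b*(m - m0)) on [m0, m_max] by von Neumann
# acceptance-rejection: propose uniformly, accept with the density ratio,
# which penalizes large events (the b-controlled symmetry breaking).
b, m0, m_max = 1.0, 2.0, 8.0

def sample_magnitude():
    while True:
        m = random.uniform(m0, m_max)                   # uniform proposal
        if random.random() < 10.0 ** (-b * (m - m0)):   # accept small events more often
            return m

mags = [sample_magnitude() for _ in range(20000)]

# recover the b-value from the sample via the Aki maximum-likelihood estimator
mean_m = sum(mags) / len(mags)
b_hat = math.log10(math.e) / (mean_m - m0)
print(round(b_hat, 2))
```

Recovering b_hat close to the input b = 1.0 confirms that the accept/reject filter reproduces the intended frequency-magnitude statistics.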

  6. New Statistical Model for Variability of Aerosol Optical Thickness: Theory and Application to MODIS Data over Ocean

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail Dmitrievic; Geogdzhayev, Igor V.; Tsigaridis, Konstantinos; Marshak, Alexander; Levy, Robert; Cairns, Brian

    2016-01-01

    A novel model for the variability in aerosol optical thickness (AOT) is presented. This model is based on the consideration of AOT fields as realizations of a stochastic process, that is, the exponent of an underlying Gaussian process with a specific autocorrelation function. In this approach AOT fields have lognormal PDFs and structure functions with the correct asymptotic behavior at large scales. The latter is an advantage compared with fractal (scale-invariant) approaches. The simple analytical form of the structure function in the proposed model facilitates its use for the parameterization of AOT statistics derived from remote sensing data. The new approach is illustrated using a month-long global MODIS AOT dataset (over ocean) with 10 km resolution. It was used to compute AOT statistics for sample cells forming a grid with 5 degree spacing. The observed shapes of the structure functions indicated that in a large number of cases the AOT variability is split into two regimes that exhibit different patterns of behavior: small-scale stationary processes and trends reflecting variations at larger scales. The small-scale patterns are suggested to be generated by local aerosols within the marine boundary layer, while the large-scale trends are indicative of elevated aerosols transported from remote continental sources. This assumption is evaluated by comparison of the geographical distributions of these patterns derived from MODIS data with those obtained from the GISS GCM. This study shows considerable potential to enhance comparisons between remote sensing datasets and climate models beyond regional mean AOTs.
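A one-dimensional toy version of this construction (an assumed AR(1) kernel, not the paper's exact autocorrelation function) shows the two advertised properties: exponentiating a stationary Gaussian process gives lognormal marginals, and the structure function saturates at large lags instead of growing without bound as in scale-invariant models.

```python
import math
import random

random.seed(2)

# Toy 1-D model (assumed form): AOT(x) = exp(g(x)), where g is a stationary
# Gaussian AR(1) process. Marginals are lognormal, and the structure function
# D(r) = <(AOT(x+r) - AOT(x))^2> flattens out at large separations r.
n, rho, sigma = 200000, 0.99, 0.5
innov = sigma * math.sqrt(1.0 - rho * rho)
g = [random.gauss(0.0, sigma)]
for _ in range(n - 1):
    g.append(rho * g[-1] + random.gauss(0.0, innov))
tau = [math.exp(v) for v in g]

def structure_fn(lag):
    m = n - lag
    return sum((tau[i + lag] - tau[i]) ** 2 for i in range(m)) / m

d_small, d_large = structure_fn(1), structure_fn(5000)
# lognormal saturation level: D(inf) = 2*Var(tau) = 2*(e^{s^2}-1)*e^{s^2}
d_inf = 2.0 * (math.exp(sigma ** 2) - 1.0) * math.exp(sigma ** 2)
print(round(d_small, 4), round(d_large, 4))
```

The small-lag value is far below the large-lag one, and the large-lag estimate sits near the analytic plateau 2·Var(AOT), the "correct asymptotic behavior" the abstract contrasts with fractal models.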

  7. Comparing large-scale computational approaches to epidemic modeling: agent-based versus structured metapopulation models.

    PubMed

    Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro

    2010-06-29

    In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. 
The age breakdown analysis shows that similar attack rates are obtained for the younger age classes. The good agreement between the two modeling approaches is very important for defining the tradeoff between data availability and the information provided by the models. The results we present define the possibility of hybrid models combining the agent-based and the metapopulation approaches according to the available data and computational resources.

  8. Modelling stock order flows with non-homogeneous intensities from high-frequency data

    NASA Astrophysics Data System (ADS)

    Gorshenin, Andrey K.; Korolev, Victor Yu.; Zeifman, Alexander I.; Shorgin, Sergey Ya.; Chertok, Andrey V.; Evstafyev, Artem I.; Korchagin, Alexander Yu.

    2013-10-01

    A micro-scale model is proposed for the evolution of such an information system as the limit order book in financial markets. Within this model, the flows of orders (claims) are described by doubly stochastic Poisson processes taking account of the stochastic character of the intensities of buy and sell orders that determine the price discovery mechanism. The proposed multiplicative model of stochastic intensities makes it possible to analyze the characteristics of the order flows as well as the instantaneous proportion of the forces of buyers and sellers, that is, the imbalance process, without modelling the external information background. The proposed model gives the opportunity to link the micro-scale (high-frequency) dynamics of the limit order book with the macro-scale models of stock price processes of the form of subordinated Wiener processes by means of limit theorems of probability theory and hence, to use the normal variance-mean mixture models of the corresponding heavy-tailed distributions. The approach can be useful in different areas with similar properties (e.g., in plasma physics).
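A doubly stochastic (Cox) order flow can be sketched as follows (the multiplicative exp-Ornstein-Uhlenbeck intensity is an assumed form for illustration, not the authors' specification): counts per time bin are Poisson with a randomly evolving intensity, which makes the flow over-dispersed relative to a plain Poisson process and yields a fluctuating buy/sell imbalance.

```python
import math
import random

random.seed(3)

# Sketch of a Cox order flow: buy/sell intensities are lam0 * exp(Y_t), with
# Y_t an Ornstein-Uhlenbeck process (assumed multiplicative form). Counts per
# bin are Poisson with the current random intensity.
lam0, theta, sig, dt, nbins = 5.0, 0.5, 0.8, 0.1, 5000

def knuth_poisson(mean):
    # Knuth's product-of-uniforms Poisson sampler (fine for moderate means)
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def cox_counts():
    y, counts = 0.0, []
    for _ in range(nbins):
        y += -theta * y * dt + sig * math.sqrt(dt) * random.gauss(0.0, 1.0)
        counts.append(knuth_poisson(lam0 * math.exp(y) * dt))
    return counts

buys, sells = cox_counts(), cox_counts()
imbalance = sum(buys) - sum(sells)   # net buyer/seller pressure
total = sum(buys) + sum(sells)

# Over-dispersion check: a Cox process has Var > mean, unlike plain Poisson
mean_c = total / (2 * nbins)
var_c = sum((c - mean_c) ** 2 for c in buys + sells) / (2 * nbins)
print(var_c > mean_c)
```

The variance-exceeds-mean signature is the bin-level counterpart of the heavy-tailed variance-mean mixtures that the abstract links to at the macro scale.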

  9. Climate and weather across scales: singularities and stochastic Levy-Clifford algebra

    NASA Astrophysics Data System (ADS)

    Schertzer, Daniel; Tchiguirinskaia, Ioulia

    2016-04-01

    There have been several attempts to understand and simulate the fluctuations of weather and climate across scales. Beyond mono/uni-scaling approaches (e.g. using spectral analysis), this was done with the help of multifractal techniques that aim to track and simulate the scaling singularities of the underlying equations instead of relying on numerical, scale-truncated simulations of these equations (Royer et al., 2008, Lovejoy and Schertzer, 2013). However, these techniques were limited to dealing with scalar fields, rather than dealing directly with a system of complex interactions and non-trivial symmetries. The latter is unfortunately indispensable for answering the challenging question of assessing the climatology of (exo-)planets from first principles (Pierrehumbert, 2013), or for fully addressing the question of the relevance of quasi-geostrophic turbulence and defining an effective, fractal dimension of the atmospheric motions (Schertzer et al., 2012). In this talk, we present a plausible candidate based on the combination of Lévy stable processes and Clifford algebra. Together they combine stochastic and structural properties that are strongly universal. They therefore define, with the help of a few physically meaningful parameters, a wide class of stochastic symmetries, as well as high-dimensional vector- or manifold-valued fields respecting these symmetries (Schertzer and Tchiguirinskaia, 2015). Lovejoy, S. & Schertzer, D., 2013. The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge, U.K.: Cambridge University Press. Pierrehumbert, R.T., 2013. Strange news from other stars. Nature Geoscience, 6(2), pp.81-83. Royer, J.F. et al., 2008. Multifractal analysis of the evolution of simulated precipitation over France in a climate scenario. C.R. Geoscience, 340(431-440). Schertzer, D. et al., 2012. Quasi-geostrophic turbulence and generalized scale invariance, a theoretical reply. Atmos. Chem. Phys., 12, pp.327-336. Schertzer, D. 
& Tchiguirinskaia, I., 2015. Multifractal vector fields and stochastic Clifford algebra. Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(12), p.123127.

  10. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro-macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.
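The gPC-SG method above is intrusive; as a hedged, non-intrusive stand-in for the same polynomial-chaos idea, a stochastic collocation sketch on a scalar toy problem (du/dt = -k·u with Gaussian k, an assumed example unrelated to radiative transfer) shows how a few Gauss-Hermite nodes capture smooth dependence on a random input:

```python
import math

# Non-intrusive cousin of gPC (stochastic collocation, assumed stand-in for
# the paper's intrusive Galerkin scheme): propagate k = k0 + delta*xi,
# xi ~ N(0,1), through du/dt = -k*u, u(0) = u0, using a 3-point
# Gauss-Hermite rule, and compare with the exact lognormal mean.
k0, delta, u0, t = 1.0, 0.2, 1.0, 1.0
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]   # probabilists' Gauss-Hermite
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

u_mean_quad = sum(w * u0 * math.exp(-(k0 + delta * x) * t)
                  for x, w in zip(nodes, weights))
u_mean_exact = u0 * math.exp(-k0 * t + 0.5 * (delta * t) ** 2)
rel_err = abs(u_mean_quad - u_mean_exact) / u_mean_exact
print(rel_err < 1e-4)  # three nodes already resolve this smooth dependence
```

The spectral accuracy in the random variable (here, near machine-level error from three nodes) is the same phenomenon the abstract proves for the full gPC-SG scheme on the linearized problem.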

  11. Multiscale stochastic simulations of chemical reactions with regulated scale separation

    NASA Astrophysics Data System (ADS)

    Koumoutsakos, Petros; Feigelman, Justin

    2013-07-01

    We present a coupling of multiscale frameworks with accelerated stochastic simulation algorithms for systems of chemical reactions with disparate propensities. The algorithms regulate the propensities of the fast and slow reactions of the system, using alternating micro and macro sub-steps simulated with accelerated algorithms such as τ and R-leaping. The proposed algorithms are shown to provide significant speedups in simulations of stiff systems of chemical reactions with a trade-off in accuracy as controlled by a regulating parameter. More importantly, the error of the methods exhibits a cutoff phenomenon that allows for optimal parameter choices. Numerical experiments demonstrate that hybrid algorithms involving accelerated stochastic simulations can be, in certain cases, more accurate while faster, than their corresponding stochastic simulation algorithm counterparts.
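A minimal tau-leaping sketch (a pure decay reaction with made-up rates; the paper's R-leaping and propensity regulation are not reproduced here) illustrates the accelerated-simulation building block: each leap fires a Poisson number of reaction events at once instead of simulating every jump individually.

```python
import math
import random

random.seed(4)

# Tau-leaping on X -> 0 with rate c: per leap of length tau, fire
# Poisson(c * X * tau) decay events in one step (hypothetical parameters).
c, tau, T, x0 = 1.0, 0.05, 2.0, 500

def knuth_poisson(mean):
    # Knuth's product-of-uniforms Poisson sampler
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def tau_leap_path():
    x, t = x0, 0.0
    while t < T:
        x = max(x - knuth_poisson(c * x * tau), 0)
        t += tau
    return x

finals = [tau_leap_path() for _ in range(200)]
mean_final = sum(finals) / len(finals)
print(round(mean_final, 1), round(x0 * math.exp(-c * T), 1))
```

The ensemble mean tracks the exact decay x0·exp(-cT) up to an O(tau) leaping bias, the accuracy/speed trade-off that the abstract's regulating parameter controls in the general setting.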

  12. Estimation of stochastic volatility by using Ornstein-Uhlenbeck type models

    NASA Astrophysics Data System (ADS)

    Mariani, Maria C.; Bhuiyan, Md Al Masum; Tweneboah, Osei K.

    2018-02-01

    In this study, we develop a technique for estimating the stochastic volatility (SV) of a financial time series by using Ornstein-Uhlenbeck type models. Using the daily closing prices from developed and emergent stock markets, we conclude that the incorporation of stochastic volatility into the time varying parameter estimation significantly improves the forecasting performance via Maximum Likelihood Estimation. Furthermore, our estimation algorithm is feasible with large data sets and has good convergence properties.
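A generic Ornstein-Uhlenbeck calibration recipe (a standard conditional-MLE sketch on simulated data, not the authors' exact algorithm) exploits the fact that the exact discretization of dX = -theta(X - mu)dt + sigma dW is an AR(1), so the drift parameters follow from least squares:

```python
import math
import random

random.seed(5)

# Simulate an OU process exactly via its AR(1) discretization, then recover
# theta and mu by least squares on x[t+1] = a + b*x[t] + eps.
theta, mu, sigma, dt, n = 2.0, 0.5, 0.3, 0.01, 100000
phi = math.exp(-theta * dt)
sd_eps = sigma * math.sqrt((1.0 - phi * phi) / (2.0 * theta))

x = [mu]
for _ in range(n - 1):
    x.append(mu + phi * (x[-1] - mu) + sd_eps * random.gauss(0.0, 1.0))

m = n - 1
sx, sy = sum(x[:-1]), sum(x[1:])
sxx = sum(v * v for v in x[:-1])
sxy = sum(x[i] * x[i + 1] for i in range(m))
b = (m * sxy - sx * sy) / (m * sxx - sx * sx)   # estimate of exp(-theta*dt)
a = (sy - b * sx) / m
theta_hat = -math.log(b) / dt
mu_hat = a / (1.0 - b)
print(round(theta_hat, 1), round(mu_hat, 2))
```

With 100,000 observations the estimates land close to the true (theta, mu) = (2.0, 0.5), which is the sense in which such schemes remain feasible and well-behaved on large data sets.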

  13. Stochastic genome-nuclear lamina interactions: modulating roles of Lamin A and BAF.

    PubMed

    Kind, Jop; van Steensel, Bas

    2014-01-01

    The nuclear lamina (NL) is thought to aid in the spatial organization of interphase chromosomes by providing an anchoring platform for hundreds of large genomic regions named lamina associated domains (LADs). Recently, a new live-cell imaging approach demonstrated directly that LAD-NL interactions are dynamic and in part stochastic. Here we discuss implications of these new findings and introduce Lamin A and BAF as potential modulators of stochastic LAD positioning.

  14. Memory effects on stochastic resonance

    NASA Astrophysics Data System (ADS)

    Neiman, Alexander; Sung, Wokyung

    1996-02-01

    We study the phenomenon of stochastic resonance (SR) in a bistable system with internal colored noise. In this situation the system possesses time-dependent memory friction connected with noise via the fluctuation-dissipation theorem, so that in the absence of periodic driving the system approaches the thermodynamic equilibrium state. For this non-Markovian case we find that memory usually suppresses stochastic resonance. However, for a large memory time SR can be enhanced by the memory.

  15. On Nash Equilibria in Stochastic Games

    DTIC Science & Technology

    2003-10-01

    Traditionally, automata theory and verification have considered zero-sum or strictly competitive versions of stochastic games. In these games there are two players... zero-sum discrete-time stochastic dynamic games. SIAM J. Control and Optimization, 19(5):617-634, 1981. 18. R.J. Lipton, E. Markakis, and A. Mehta... Playing large games using simple strategies. In EC 03: Electronic Commerce, pages 36-41. ACM Press, 2003. 19. A. Maitra and W. Sudderth. Finitely

  16. Stochastic locality and master-field simulations of very large lattices

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2018-03-01

    In lattice QCD and other field theories with a mass gap, the field variables in distant regions of a physically large lattice are only weakly correlated. Accurate stochastic estimates of the expectation values of local observables may therefore be obtained from a single representative field. Such master-field simulations potentially allow very large lattices to be simulated, but require various conceptual and technical issues to be addressed. In this talk, an introduction to the subject is provided and some encouraging results of master-field simulations of the SU(3) gauge theory are reported.

  17. Broken detailed balance and non-equilibrium dynamics in living systems: a review

    NASA Astrophysics Data System (ADS)

    Gnesotto, F. S.; Mura, F.; Gladrow, J.; Broedersz, C. P.

    2018-06-01

    Living systems operate far from thermodynamic equilibrium. Enzymatic activity can induce broken detailed balance at the molecular scale. This molecular scale breaking of detailed balance is crucial to achieve biological functions such as high-fidelity transcription and translation, sensing, adaptation, biochemical patterning, and force generation. While biological systems such as motor enzymes violate detailed balance at the molecular scale, it remains unclear how non-equilibrium dynamics manifests at the mesoscale in systems that are driven through the collective activity of many motors. Indeed, in several cellular systems the presence of non-equilibrium dynamics is not always evident at large scales. For example, in the cytoskeleton or in chromosomes one can observe stationary stochastic processes that appear at first glance thermally driven. This raises the question how non-equilibrium fluctuations can be discerned from thermal noise. We discuss approaches that have recently been developed to address this question, including methods based on measuring the extent to which the system violates the fluctuation-dissipation theorem. We also review applications of this approach to reconstituted cytoskeletal networks, the cytoplasm of living cells, and cell membranes. Furthermore, we discuss a more recent approach to detect actively driven dynamics, which is based on inferring broken detailed balance. This constitutes a non-invasive method that uses time-lapse microscopy data, and can be applied to a broad range of systems in cells and tissue. We discuss the ideas underlying this method and its application to several examples including flagella, primary cilia, and cytoskeletal networks. Finally, we briefly discuss recent developments in stochastic thermodynamics and non-equilibrium statistical mechanics, which offer new perspectives to understand the physics of living systems.
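The broken-detailed-balance inference described above can be illustrated on a toy three-state Markov model (hand-picked rates, purely for demonstration): a chain driven around a cycle shows a net transition-count asymmetry, i.e. a probability current, while an equilibrium chain does not.

```python
import random

random.seed(6)

# Toy inference of broken detailed balance: count forward vs backward
# transitions around a 3-state cycle. A driven chain (p_fwd != p_bwd) carries
# a net current; an equilibrium chain shows only statistical fluctuations.
STEPS = 200000

def simulate(p_fwd, p_bwd, steps=STEPS):
    s, counts = 0, {}
    for _ in range(steps):
        r = random.random()
        if r < p_fwd:
            nxt = (s + 1) % 3          # step forward around the cycle
        elif r < p_fwd + p_bwd:
            nxt = (s - 1) % 3          # step backward
        else:
            nxt = s                    # stay
        counts[(s, nxt)] = counts.get((s, nxt), 0) + 1
        s = nxt
    return counts

def cycle_current(counts, steps=STEPS):
    fwd = sum(counts.get((i, (i + 1) % 3), 0) for i in range(3))
    bwd = sum(counts.get((i, (i - 1) % 3), 0) for i in range(3))
    return (fwd - bwd) / steps

driven = cycle_current(simulate(0.3, 0.1))       # biased around the cycle
equilibrium = cycle_current(simulate(0.2, 0.2))  # detailed balance holds
print(round(driven, 2), round(equilibrium, 3))
```

This transition-counting idea is the discrete-state core of the non-invasive, time-lapse-based method the review discusses; real applications must of course handle continuous observables and estimation error.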

  18. Broken detailed balance and non-equilibrium dynamics in living systems: a review.

    PubMed

    Gnesotto, F S; Mura, F; Gladrow, J; Broedersz, C P

    2018-06-01

    Living systems operate far from thermodynamic equilibrium. Enzymatic activity can induce broken detailed balance at the molecular scale. This molecular scale breaking of detailed balance is crucial to achieve biological functions such as high-fidelity transcription and translation, sensing, adaptation, biochemical patterning, and force generation. While biological systems such as motor enzymes violate detailed balance at the molecular scale, it remains unclear how non-equilibrium dynamics manifests at the mesoscale in systems that are driven through the collective activity of many motors. Indeed, in several cellular systems the presence of non-equilibrium dynamics is not always evident at large scales. For example, in the cytoskeleton or in chromosomes one can observe stationary stochastic processes that appear at first glance thermally driven. This raises the question how non-equilibrium fluctuations can be discerned from thermal noise. We discuss approaches that have recently been developed to address this question, including methods based on measuring the extent to which the system violates the fluctuation-dissipation theorem. We also review applications of this approach to reconstituted cytoskeletal networks, the cytoplasm of living cells, and cell membranes. Furthermore, we discuss a more recent approach to detect actively driven dynamics, which is based on inferring broken detailed balance. This constitutes a non-invasive method that uses time-lapse microscopy data, and can be applied to a broad range of systems in cells and tissue. We discuss the ideas underlying this method and its application to several examples including flagella, primary cilia, and cytoskeletal networks. Finally, we briefly discuss recent developments in stochastic thermodynamics and non-equilibrium statistical mechanics, which offer new perspectives to understand the physics of living systems.

  19. A hybrid meta-heuristic algorithm for the vehicle routing problem with stochastic travel times considering the driver's satisfaction

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges

    2012-05-01

    A vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. The main objectives of the vehicle routing problem are to minimize the traveled distance, total traveling time, number of vehicles and cost function of transportation. Reducing these variables leads to decreasing the total cost and increasing the driver's satisfaction level. On the other hand, this satisfaction, which decreases as the service time increases, is considered an important logistic problem for a company. The stochastic travel time, governed by a probability distribution, leads to variation of the service time, yet it is ignored in classical routing problems. This paper investigates the problem of the increasing service time by using the stochastic time for each tour, such that the total traveling time of the vehicles is limited to a specific limit based on a defined probability. Since exact solutions of the vehicle routing problem, which belongs to the category of NP-hard problems, are not practical on a large scale, a hybrid algorithm based on simulated annealing with genetic operators was proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the related results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
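A skeleton of the hybrid idea (a hedged illustration, not the authors' algorithm: single vehicle, deterministic distances, random instance) is simulated annealing whose neighborhood moves are genetic-style mutation operators, here segment inversion and swap applied to a route:

```python
import math
import random

random.seed(7)

# Simulated annealing with genetic-operator moves (inversion / swap) on a
# single-vehicle route over 30 random customer locations (toy instance).
pts = [(random.random(), random.random()) for _ in range(30)]

def length(tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def mutate(tour):
    child = tour[:]
    i, j = sorted(random.sample(range(len(tour)), 2))
    if random.random() < 0.5:
        child[i:j + 1] = reversed(child[i:j + 1])   # inversion operator
    else:
        child[i], child[j] = child[j], child[i]     # swap operator
    return child

tour, temp = list(range(30)), 1.0
cur = best = length(tour)
for _ in range(20000):
    cand = mutate(tour)
    d = length(cand) - cur
    if d < 0 or random.random() < math.exp(-d / temp):  # Metropolis acceptance
        tour, cur = cand, cur + d
        best = min(best, cur)
    temp *= 0.9997   # geometric cooling schedule
print(round(best, 2))
```

Extending this skeleton to the paper's setting would mean evaluating each candidate route against a chance constraint on total travel time (e.g., by sampling the stochastic travel times), rather than against a fixed distance.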

  20. A MULTISCALE FRAMEWORK FOR THE STOCHASTIC ASSIMILATION AND MODELING OF UNCERTAINTY ASSOCIATED NCF COMPOSITE MATERIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehrez, Loujaine; Ghanem, Roger; McAuliffe, Colin

    A multiscale framework to construct stochastic macroscopic constitutive material models is proposed. A spectral projection approach, specifically polynomial chaos expansion, has been used to construct explicit functional relationships between the homogenized properties and input parameters from finer scales. A homogenization engine embedded in Multiscale Designer, software for composite materials, has been used for the upscaling process. The framework is demonstrated using non-crimp fabric composite materials by constructing probabilistic models of the homogenized properties of a non-crimp fabric laminate in terms of the input parameters together with the homogenized properties from finer scales.

  1. Statistical theory of dynamo

    NASA Astrophysics Data System (ADS)

    Kim, E.; Newton, A. P.

    2012-04-01

    One major problem in dynamo theory is the multi-scale nature of the MHD turbulence, which requires statistical theory in terms of probability distribution functions. In this contribution, we present the statistical theory of magnetic fields in a simplified mean field α-Ω dynamo model by varying the statistical property of alpha, including marginal stability and intermittency, and then utilize observational data of solar activity to fine-tune the mean field dynamo model. Specifically, we first present a comprehensive investigation into the effect of the stochastic parameters in a simplified α-Ω dynamo model. Through considering the manifold of marginal stability (the region of parameter space where the mean growth rate is zero), we show that stochastic fluctuations are conducive to dynamo action. Furthermore, by considering the cases of fluctuating alpha that are periodic and Gaussian coloured random noise with identical characteristic time-scales and fluctuating amplitudes, we show that the transition to dynamo is significantly facilitated for stochastic alpha with random noise. Furthermore, we show that probability density functions (PDFs) of the growth-rate, magnetic field and magnetic energy can provide a wealth of useful information regarding the dynamo behaviour/intermittency. Finally, the precise statistical properties of the dynamo, such as temporal correlation and fluctuating amplitude, are found to be dependent on the distribution of the fluctuations of the stochastic parameters. We then use observations of solar activity to constrain parameters relating to the α effect in stochastic α-Ω nonlinear dynamo models. This is achieved through performing a comprehensive statistical comparison by computing PDFs of solar activity from observations and from our simulation of the mean field dynamo model. 
The observational data that are used are the time history of solar activity inferred from C14 data over the past 11000 years on a long time scale, and direct observations of the sunspot numbers during the years 1795-1995 on a short time scale. Monte Carlo simulations are performed on these data to obtain PDFs of the solar activity on both long and short time scales. These PDFs are then compared with predicted PDFs from numerical simulation of our α-Ω dynamo model, where α is assumed to have both a mean part α0 and a fluctuating part α'. By varying the correlation time of fluctuating α', the ratio of the amplitude of the fluctuating to mean alpha <α'2>/α02 (where angular brackets <> denote ensemble average), and the ratio of poloidal to toroidal magnetic fields, we show that the results from our stochastic dynamo model can match the PDFs of solar activity on both long and short time scales. In particular, a good agreement is obtained when the fluctuation in alpha is roughly equal to the mean part, with a correlation time shorter than the solar period.

  2. Tests of oceanic stochastic parameterisation in a seasonal forecast system.

    NASA Astrophysics Data System (ADS)

    Cooper, Fenwick; Andrejczuk, Miroslaw; Juricke, Stephan; Zanna, Laure; Palmer, Tim

    2015-04-01

    Over seasonal time scales, our aim is to compare the relative impact of ocean initial condition and model uncertainty upon the ocean forecast skill and reliability. Over seasonal timescales we compare four oceanic stochastic parameterisation schemes applied in a 1x1 degree ocean model (NEMO) with a fully coupled T159 atmosphere (ECMWF IFS). The relative impacts upon the ocean of the resulting eddy induced activity, wind forcing and typical initial condition perturbations are quantified. Following the historical success of stochastic parameterisation in the atmosphere, two of the parameterisations tested were multiplicative in nature: a stochastic variation of the Gent-McWilliams scheme and a stochastic diffusion scheme. We also consider a surface flux parameterisation (similar to that introduced by Williams, 2012), and stochastic perturbation of the equation of state (similar to that introduced by Brankart, 2013). The amplitude of the stochastic term in the Williams (2012) scheme was set to the physically reasonable amplitude considered in that paper. The amplitude of the stochastic term in each of the other schemes was increased to the limits of model stability. As expected, variability was increased. Up to 1 month after initialisation, ensemble spread induced by stochastic parameterisation is greater than that induced by the atmosphere, whilst being smaller than the initial condition perturbations currently used at ECMWF. After 1 month, the wind forcing becomes the dominant source of model ocean variability, even at depth.

  3. An efficient computational method for solving nonlinear stochastic Itô integral equations: Application for stochastic problems in physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms the problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of stochastic population growth models and the stochastic pendulum problem.
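For the population-growth application mentioned above, a standard Euler-Maruyama baseline (not the paper's hat-basis operational-matrix method; parameters are hypothetical) gives a reference point that any such scheme can be checked against, since the model dX = r·X dt + s·X dW has the exact mean E[X_t] = x0·exp(r·t):

```python
import math
import random

random.seed(8)

# Euler-Maruyama for the stochastic population growth model
#   dX = r*X dt + s*X dW,   exact mean: E[X_t] = x0*exp(r*t)
r, s, x0, T, nsteps, npaths = 0.5, 0.2, 1.0, 1.0, 200, 4000
dt = T / nsteps

def path():
    x = x0
    for _ in range(nsteps):
        x += r * x * dt + s * x * math.sqrt(dt) * random.gauss(0.0, 1.0)
    return x

mean_xT = sum(path() for _ in range(npaths)) / npaths
exact = x0 * math.exp(r * T)
print(round(mean_xT, 2), round(exact, 2))
```

The Monte Carlo mean matches the analytic value to within sampling and discretization error; an operational-matrix solution of the same equation should reproduce this benchmark with far fewer unknowns.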

  4. Stochastic volatility of the futures prices of emission allowances: A Bayesian approach

    NASA Astrophysics Data System (ADS)

    Kim, Jungmu; Park, Yuen Jung; Ryu, Doojin

    2017-01-01

    Understanding the stochastic nature of the spot volatility of emission allowances is crucial for risk management in emissions markets. In this study, by adopting a stochastic volatility model with or without jumps to represent the dynamics of European Union Allowances (EUA) futures prices, we estimate the daily volatilities and model parameters by using the Markov Chain Monte Carlo method for stochastic volatility (SV), stochastic volatility with return jumps (SVJ) and stochastic volatility with correlated jumps (SVCJ) models. Our empirical results reveal three important features of emissions markets. First, the data presented herein suggest that EUA futures prices exhibit significant stochastic volatility. Second, the leverage effect is noticeable regardless of whether or not jumps are included. Third, the inclusion of jumps has a significant impact on the estimation of the volatility dynamics. Finally, the market becomes very volatile and large jumps occur at the beginning of a new phase. These findings are important for policy makers and regulators.
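A forward simulation of a stylized SV-with-jumps model (all parameters hypothetical; the paper *estimates* such models from EUA futures by MCMC, which is not attempted here) shows the qualitative effect the abstract reports: return jumps fatten the tails beyond what stochastic volatility alone produces.

```python
import math
import random

random.seed(9)

# Stylized SV model with optional return jumps: log-variance h_t mean-reverts,
# returns are sqrt(exp(h_t)*dt)*N(0,1) plus an occasional Gaussian jump.
kappa, hbar, sig_v = 5.0, math.log(0.01), 1.0   # log-variance dynamics
p_jump, jump_sd, dt, n = 0.02, 0.05, 1.0 / 252.0, 100000

def simulate(jumps):
    h, rets = hbar, []
    for _ in range(n):
        h += kappa * (hbar - h) * dt + sig_v * math.sqrt(dt) * random.gauss(0, 1)
        ret = math.sqrt(math.exp(h) * dt) * random.gauss(0, 1)
        if jumps and random.random() < p_jump:
            ret += random.gauss(0.0, jump_sd)   # rare large return jump
        rets.append(ret)
    return rets

def kurtosis(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** 4 for x in xs) / len(xs) / (v * v)

k_jump, k_nojump = kurtosis(simulate(True)), kurtosis(simulate(False))
print(k_jump > k_nojump > 3.0)
```

Both simulated series are leptokurtic (kurtosis above the Gaussian value of 3), but the jump component raises it dramatically, which is why including jumps has such a visible impact on estimated volatility dynamics.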

  5. Conditional flood frequency and catchment state: a simulation approach

    NASA Astrophysics Data System (ADS)

    Brettschneider, Marco; Bourgin, François; Merz, Bruno; Andreassian, Vazken; Blaquiere, Simon

    2017-04-01

    Catchments have memory and the conditional flood frequency distribution for a time period ahead can be seen as non-stationary: it varies with the catchment state and climatic factors. From a risk management perspective, understanding the link of conditional flood frequency to catchment state is key to anticipating potential periods of higher flood risk. Here, we adopt a simulation approach to explore the link between flood frequency obtained by continuous rainfall-runoff simulation and the initial state of the catchment. The simulation chain is based on (i) a three-state rainfall generator applied at the catchment scale, whose parameters are estimated for each month, and (ii) the GR4J lumped rainfall-runoff model, whose parameters are calibrated with all available data. For each month, a large number of stochastic realizations of the continuous rainfall generator for the next 12 months are used as inputs for the GR4J model in order to obtain a correspondingly large number of stochastic streamflow realizations for the next 12 months. This process is then repeated for 50 different initial states of the soil moisture reservoir of the GR4J model and for all the catchments. Thus, 50 different conditional flood frequency curves are obtained for the 50 different initial catchment states. We will present an analysis of the link between the catchment states, the period of the year and the strength of the conditioning of the flood frequency compared to the unconditional flood frequency. A large sample of diverse catchments in France will be used.
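The logic of the simulation chain (stochastic rainfall generator feeding a storage-based runoff model, conditioned on initial storage) can be illustrated with a deliberately toy setup. Everything below is hypothetical: a three-state Markov rainfall chain and a single leaky bucket stand in for the calibrated generator and the GR4J model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state daily rainfall Markov chain: dry / wet / very wet
P = np.array([[0.80, 0.18, 0.02],
              [0.50, 0.45, 0.05],
              [0.30, 0.50, 0.20]])
MEAN_RAIN = np.array([0.0, 5.0, 25.0])   # mm/day by state (illustrative)

def simulate_season(s0, n_days=90):
    """Toy bucket model: leaky storage spills runoff above capacity;
    returns the peak daily runoff, conditioned on initial storage s0."""
    cap, state, s, peak = 100.0, 0, s0, 0.0
    for _ in range(n_days):
        state = rng.choice(3, p=P[state])
        rain = rng.exponential(MEAN_RAIN[state]) if state else 0.0
        s = 0.98 * s + rain              # storage with slow drainage
        runoff = max(s - cap, 0.0)       # spill = flood-producing flow
        s -= runoff
        peak = max(peak, runoff)
    return peak

# Conditional peak-flow samples for a dry vs. a wet initial catchment state
dry = [simulate_season(10.0) for _ in range(200)]
wet = [simulate_season(90.0) for _ in range(200)]
```

Fitting a frequency distribution to each conditional sample of peaks, and repeating across initial states and calendar months, yields the family of conditional flood frequency curves described above.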

  6. Stochastic theory of large-scale enzyme-reaction networks: Finite copy number corrections to rate equation models

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Straube, Arthur V.; Grima, Ramon

    2010-11-01

    Chemical reactions inside cells occur in compartment volumes in the range of atto- to femtoliters. Physiological concentrations realized in such small volumes imply low copy numbers of interacting molecules with the consequence of considerable fluctuations in the concentrations. In contrast, rate equation models are based on the implicit assumption of infinitely large numbers of interacting molecules, or equivalently, that reactions occur in infinite volumes at constant macroscopic concentrations. In this article we compute the finite-volume corrections (or equivalently the finite copy number corrections) to the solutions of the rate equations for chemical reaction networks composed of arbitrarily large numbers of enzyme-catalyzed reactions which are confined inside a small subcellular compartment. This is achieved by applying a mesoscopic version of the quasisteady-state assumption to the exact Fokker-Planck equation associated with the Poisson representation of the chemical master equation. The procedure yields impressively simple and compact expressions for the finite-volume corrections. We prove that the predictions of the rate equations will always underestimate the actual steady-state substrate concentrations for an enzyme-reaction network confined in a small volume. In particular we show that the finite-volume corrections increase with decreasing subcellular volume, decreasing Michaelis-Menten constants, and increasing enzyme saturation. The magnitude of the corrections depends sensitively on the topology of the network. The predictions of the theory are shown to be in excellent agreement with stochastic simulations for two types of networks typically associated with protein methylation and metabolism.
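Finite-copy-number predictions like these are typically checked against exact stochastic simulation, as the abstract notes. A minimal Gillespie (SSA) sketch of a single enzyme-catalyzed reaction E + S <-> C -> E + P, with hypothetical rate constants and copy numbers, shows the kind of simulation involved:

```python
import numpy as np

def gillespie_mm(e0, s0, k1, k2, k3, t_end, rng):
    """Gillespie SSA for E + S <-> C -> E + P (binding k1, unbinding k2,
    catalysis k3). Returns the final (S, C, P) copy numbers."""
    e, s, c, p, t = e0, s0, 0, 0, 0.0
    while t < t_end:
        a = np.array([k1 * e * s, k2 * c, k3 * c])   # propensities
        a0 = a.sum()
        if a0 == 0.0:
            break                                    # no reaction possible
        t += rng.exponential(1.0 / a0)               # time to next event
        r = rng.choice(3, p=a / a0)                  # which reaction fires
        if r == 0:
            e, s, c = e - 1, s - 1, c + 1
        elif r == 1:
            e, s, c = e + 1, s + 1, c - 1
        else:
            e, c, p = e + 1, c - 1, p + 1
    return s, c, p

rng = np.random.default_rng(2)
finals = [gillespie_mm(5, 50, 0.01, 0.1, 0.1, 500.0, rng) for _ in range(50)]
mean_s = np.mean([f[0] for f in finals])
```

The paper's contribution is to obtain the volume-dependent corrections analytically, via the Poisson-representation Fokker-Planck equation, rather than by averaging many such SSA runs.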

  7. A simplified model to evaluate the effect of fluid rheology on non-Newtonian flow in variable aperture fractures

    NASA Astrophysics Data System (ADS)

    Felisa, Giada; Ciriello, Valentina; Longo, Sandro; Di Federico, Vittorio

    2017-04-01

    Modeling of non-Newtonian flow in fractured media is essential in hydraulic fracturing operations, largely used for optimal exploitation of oil, gas and thermal reservoirs. Complex fluids interact with pre-existing rock fractures also during drilling operations, enhanced oil recovery, environmental remediation, and other natural phenomena such as magma and sand intrusions, and mud volcanoes. A first step in the modeling effort is a detailed understanding of flow in a single fracture, as the fracture aperture is typically spatially variable. A large bibliography exists on Newtonian flow in single, variable aperture fractures. Ultimately, stochastic modeling of aperture variability at the single fracture scale leads to determination of the flowrate under a given pressure gradient as a function of the parameters describing the variability of the aperture field and the fluid rheological behaviour. From the flowrate, a flow, or 'hydraulic', aperture can then be derived. The equivalent flow aperture for non-Newtonian fluids of power-law nature in single, variable aperture fractures has been obtained in the past both for deterministic and stochastic variations. Detailed numerical modeling of power-law fluid flow in a variable aperture fracture demonstrated that pronounced channelization effects are associated with a nonlinear fluid rheology. The availability of an equivalent flow aperture as a function of the parameters describing the fluid rheology and the aperture variability is enticing, as it allows taking their interaction into account when modeling flow in fracture networks at a larger scale. A relevant issue in non-Newtonian fracture flow is the rheological nature of the fluid. The constitutive model routinely used for hydro-fracturing modeling is the simple, two-parameter power-law.
Yet this model does not characterize real fluids at low and high shear rates, as it implies, for shear-thinning fluids, an apparent viscosity which becomes unbounded for zero shear rate and tends to zero for infinite shear rate. On the contrary, the four-parameter Carreau constitutive equation includes asymptotic values of the apparent viscosity at those limits; in turn, the Carreau rheological equation is well approximated by the more tractable truncated power-law model. Results for flow of such fluids between parallel walls are already available. This study extends the adoption of the truncated power-law model to variable aperture fractures, with the aim of understanding the joint influence of rheology and aperture spatial variability. The aperture variation, modeled within a stochastic or deterministic framework, is taken to be one-dimensional and perpendicular to the flow direction; for stochastic modeling, the influence of different distribution functions is examined. Results are then compared with those obtained for pure power-law fluids for different combinations of model parameters. It is seen that the adoption of the pure power law model leads to significant overestimation of the flowrate with respect to the truncated model, more so for large external pressure gradient and/or aperture variability.
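The three constitutive models contrasted above differ only in their limiting behaviour, which a short numerical sketch makes concrete. All parameter values here are illustrative, chosen to expose the low- and high-shear limits:

```python
import numpy as np

def carreau(g, eta0, eta_inf, lam, n):
    """Carreau apparent viscosity: bounded at zero and infinite shear rate."""
    return eta_inf + (eta0 - eta_inf) * (1 + (lam * g) ** 2) ** ((n - 1) / 2)

def truncated_power_law(g, eta0, eta_inf, m, n):
    """Power law eta = m * g**(n-1), capped at eta0 and floored at eta_inf."""
    return np.clip(m * g ** (n - 1), eta_inf, eta0)

g = np.logspace(-4, 8, 13)             # shear rates spanning both limits
n, eta0, eta_inf = 0.5, 10.0, 1e-3     # illustrative shear-thinning fluid
pure = 1.0 * g ** (n - 1)              # unbounded as g -> 0, vanishes as g -> inf
trunc = truncated_power_law(g, eta0, eta_inf, 1.0, n)
car = carreau(g, eta0, eta_inf, 1.0, n)
```

Because the pure power law keeps an unrealistically low viscosity at high shear rates, it overpredicts flow in the wide-aperture channels that carry most of the discharge, which is consistent with the flowrate overestimation reported above.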

  8. A low-order model for long-range infrasound propagation in random atmospheric waveguides

    NASA Astrophysics Data System (ADS)

    Millet, C.; Lott, F.

    2014-12-01

    In numerical modeling of long-range infrasound propagation in the atmosphere, the wind and temperature profiles are usually obtained as a result of matching atmospheric models to empirical data. The atmospheric models are classically obtained from operational numerical weather prediction centers (NOAA Global Forecast System or ECMWF Integrated Forecast System) as well as atmospheric climate reanalysis activities and thus do not explicitly resolve atmospheric gravity waves (GWs). The GWs are generally too small to be represented in general circulation models, and their effects on the resolved scales need to be parameterized in order to account for fine-scale atmospheric inhomogeneities (for length scales less than 100 km). In the present approach, the sound speed profiles are considered as random functions, obtained by superimposing a stochastic GW field on the ECMWF reanalysis ERA-Interim. The spectral domain is binned by a large number of monochromatic GWs, and the breaking of each GW is treated independently from the others. The wave equation is solved using a reduced-order model, starting from the classical normal mode technique. We focus on the asymptotic behavior of the transmitted waves in the weakly heterogeneous regime (for which the coupling between the wave and the medium is weak), with a fixed number of propagating modes that can be obtained by rearranging the eigenvalues by decreasing Sobol indices. The most important feature of the stochastic approach lies in the fact that the model order (i.e. the number of relevant eigenvalues) can be computed to satisfy a given statistical accuracy whatever the frequency. As the low-order model preserves the overall structure of waveforms under sufficiently small perturbations of the profile, it can be applied to sensitivity analysis and uncertainty quantification. The gain in CPU cost provided by the low-order model is essential for extracting statistical information from simulations.
The statistics of a transmitted broadband pulse are computed by decomposing the original pulse into a sum of modal pulses that propagate with different phase speeds and can be described by a front pulse stabilization theory. The method is illustrated on two large-scale infrasound calibration experiments, that were conducted at the Sayarim Military Range, Israel, in 2009 and 2011.
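The first step of the chain, generating random sound-speed profiles by superimposing monochromatic gravity-wave perturbations on a background, can be sketched simply. The amplitudes, wavenumber range, and background profile below are invented for illustration and are not the ERA-Interim-based parameterization used in the study:

```python
import numpy as np

def perturbed_profile(z, c0, n_waves=50, amp=5.0, seed=0):
    """Superimpose n_waves monochromatic GW-like perturbations (hypothetical
    amplitudes and vertical wavenumbers) on a background sound speed c0(z)."""
    rng = np.random.default_rng(seed)
    k = rng.uniform(2 * np.pi / 50e3, 2 * np.pi / 2e3, n_waves)  # 2-50 km
    phase = rng.uniform(0.0, 2 * np.pi, n_waves)
    a = amp * rng.random(n_waves) / np.sqrt(n_waves)   # keep total bounded
    dc = sum(ai * np.cos(ki * z + pi_) for ai, ki, pi_ in zip(a, k, phase))
    return c0(z) + dc

z = np.linspace(0.0, 120e3, 1201)                      # altitude grid [m]
c0 = lambda z: 340.0 - 40.0 * np.sin(np.pi * z / 120e3)  # toy background
c = perturbed_profile(z, c0)
```

Each random profile then defines one realization of the normal-mode problem; the reduced-order model keeps only the modes that dominate the Sobol sensitivity indices across such realizations.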

  9. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi-steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
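The proposed test (compare the full deterministic model against its QSSA reduction over a range of initial conditions) is cheap to sketch for the Michaelis-Menten motif. The rate constants below are arbitrary illustrative values, and simple forward-Euler integration stands in for a proper ODE solver:

```python
import numpy as np

def full_model(s0, e0, k1, km1, k2, dt=1e-3, t_end=50.0):
    """Euler integration of the full mass-action model E + S <-> C -> E + P."""
    s, c = s0, 0.0
    for _ in range(int(t_end / dt)):
        e = e0 - c                       # enzyme conservation
        ds = -k1 * e * s + km1 * c
        dc = k1 * e * s - (km1 + k2) * c
        s, c = s + ds * dt, c + dc * dt
    return s

def qssa_model(s0, e0, k1, km1, k2, dt=1e-3, t_end=50.0):
    """Reduced (QSSA) model with the Michaelis-Menten non-elementary rate:
    ds/dt = -k2 e0 s / (Km + s)."""
    km = (km1 + k2) / k1
    s = s0
    for _ in range(int(t_end / dt)):
        s += -k2 * e0 * s / (km + s) * dt
    return s

# Deterministic QSSA accuracy over a range of initial conditions, used as a
# proxy for the validity of the corresponding stochastic reduction
errs = [abs(full_model(s0, 1.0, 10.0, 9.0, 1.0)
            - qssa_model(s0, 1.0, 10.0, 9.0, 1.0))
        for s0 in (5.0, 10.0, 20.0)]
```

If the discrepancy stays small across initial conditions spanning the likely fluctuations around the QSS, the paper's conjecture suggests the Michaelis-Menten propensity can also be trusted in a stochastic simulation.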

  10. Bias, belief, and consensus: Collective opinion formation on fluctuating networks

    NASA Astrophysics Data System (ADS)

    Ngampruetikorn, Vudtiwat; Stephens, Greg J.

    2016-11-01

    With the advent of online networks, societies have become substantially more interconnected with individual members able to easily both maintain and modify their own social links. Here, we show that active network maintenance exposes agents to confirmation bias, the tendency to confirm one's beliefs, and we explore how this bias affects collective opinion formation. We introduce a model of binary opinion dynamics on a complex, fluctuating network with stochastic rewiring and we analyze these dynamics in the mean-field limit of large networks and fast link rewiring. We show that confirmation bias induces a segregation of individuals with different opinions and stabilizes the consensus state. We further show that bias can have an unusual, nonmonotonic effect on the time to consensus and this suggests a novel avenue for large-scale opinion manipulation.
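The model class described here, binary opinions on a fluctuating network where agents may rewire away from disagreement, can be sketched with a simple agent-based loop. This is a crude stand-in for the paper's mean-field analysis: the rewiring rule, the initial ring topology, and all parameters are assumptions for illustration only:

```python
import numpy as np

def opinion_dynamics(n=60, q=0.5, steps=200000, seed=3):
    """Voter-style dynamics with stochastic rewiring: on meeting a discordant
    neighbour, an agent rewires with probability q (confirmation bias) instead
    of engaging; otherwise it adopts the neighbour's opinion."""
    rng = np.random.default_rng(seed)
    op = rng.choice([-1, 1], n)
    nbr = [{(i - 1) % n, (i + 1) % n} for i in range(n)]   # ring to start
    for step in range(steps):
        i = int(rng.integers(n))
        if not nbr[i]:
            continue
        j = sorted(nbr[i])[rng.integers(len(nbr[i]))]
        if op[i] != op[j]:
            if rng.random() < q:                   # biased: cut and rewire
                nbr[i].discard(j); nbr[j].discard(i)
                k = int(rng.integers(n))
                if k != i:
                    nbr[i].add(k); nbr[k].add(i)
            else:                                  # social influence: adopt
                op[i] = op[j]
        if step % 1000 == 0 and np.all(op == op[0]):
            break                                  # consensus reached
    return op, nbr

op, nbr = opinion_dynamics()
```

Depending on q, runs either reach consensus or freeze into segregated like-minded clusters, which is the qualitative dichotomy the mean-field analysis makes precise.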

  11. Bias, belief, and consensus: Collective opinion formation on fluctuating networks.

    PubMed

    Ngampruetikorn, Vudtiwat; Stephens, Greg J

    2016-11-01

    With the advent of online networks, societies have become substantially more interconnected with individual members able to easily both maintain and modify their own social links. Here, we show that active network maintenance exposes agents to confirmation bias, the tendency to confirm one's beliefs, and we explore how this bias affects collective opinion formation. We introduce a model of binary opinion dynamics on a complex, fluctuating network with stochastic rewiring and we analyze these dynamics in the mean-field limit of large networks and fast link rewiring. We show that confirmation bias induces a segregation of individuals with different opinions and stabilizes the consensus state. We further show that bias can have an unusual, nonmonotonic effect on the time to consensus and this suggests a novel avenue for large-scale opinion manipulation.

  12. An upper limit on the stochastic gravitational-wave background of cosmological origin.

    PubMed

    Abbott, B P; Abbott, R; Acernese, F; Adhikari, R; Ajith, P; Allen, B; Allen, G; Alshourbagy, M; Amin, R S; Anderson, S B; Anderson, W G; Antonucci, F; Aoudia, S; Arain, M A; Araya, M; Armandula, H; Armor, P; Arun, K G; Aso, Y; Aston, S; Astone, P; Aufmuth, P; Aulbert, C; Babak, S; Baker, P; Ballardin, G; Ballmer, S; Barker, C; Barker, D; Barone, F; Barr, B; Barriga, P; Barsotti, L; Barsuglia, M; Barton, M A; Bartos, I; Bassiri, R; Bastarrika, M; Bauer, Th S; Behnke, B; Beker, M; Benacquista, M; Betzwieser, J; Beyersdorf, P T; Bigotta, S; Bilenko, I A; Billingsley, G; Birindelli, S; Biswas, R; Bizouard, M A; Black, E; Blackburn, J K; Blackburn, L; Blair, D; Bland, B; Boccara, C; Bodiya, T P; Bogue, L; Bondu, F; Bonelli, L; Bork, R; Boschi, V; Bose, S; Bosi, L; Braccini, S; Bradaschia, C; Brady, P R; Braginsky, V B; Brand, J F J van den; Brau, J E; Bridges, D O; Brillet, A; Brinkmann, M; Brisson, V; Van Den Broeck, C; Brooks, A F; Brown, D A; Brummit, A; Brunet, G; Bullington, A; Bulten, H J; Buonanno, A; Burmeister, O; Buskulic, D; Byer, R L; Cadonati, L; Cagnoli, G; Calloni, E; Camp, J B; Campagna, E; Cannizzo, J; Cannon, K C; Canuel, B; Cao, J; Carbognani, F; Cardenas, L; Caride, S; Castaldi, G; Caudill, S; Cavaglià, M; Cavalier, F; Cavalieri, R; Cella, G; Cepeda, C; Cesarini, E; Chalermsongsak, T; Chalkley, E; Charlton, P; Chassande-Mottin, E; Chatterji, S; Chelkowski, S; Chen, Y; Christensen, N; Chung, C T Y; Clark, D; Clark, J; Clayton, J H; Cleva, F; Coccia, E; Cokelaer, T; Colacino, C N; Colas, J; Colla, A; Colombini, M; Conte, R; Cook, D; Corbitt, T R C; Corda, C; Cornish, N; Corsi, A; Coulon, J-P; Coward, D; Coyne, D C; Creighton, J D E; Creighton, T D; Cruise, A M; Culter, R M; Cumming, A; Cunningham, L; Cuoco, E; Danilishin, S L; D'Antonio, S; Danzmann, K; Dari, A; Dattilo, V; Daudert, B; Davier, M; Davies, G; Daw, E J; Day, R; De Rosa, R; Debra, D; Degallaix, J; Del Prete, M; Dergachev, V; Desai, S; Desalvo, R; Dhurandhar, S; Di Fiore, L; Di Lieto, 
A; Di Paolo Emilio, M; Di Virgilio, A; Díaz, M; Dietz, A; Donovan, F; Dooley, K L; Doomes, E E; Drago, M; Drever, R W P; Dueck, J; Duke, I; Dumas, J-C; Dwyer, J G; Echols, C; Edgar, M; Effler, A; Ehrens, P; Ely, G; Espinoza, E; Etzel, T; Evans, M; Evans, T; Fafone, V; Fairhurst, S; Faltas, Y; Fan, Y; Fazi, D; Fehrmann, H; Ferrante, I; Fidecaro, F; Finn, L S; Fiori, I; Flaminio, R; Flasch, K; Foley, S; Forrest, C; Fotopoulos, N; Fournier, J-D; Franc, J; Franzen, A; Frasca, S; Frasconi, F; Frede, M; Frei, M; Frei, Z; Freise, A; Frey, R; Fricke, T; Fritschel, P; Frolov, V V; Fyffe, M; Galdi, V; Gammaitoni, L; Garofoli, J A; Garufi, F; Genin, E; Gennai, A; Gholami, I; Giaime, J A; Giampanis, S; Giardina, K D; Giazotto, A; Goda, K; Goetz, E; Goggin, L M; González, G; Gorodetsky, M L; Gobler, S; Gouaty, R; Granata, M; Granata, V; Grant, A; Gras, S; Gray, C; Gray, M; Greenhalgh, R J S; Gretarsson, A M; Greverie, C; Grimaldi, F; Grosso, R; Grote, H; Grunewald, S; Guenther, M; Guidi, G; Gustafson, E K; Gustafson, R; Hage, B; Hallam, J M; Hammer, D; Hammond, G D; Hanna, C; Hanson, J; Harms, J; Harry, G M; Harry, I W; Harstad, E D; Haughian, K; Hayama, K; Heefner, J; Heitmann, H; Hello, P; Heng, I S; Heptonstall, A; Hewitson, M; Hild, S; Hirose, E; Hoak, D; Hodge, K A; Holt, K; Hosken, D J; Hough, J; Hoyland, D; Huet, D; Hughey, B; Huttner, S H; Ingram, D R; Isogai, T; Ito, M; Ivanov, A; Johnson, B; Johnson, W W; Jones, D I; Jones, G; Jones, R; Sancho de la Jordana, L; Ju, L; Kalmus, P; Kalogera, V; Kandhasamy, S; Kanner, J; Kasprzyk, D; Katsavounidis, E; Kawabe, K; Kawamura, S; Kawazoe, F; Kells, W; Keppel, D G; Khalaidovski, A; Khalili, F Y; Khan, R; Khazanov, E; King, P; Kissel, J S; Klimenko, S; Kokeyama, K; Kondrashov, V; Kopparapu, R; Koranda, S; Kozak, D; Krishnan, B; Kumar, R; Kwee, P; La Penna, P; Lam, P K; Landry, M; Lantz, B; Laval, M; Lazzarini, A; Lei, H; Lei, M; Leindecker, N; Leonor, I; Leroy, N; Letendre, N; Li, C; Lin, H; Lindquist, P E; Littenberg, T B; 
Lockerbie, N A; Lodhia, D; Longo, M; Lorenzini, M; Loriette, V; Lormand, M; Losurdo, G; Lu, P; Lubinski, M; Lucianetti, A; Lück, H; Machenschalk, B; Macinnis, M; Mackowski, J-M; Mageswaran, M; Mailand, K; Majorana, E; Man, N; Mandel, I; Mandic, V; Mantovani, M; Marchesoni, F; Marion, F; Márka, S; Márka, Z; Markosyan, A; Markowitz, J; Maros, E; Marque, J; Martelli, F; Martin, I W; Martin, R M; Marx, J N; Mason, K; Masserot, A; Matichard, F; Matone, L; Matzner, R A; Mavalvala, N; McCarthy, R; McClelland, D E; McGuire, S C; McHugh, M; McIntyre, G; McKechan, D J A; McKenzie, K; Mehmet, M; Melatos, A; Melissinos, A C; Mendell, G; Menéndez, D F; Menzinger, F; Mercer, R A; Meshkov, S; Messenger, C; Meyer, M S; Michel, C; Milano, L; Miller, J; Minelli, J; Minenkov, Y; Mino, Y; Mitrofanov, V P; Mitselmakher, G; Mittleman, R; Miyakawa, O; Moe, B; Mohan, M; Mohanty, S D; Mohapatra, S R P; Moreau, J; Moreno, G; Morgado, N; Morgia, A; Morioka, T; Mors, K; Mosca, S; Mossavi, K; Mours, B; Mowlowry, C; Mueller, G; Muhammad, D; Mühlen, H Zur; Mukherjee, S; Mukhopadhyay, H; Mullavey, A; Müller-Ebhardt, H; Munch, J; Murray, P G; Myers, E; Myers, J; Nash, T; Nelson, J; Neri, I; Newton, G; Nishizawa, A; Nocera, F; Numata, K; Ochsner, E; O'Dell, J; Ogin, G H; O'Reilly, B; O'Shaughnessy, R; Ottaway, D J; Ottens, R S; Overmier, H; Owen, B J; Pagliaroli, G; Palomba, C; Pan, Y; Pankow, C; Paoletti, F; Papa, M A; Parameshwaraiah, V; Pardi, S; Pasqualetti, A; Passaquieti, R; Passuello, D; Patel, P; Pedraza, M; Penn, S; Perreca, A; Persichetti, G; Pichot, M; Piergiovanni, F; Pierro, V; Pinard, L; Pinto, I M; Pitkin, M; Pletsch, H J; Plissi, M V; Poggiani, R; Postiglione, F; Principe, M; Prix, R; Prodi, G A; Prokhorov, L; Punken, O; Punturo, M; Puppo, P; Putten, S van der; Quetschke, V; Raab, F J; Rabaste, O; Rabeling, D S; Radkins, H; Raffai, P; Raics, Z; Rainer, N; Rakhmanov, M; Rapagnani, P; Raymond, V; Re, V; Reed, C M; Reed, T; Regimbau, T; Rehbein, H; Reid, S; Reitze, D H; Ricci, F; 
Riesen, R; Riles, K; Rivera, B; Roberts, P; Robertson, N A; Robinet, F; Robinson, C; Robinson, E L; Rocchi, A; Roddy, S; Rolland, L; Rollins, J; Romano, J D; Romano, R; Romie, J H; Röver, C; Rowan, S; Rüdiger, A; Ruggi, P; Russell, P; Ryan, K; Sakata, S; Salemi, F; Sandberg, V; Sannibale, V; Santamaría, L; Saraf, S; Sarin, P; Sassolas, B; Sathyaprakash, B S; Sato, S; Satterthwaite, M; Saulson, P R; Savage, R; Savov, P; Scanlan, M; Schilling, R; Schnabel, R; Schofield, R; Schulz, B; Schutz, B F; Schwinberg, P; Scott, J; Scott, S M; Searle, A C; Sears, B; Seifert, F; Sellers, D; Sengupta, A S; Sentenac, D; Sergeev, A; Shapiro, B; Shawhan, P; Shoemaker, D H; Sibley, A; Siemens, X; Sigg, D; Sinha, S; Sintes, A M; Slagmolen, B J J; Slutsky, J; van der Sluys, M V; Smith, J R; Smith, M R; Smith, N D; Somiya, K; Sorazu, B; Stein, A; Stein, L C; Steplewski, S; Stochino, A; Stone, R; Strain, K A; Strigin, S; Stroeer, A; Sturani, R; Stuver, A L; Summerscales, T Z; Sun, K-X; Sung, M; Sutton, P J; Swinkels, B L; Szokoly, G P; Talukder, D; Tang, L; Tanner, D B; Tarabrin, S P; Taylor, J R; Taylor, R; Terenzi, R; Thacker, J; Thorne, K A; Thorne, K S; Thüring, A; Tokmakov, K V; Toncelli, A; Tonelli, M; Torres, C; Torrie, C; Tournefier, E; Travasso, F; Traylor, G; Trias, M; Trummer, J; Ugolini, D; Ulmen, J; Urbanek, K; Vahlbruch, H; Vajente, G; Vallisneri, M; Vass, S; Vaulin, R; Vavoulidis, M; Vecchio, A; Vedovato, G; van Veggel, A A; Veitch, J; Veitch, P; Veltkamp, C; Verkindt, D; Vetrano, F; Viceré, A; Villar, A; Vinet, J-Y; Vocca, H; Vorvick, C; Vyachanin, S P; Waldman, S J; Wallace, L; Ward, H; Ward, R L; Was, M; Weidner, A; Weinert, M; Weinstein, A J; Weiss, R; Wen, L; Wen, S; Wette, K; Whelan, J T; Whitcomb, S E; Whiting, B F; Wilkinson, C; Willems, P A; Williams, H R; Williams, L; Willke, B; Wilmut, I; Winkelmann, L; Winkler, W; Wipf, C C; Wiseman, A G; Woan, G; Wooley, R; Worden, J; Wu, W; Yakushin, I; Yamamoto, H; Yan, Z; Yoshida, S; Yvert, M; Zanolin, M; Zhang, J; Zhang, 
L; Zhao, C; Zotov, N; Zucker, M E; Zweizig, J

    2009-08-20

    A stochastic background of gravitational waves is expected to arise from a superposition of a large number of unresolved gravitational-wave sources of astrophysical and cosmological origin. It should carry unique signatures from the earliest epochs in the evolution of the Universe, inaccessible to standard astrophysical observations. Direct measurements of the amplitude of this background are therefore of fundamental importance for understanding the evolution of the Universe when it was younger than one minute. Here we report limits on the amplitude of the stochastic gravitational-wave background using the data from a two-year science run of the Laser Interferometer Gravitational-wave Observatory (LIGO). Our result constrains the energy density of the stochastic gravitational-wave background normalized by the critical energy density of the Universe, in the frequency band around 100 Hz, to be <6.9 × 10⁻⁶ at 95% confidence. The data rule out models of early Universe evolution with relatively large equation-of-state parameter, as well as cosmic (super)string models with relatively small string tension that are favoured in some string theory models. This search for the stochastic background improves on the indirect limits from Big Bang nucleosynthesis and cosmic microwave background at 100 Hz.

  13. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 1: theoretical development

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The Saint-Venant equations are commonly used as the governing equations for modeling spatially varied unsteady flow in open channels. The presence of uncertainties in the channel or flow parameters renders these equations stochastic, thus requiring their solution in a stochastic framework in order to quantify the ensemble behavior and the variability of the process. While the Monte Carlo approach can be used for such a solution, its computational expense and its large number of simulations act to its disadvantage. This study proposes, explains, and derives a new methodology for solving the stochastic Saint-Venant equations in only one shot, without the need for a large number of simulations. The proposed methodology is derived by developing the nonlocal Lagrangian-Eulerian Fokker-Planck equation of the characteristic form of the stochastic Saint-Venant equations for an open-channel flow process, with an uncertain roughness coefficient. A numerical method for its solution is subsequently devised. The application and validation of this methodology are provided in a companion paper, in which the statistical results computed by the proposed methodology are compared against the results obtained by the Monte Carlo approach.
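The Monte Carlo baseline that the one-shot Fokker-Planck methodology is compared against is easy to illustrate for the uncertain-roughness case. A toy sketch using Manning's equation for a single cross-section (hypothetical hydraulic radius, slope, and roughness range; the real problem propagates uncertainty through the full unsteady Saint-Venant dynamics):

```python
import numpy as np

def manning_velocity(n_manning, R=2.0, S=1e-3):
    """Manning's equation V = R**(2/3) * sqrt(S) / n (SI units)."""
    return R ** (2 / 3) * np.sqrt(S) / n_manning

rng = np.random.default_rng(4)
n_samples = rng.uniform(0.02, 0.05, 10000)   # uncertain roughness coefficient
v = manning_velocity(n_samples)              # ensemble of flow velocities
mean_v, std_v = v.mean(), v.std()            # ensemble statistics
```

Each Monte Carlo sample would otherwise require a full unsteady simulation; the Fokker-Planck approach evolves the probability density of the flow variables directly, replacing the whole ensemble with one deterministic solve.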

  14. Impacts of a Stochastic Ice Mass-Size Relationship on Squall Line Ensemble Simulations

    NASA Astrophysics Data System (ADS)

    Stanford, M.; Varble, A.; Morrison, H.; Grabowski, W.; McFarquhar, G. M.; Wu, W.

    2017-12-01

    Cloud and precipitation structure, evolution, and cloud radiative forcing of simulated mesoscale convective systems (MCSs) are significantly impacted by ice microphysics parameterizations. Most microphysics schemes assume power law relationships with constant parameters for ice particle mass, area, and terminal fallspeed relationships as a function of size, despite observations showing that these relationships vary in both time and space. To account for such natural variability, a stochastic representation of ice microphysical parameters was developed using the Predicted Particle Properties (P3) microphysics scheme in the Weather Research and Forecasting model, guided by in situ aircraft measurements from a number of field campaigns. Here, the stochastic framework is applied to the "a" and "b" parameters of the unrimed ice mass-size (m-D) relationship (m = aD^b) with co-varying "a" and "b" values constrained by observational distributions tested over a range of spatiotemporal autocorrelation scales. Diagnostically altering a-b pairs in three-dimensional (3D) simulations of the 20 May 2011 Midlatitude Continental Convective Clouds Experiment (MC3E) squall line suggests that these parameters impact many important characteristics of the simulated squall line, including reflectivity structure (particularly in the anvil region), surface rain rates, surface and top of atmosphere radiative fluxes, buoyancy and latent cooling distributions, and system propagation speed. The stochastic a-b P3 scheme is tested using two frameworks: (1) a large ensemble of two-dimensional idealized squall line simulations and (2) a smaller ensemble of 3D simulations of the 20 May 2011 squall line, for which simulations are evaluated using observed radar reflectivity and radial velocity at multiple wavelengths, surface meteorology, and surface and satellite measured longwave and shortwave radiative fluxes.
Ensemble spreads are characterized and compared against initial condition ensemble spreads for a range of variables.
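Generating co-varying "a" and "b" values with a controllable autocorrelation scale can be sketched as a bivariate AR(1) process. Every number below is an illustrative placeholder, not an observed mass-size value, and the AR(1) form is an assumed stand-in for the scheme's actual spatiotemporal correlation model:

```python
import numpy as np

def stochastic_ab(n, a_mean=0.01, b_mean=1.9, a_sd=0.002, b_sd=0.15,
                  corr=0.8, phi=0.95, seed=5):
    """AR(1) sequences of co-varying (a, b) perturbations for m = a * D**b.
    corr couples the two parameters; phi sets the autocorrelation scale."""
    rng = np.random.default_rng(seed)
    cov = [[a_sd ** 2, corr * a_sd * b_sd],
           [corr * a_sd * b_sd, b_sd ** 2]]
    innov = rng.multivariate_normal([0.0, 0.0], cov, n)
    x = np.zeros((n, 2))
    for t in range(1, n):
        # scaling by sqrt(1 - phi^2) keeps the marginal variance fixed
        x[t] = phi * x[t - 1] + np.sqrt(1 - phi ** 2) * innov[t]
    return a_mean + x[:, 0], b_mean + x[:, 1]

a, b = stochastic_ab(1000)
mass = a * 0.001 ** b    # mass of a 1 mm particle under each sampled pair
```

Sweeping phi (and its spatial analogue) is the ensemble experiment described above: short correlation scales wash out in the system-scale response, while long scales behave like per-member constant-parameter perturbations.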

  15. Machine learning from computer simulations with applications in rail vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Taheri, Mehdi; Ahmadian, Mehdi

    2016-05-01

    The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processing data from a simulation run are used to train the stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate its behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or with models that have a large number of repeated substructures, e.g. modelling a train with a large number of railcars. The fact that the training data are acquired prior to the development of the stochastic model precludes conventional sampling plan strategies such as Latin hypercube designs, where simulations are performed using the inputs dictated by the sampling plan. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed in which the most space-filling subset of the acquired data, with a given number of sample points, that best describes the dynamic behaviour of the system under study is selected as the training data.
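Selecting a space-filling subset from already-acquired simulation data is commonly done with a greedy maximin rule. The abstract does not specify the exact criterion used, so the sketch below is an assumed variant: start from a random point, then repeatedly add the point farthest from the current selection:

```python
import numpy as np

def maximin_subset(X, m, seed=6):
    """Greedy maximin design: pick m rows of X that are mutually well
    separated, i.e. a space-filling subset of pre-existing data."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]
    d = np.linalg.norm(X - X[idx[0]], axis=1)   # distance to chosen set
    for _ in range(m - 1):
        j = int(np.argmax(d))                   # farthest remaining point
        idx.append(j)
        d = np.minimum(d, np.linalg.norm(X - X[j], axis=1))
    return np.array(idx)

rng = np.random.default_rng(7)
X = rng.random((500, 2))    # e.g. (relative displacement, velocity) samples
sel = maximin_subset(X, 20)
```

Unlike a Latin hypercube design, this works after the fact: it never dictates the simulation inputs, only filters the trajectory data the simulation happened to produce.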

  16. Hybrid stochastic simplifications for multiscale gene networks

    PubMed Central

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-01-01

    Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554
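The core idea of a hybrid simplification, keeping rare low-copy events as discrete jumps while treating abundant species continuously, can be sketched on a two-state gene. Here the promoter switches by exact exponential jumps and the protein is integrated by its drift between jumps; all rate constants are hypothetical, and a Langevin noise term (the partial Kramers-Moyal correction) is omitted for brevity:

```python
import numpy as np

def hybrid_gene(t_end=200.0, kon=0.05, koff=0.05, ksyn=10.0, kdeg=0.1,
                dt=0.01, seed=8):
    """Hybrid sketch: the promoter (ON/OFF) remains a discrete jump process,
    while the abundant protein is integrated as a continuous variable
    between jumps."""
    rng = np.random.default_rng(seed)
    on, p, t = 0, 0.0, 0.0
    samples = []
    while t < t_end:
        tau = rng.exponential(1.0 / (koff if on else kon))  # next switch
        for _ in range(int(min(tau, t_end - t) / dt)):
            p += (ksyn * on - kdeg * p) * dt                # continuous part
        samples.append(p)          # protein level at each promoter switch
        t += tau
        on = 1 - on                # discrete jump: toggle promoter state
    return np.array(samples)

p_trace = hybrid_gene()
```

Compared with an exact SSA, which would simulate every one of the many synthesis and degradation events individually, the hybrid scheme performs only one jump per promoter switch, which is where the drastic reduction in simulated discrete events comes from.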

  17. Technical efficiency of teaching hospitals in Iran: the use of Stochastic Frontier Analysis, 1999–2011

    PubMed Central

    Goudarzi, Reza; Pourreza, Abolghasem; Shokoohi, Mostafa; Askari, Roohollah; Mahdavi, Mahdi; Moghri, Javad

    2014-01-01

    Background: Hospitals are highly resource-dependent settings, which spend a large proportion of healthcare financial resources. The analysis of hospital efficiency can provide insight into how scarce resources are used to create health values. This study examines the Technical Efficiency (TE) of 12 teaching hospitals affiliated with Tehran University of Medical Sciences (TUMS) between 1999 and 2011. Methods: The Stochastic Frontier Analysis (SFA) method was applied to estimate the efficiency of TUMS hospitals. A best function, referred to as output and input parameters, was calculated for the hospitals. Number of medical doctors, nurses, and other personnel, active beds, and outpatient admissions were considered as the input variables and number of inpatient admissions as an output variable. Results: The mean level of TE was 59% (ranging from 22 to 81%). During the study period the efficiency increased from 61 to 71%. Outpatient admission, other personnel and medical doctors significantly and positively affected the production (P < 0.05). The hospitals were found to operate close to constant returns to scale (CRS), implying approximately optimal production scales. Conclusion: Findings of this study show a remarkable waste of resources in the TUMS hospitals during the decade considered. This warrants that policy-makers and top management at TUMS consider steps to improve the financial management of the university hospitals. PMID:25114947

  18. Stochastic opinion formation in scale-free networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. Bartolozzi; D. B. Leinweber; A. W. Thomas

    2005-10-01

    The dynamics of opinion formation in large groups of people is a complex nonlinear phenomenon whose investigation is just beginning. Both collective behavior and personal views play an important role in this mechanism. In the present work we mimic the dynamics of opinion formation of a group of agents, represented by two states ±1, as a stochastic response of each agent to the opinion of his/her neighbors in the social network and to feedback from the average opinion of the whole. In the light of recent studies, a scale-free Barabási-Albert network has been selected to simulate the topology of the interactions. A turbulent-like dynamics, characterized by an intermittent behavior, is observed for a certain range of the model parameters. The problem of uncertainty in decision taking is also addressed, both from a topological point of view, using random and targeted removal of agents from the network, and by implementing a three-state model, where the third state, zero, is related to the information available to each agent. Finally, the results of the model are tested against the best known network of social interactions: the stock market. A time series of daily closures of the Dow-Jones index has been used as an indicator of the possible applicability of our model in the financial context. Good qualitative agreement is found.

  19. A deterministic and stochastic velocity model for the Salton Trough/Basin and Range transition zone and constraints on magmatism during rifting

    NASA Astrophysics Data System (ADS)

    Larkin, Steven P.; Levander, Alan; Okaya, David; Goff, John A.

    1996-12-01

    As a high resolution addition to the 1992 Pacific to Arizona Crustal Experiment (PACE), a 45-km-long deep crustal seismic reflection profile was acquired across the Chocolate Mountains in southeastern California to illuminate crustal structure in the transition between the Salton Trough and the Basin and Range province. The complex seismic data are analyzed for both large-scale (deterministic) and fine-scale (stochastic) crustal features. A low-fold near-offset common-midpoint (CMP) stacked section shows the northeastward lateral extent of a high-velocity lower crustal body which is centered beneath the Salton Trough. Off-end shots record a high-amplitude diffraction from the point where the high velocity lower crust pinches out at the Moho. Above the high-velocity lower crust, moderate-amplitude reflections occur at midcrustal levels. These reflections display the coherency and frequency characteristics of reflections backscattered from a heterogeneous velocity field, which we model as horizontal intrusions with a von Kármán (fractal) distribution. The effects of upper crustal scattering are included by combining the mapped surface geology and laboratory measurements of exposed rocks within the Chocolate Mountains to reproduce the upper crustal velocity heterogeneity in our crustal velocity model. Viscoelastic finite difference simulations indicate that the volume of mafic material within the reflective zone necessary to produce the observed backscatter is about 5%. The presence of wavelength-scale heterogeneity within the near-surface, upper, and middle crust also produces a 0.5-s-thick zone of discontinuous reflections from a crust-mantle interface which is actually a first-order discontinuity.
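    The fine-scale (stochastic) component of such a crustal model can be emulated numerically. Below is a minimal dependency-free sketch that synthesizes a 1-D random velocity-perturbation profile with a von Kármán-type power spectrum using the random-phase spectral method; the correlation length, Hurst-like exponent, and rms amplitude are illustrative assumptions, not values from this study.

```python
import cmath
import math
import random

def von_karman_field(n=1024, dx=10.0, corr_len=150.0, nu=0.5, sigma=0.05, seed=0):
    """Synthesize a 1-D random perturbation profile whose power spectrum follows
    a von Kármán form P(k) ~ (1 + k^2 a^2)^-(nu + 1/2), via random phases."""
    rng = random.Random(seed)
    spec = [0j] * n
    for j in range(1, n // 2):
        k = 2.0 * math.pi * j / (n * dx)
        amp = (1.0 + (k * corr_len) ** 2) ** (-(nu + 0.5) / 2.0)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        spec[j] = amp * cmath.exp(1j * phase)
        spec[n - j] = spec[j].conjugate()   # Hermitian symmetry -> real field
    # inverse DFT (naive O(n^2) to stay dependency-free; use an FFT in practice)
    field = []
    for m in range(n):
        s = sum(spec[j] * cmath.exp(2j * math.pi * j * m / n)
                for j in range(n) if spec[j] != 0).real
        field.append(s)
    # rescale to the target rms fluctuation (e.g. 5% velocity perturbation)
    rms = math.sqrt(sum(v * v for v in field) / n)
    return [sigma * v / rms for v in field]

profile = von_karman_field(n=256)
```

    Superposing several such fields with different correlation lengths, as in the study's combined deterministic/stochastic model, produces the heterogeneous media used in finite difference scattering simulations.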

  20. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    NASA Astrophysics Data System (ADS)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-09-01

    Textile composites have a complex 3D architecture. To assess the durability of such engineering structures, the failure mechanisms must be highlighted. Examinations of the degradation have been carried out using tomography. The present work addresses a numerical damage model dedicated to the simulation of crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and in combined tension and bending were considered. Based on an erosion procedure for broken elements, the failure mechanisms have been modelled on 3D periodic cells by finite element calculations. The breakage of an element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure over all the integration points in the meshes of the woven composites. A stochastic approach was then applied to a simple representative elementary periodic cell: a Weibull distribution of the stress at failure was assigned to the integration points using a Monte Carlo simulation. It was shown that this stochastic approach allows more realistic failure simulations, avoiding the idealised symmetry induced by the deterministic modelling. In particular, the stochastic simulations showed variability in the stress and strain at failure as well as in the failure modes of the yarns.

  1. A stochastic SIS epidemic model with vaccination

    NASA Astrophysics Data System (ADS)

    Cao, Boqiang; Shan, Meijing; Zhang, Qimin; Wang, Weiming

    2017-11-01

    In this paper, we investigate the basic features of an SIS-type infectious disease model with varying population size and vaccination in the presence of environmental noise. By applying the Markov semigroup theory, we propose a stochastic reproduction number R0s which can be seen as a threshold parameter for identifying stochastic extinction and persistence: if R0s < 1, under some mild extra conditions, there exists a disease-free absorbing set for the stochastic epidemic model, which implies that the disease dies out with probability one; while if R0s > 1, under some mild extra conditions, the SDE model has an endemic stationary distribution which results in the stochastic persistence of the infectious disease. The most interesting finding is that large environmental noise can suppress the outbreak of the disease.
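    The noise-controlled extinction/persistence dichotomy can be reproduced with a minimal Euler-Maruyama sketch of a fraction-infected stochastic SIS equation. This particular SDE form and all parameter values are illustrative assumptions, not the authors' model:

```python
import math
import random

def simulate_sis(beta=0.8, gamma=0.4, sigma=0.1, i0=0.3,
                 t_end=500.0, dt=0.01, seed=7):
    """Euler-Maruyama path of a fraction-infected stochastic SIS equation:
        dI = I*(beta*(1 - I) - gamma) dt + sigma*I*(1 - I) dW,
    with I clipped to [0, 1]. Returns the final infected fraction."""
    rng = random.Random(seed)
    i = i0
    for _ in range(int(t_end / dt)):
        drift = i * (beta * (1.0 - i) - gamma)
        diffusion = sigma * i * (1.0 - i)
        i += drift * dt + diffusion * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        i = min(max(i, 0.0), 1.0)   # keep the fraction in [0, 1]
    return i

# Deterministic reproduction number beta/gamma = 2 > 1, weak noise: persistence.
persistent = simulate_sis(sigma=0.1)
# Strong multiplicative noise drives the stochastic threshold below one: extinction.
extinct = simulate_sis(sigma=1.5)
```

    With weak noise the path fluctuates around the deterministic endemic level 1 - gamma/beta = 0.5, while large noise drives the infection to extinction, matching the abstract's observation that large environmental noise can suppress the outbreak.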

  2. Effects of stochastic time-delayed feedback on a dynamical system modeling a chemical oscillator.

    PubMed

    González Ochoa, Héctor O; Perales, Gualberto Solís; Epstein, Irving R; Femat, Ricardo

    2018-05-01

    We examine how stochastic time-delayed negative feedback affects the dynamical behavior of a model oscillatory reaction. We apply constant and stochastic time-delayed negative feedbacks to a point Field-Körös-Noyes photosensitive oscillator and compare their effects. Negative feedback is applied in the form of simulated inhibitory electromagnetic radiation with an intensity proportional to the concentration of oxidized light-sensitive catalyst in the oscillator. We first characterize the system under nondelayed inhibitory feedback; then we explore and compare the effects of constant (deterministic) versus stochastic time-delayed feedback. We find that the oscillatory amplitude, frequency, and waveform are essentially preserved when low-dispersion stochastic delayed feedback is used, whereas small but measurable changes appear when a large dispersion is applied.

  3. Effects of stochastic time-delayed feedback on a dynamical system modeling a chemical oscillator

    NASA Astrophysics Data System (ADS)

    González Ochoa, Héctor O.; Perales, Gualberto Solís; Epstein, Irving R.; Femat, Ricardo

    2018-05-01

    We examine how stochastic time-delayed negative feedback affects the dynamical behavior of a model oscillatory reaction. We apply constant and stochastic time-delayed negative feedbacks to a point Field-Körös-Noyes photosensitive oscillator and compare their effects. Negative feedback is applied in the form of simulated inhibitory electromagnetic radiation with an intensity proportional to the concentration of oxidized light-sensitive catalyst in the oscillator. We first characterize the system under nondelayed inhibitory feedback; then we explore and compare the effects of constant (deterministic) versus stochastic time-delayed feedback. We find that the oscillatory amplitude, frequency, and waveform are essentially preserved when low-dispersion stochastic delayed feedback is used, whereas small but measurable changes appear when a large dispersion is applied.

  4. Stochastic sensitivity analysis of the variability of dynamics and transition to chaos in the business cycles model

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana

    2018-01-01

    A problem of mathematical modeling of complex stochastic processes in macroeconomics is discussed. For the description of the dynamics of income and capital stock, the well-known Kaldor model of business cycles is used as a basic example. The aim of the paper is to give an overview of the variety of stochastic phenomena which occur in the Kaldor model forced by additive and parametric random noise. We study the generation of small- and large-amplitude stochastic oscillations, and their mixed-mode intermittency. To analyze these phenomena, we suggest a constructive approach combining the study of the peculiarities of the deterministic phase portrait with the stochastic sensitivity of attractors. We show how parametric noise can stabilize the unstable equilibrium and transform the dynamics of the Kaldor system from order to chaos.

  5. Stochastic modelling of intermittency.

    PubMed

    Stemler, Thomas; Werner, Johannes P; Benner, Hartmut; Just, Wolfram

    2010-01-13

    Recently, methods have been developed to model low-dimensional chaotic systems in terms of stochastic differential equations. We tested such methods in an electronic circuit experiment. We aimed to obtain reliable drift and diffusion coefficients even without a pronounced time-scale separation of the chaotic dynamics. By comparing the analytical solutions of the corresponding Fokker-Planck equation with experimental data, we show here that crisis-induced intermittency can be described in terms of a stochastic model which is dominated by state-space-dependent diffusion. Furthermore, we demonstrate and discuss some limits of these modelling approaches using numerical simulations. This enables us to state a criterion that can be used to decide whether a stochastic model will capture the essential features of a given time series.
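    The drift/diffusion reconstruction alluded to here can be sketched on a synthetic time series: simulate an Ornstein-Uhlenbeck process and recover its drift coefficient from binned conditional increments. This is a minimal stand-in for the experimental procedure, with illustrative parameters:

```python
import math
import random

def estimate_drift(theta=1.0, noise=0.5, dt=0.01, n=200000, seed=3):
    """Simulate an Ornstein-Uhlenbeck path dx = -theta*x dt + noise dW, then
    recover the drift from the binned conditional moment
    D1(x) = <x(t+dt) - x(t) | x(t) ~ x> / dt, whose slope should be ~ -theta."""
    rng = random.Random(seed)
    xs = [0.0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - theta * x * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0))
    sums, counts = {}, {}
    for x0, x1 in zip(xs, xs[1:]):
        b = round(x0, 1)                       # 0.1-wide state bins
        sums[b] = sums.get(b, 0.0) + (x1 - x0) / dt
        counts[b] = counts.get(b, 0) + 1
    # keep well-populated bins, then fit the slope of D1(x) by least squares
    pts = [(b, sums[b] / counts[b]) for b in sums if counts[b] > 500]
    mx = sum(px for px, _ in pts) / len(pts)
    my = sum(py for _, py in pts) / len(pts)
    return (sum((px - mx) * (py - my) for px, py in pts) /
            sum((px - mx) ** 2 for px, _ in pts))

slope = estimate_drift()   # should recover approximately -theta = -1
```

    The same binning applied to the squared increments gives the state-dependent diffusion coefficient D2(x), which is the quantity the abstract identifies as dominating crisis-induced intermittency.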

  6. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditional on the previous model state during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.

  7. A probabilistic approach to quantifying spatial patterns of flow regimes and network-scale connectivity

    NASA Astrophysics Data System (ADS)

    Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca

    2017-04-01

    The temporal variability of river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams with notable implication for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent and the structure of flow regimes may affect ecological functions of endemic biota (i.e., fish spawning or grazing of invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially-explicit, quantitative assessment of hydrologic connectivity at the network-scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches, and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson Process for the stochastic generation of rainfall is used to evaluate the impact of climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). 
The framework offers a robust basis for the prediction of the impact of land-use/land-cover changes and river regulation on network-scale connectivity.

  8. Nature and origin of upper crustal seismic velocity fluctuations and associated scaling properties: Combined stochastic analyses of KTB velocity and lithology logs

    USGS Publications Warehouse

    Goff, J.A.; Holliger, K.

    1999-01-01

    The main borehole of the German Continental Deep Drilling Program (KTB) extends over 9000 m into a crystalline upper crust consisting primarily of interlayered gneiss and metabasite. We present a joint analysis of the velocity and lithology logs in an effort to extract the lithology component of the velocity log. Covariance analysis of the lithology log, approximated as a binary series, indicates that it may originate from the superposition of two Brownian stochastic processes (fractal dimension 1.5) with characteristic scales of ~2800 m and ~150 m, respectively. Covariance analysis of the velocity fluctuations provides evidence for the superposition of four stochastic processes with distinct characteristic scales. The largest two scales are identical to those derived from the lithology, confirming that these scales of velocity heterogeneity are caused by lithology variations. The third characteristic scale, ~20 m, also a Brownian process, is probably related to fracturing, based on correlation with the resistivity log. The superposition of these three Brownian processes closely mimics the commonly observed 1/k decay (fractal dimension 2.0) of the velocity power spectrum. The smallest-scale process (characteristic scale ~1.7 m) requires a low fractal dimension, ~1.0, and accounts for ~60% of the total rms velocity variation. A comparison of successive logs from 6900-7140 m depth indicates that such variations are not repeatable and thus probably do not represent true velocity variations in the crust. The results of this study resolve the disparity between differing published estimates of seismic heterogeneity based on the KTB sonic logs, and bridge the gap between estimates of crustal heterogeneity from geologic maps and borehole logs. Copyright 1999 by the American Geophysical Union.

  9. Renormalizing a viscous fluid model for large scale structure formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Führer, Florian; Rigopoulos, Gerasimos, E-mail: fuhrer@thphys.uni-heidelberg.de, E-mail: gerasimos.rigopoulos@ncl.ac.uk

    2016-02-01

    Using the Stochastic Adhesion Model (SAM) as a simple toy model for cosmic structure formation, we study renormalization and the removal of the cutoff dependence from loop integrals in perturbative calculations. SAM shares the same symmetry with the full system of continuity+Euler equations and includes a viscosity term and a stochastic noise term, similar to the effective theories recently put forward to model CDM clustering. We show in this context that if the viscosity and noise terms are treated as perturbative corrections to standard Eulerian perturbation theory, they are necessarily non-local in time. To ensure Galilean invariance, higher order vertices related to the viscosity and the noise must then be added, and we explicitly show at one loop that these terms act as counter terms for vertex diagrams. The Ward identities ensure that the non-local-in-time theory can be renormalized consistently. Another possibility is to include the viscosity in the linear propagator, resulting in exponential damping at high wavenumber. The resulting local-in-time theory is then renormalizable to one loop, requiring fewer free parameters for its renormalization.

  10. Critical Gradient Behavior of Alfvén Eigenmode Induced Fast-Ion Transport in Phase Space

    NASA Astrophysics Data System (ADS)

    Collins, C. S.; Pace, D. C.; van Zeeland, M. A.; Heidbrink, W. W.; Stagner, L.; Zhu, Y. B.; Kramer, G. J.; Podesta, M.; White, R. B.

    2016-10-01

    Experiments on DIII-D have shown that energetic particle (EP) transport suddenly increases when multiple Alfvén eigenmodes (AEs) cause particle orbits to become stochastic. Several key features have been observed: (1) the transport threshold is phase-space dependent and occurs above the AE linear stability threshold, (2) EP losses become intermittent above threshold and appear to depend on the types of AEs present, and (3) stiff transport causes the EP density profile to remain unchanged even if the source increases. Theoretical analysis using the NOVA and ORBIT codes shows that the threshold corresponds to when particle orbits become stochastic due to wave-particle resonances with AEs in the region of phase space measured by the diagnostics. The kick model in NUBEAM (TRANSP) is used to evolve the EP distribution function to study which modes cause the most transport and to further characterize intermittent bursts of EP losses, which are associated with large-scale redistribution through the domino effect. Work supported by the US DOE under DE-FC02-04ER54698.

  11. Extreme Quantum Memory Advantage for Rare-Event Sampling

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit a memory advantage for sampling of almost all of their rare-event classes.

  12. Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments

    NASA Astrophysics Data System (ADS)

    Munsky, Brian; Shepherd, Douglas

    2014-03-01

    Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high-precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.

  13. Turbulent mixing noise from supersonic jets

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Chen, Ping

    1994-01-01

    There is now a substantial body of theoretical and experimental evidence that the dominant part of the turbulent noise of supersonic jets is generated directly by the large turbulence structures/instability waves of the jet flow. Earlier, Tam and Burton provided a description of the physical mechanism by which supersonically traveling instability waves can generate sound efficiently. They used the method of matched asymptotic expansions to construct an instability wave solution which is valid in the far field. The present work is an extension of the theory of Tam and Burton. It is argued that the instability wave spectrum of the jet may be regarded as generated by stochastic white noise excitation at the nozzle lip region. The reason why the excitation has white noise characteristics is that near the nozzle lip region the flow in the jet mixing layer has no intrinsic length and time scales. The present stochastic wave model theory of supersonic jet noise contains a single unknown multiplicative constant. Comparisons between the calculated noise directivities at selected Strouhal numbers and experimental measurements of a Mach 2 jet at different jet temperatures have been carried out. Favorable agreements are found.

  14. A conditional stochastic weather generator for seasonal to multi-decadal simulations

    NASA Astrophysics Data System (ADS)

    Verdin, Andrew; Rajagopalan, Balaji; Kleiber, William; Podestá, Guillermo; Bert, Federico

    2018-01-01

    We present the application of a parametric stochastic weather generator within a nonstationary context, enabling simulations of weather sequences conditioned on interannual and multi-decadal trends. The generalized linear model framework of the weather generator allows any number of covariates to be included, such as large-scale climate indices, local climate information, seasonal precipitation and temperature, among others. Here we focus on the Salado A basin of the Argentine Pampas as a case study, but the methodology is portable to any region. We include domain-averaged (e.g., areal) seasonal total precipitation and mean maximum and minimum temperatures as covariates for conditional simulation. Areal covariates are motivated by a principal component analysis that indicates the seasonal spatial average is the dominant mode of variability across the domain. We find this modification to be effective in capturing the nonstationarity prevalent in interseasonal precipitation and temperature data. We further illustrate the ability of this weather generator to act as a spatiotemporal downscaler of seasonal forecasts and multidecadal projections, both of which are generally of coarse resolution.
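    A minimal sketch of the generalized-linear-model idea described above: the occurrence probability and mean intensity of daily precipitation are conditioned on a seasonal-total covariate through logistic and log links. The coefficients, the standardization constants, and the exponential wet-day amount model are illustrative assumptions, not fitted values from the Salado A case study:

```python
import math
import random

def simulate_season(n_days=90, seasonal_total_mm=300.0, seed=11):
    """GLM-style conditional weather generator for daily precipitation:
    a logistic link sets the wet-day probability and a log link sets the
    mean wet-day amount, both driven by a seasonal-total covariate."""
    rng = random.Random(seed)
    z = (seasonal_total_mm - 250.0) / 100.0              # standardized covariate
    p_wet = 1.0 / (1.0 + math.exp(-(-0.8 + 0.6 * z)))    # logistic link
    mean_mm = math.exp(1.5 + 0.4 * z)                    # log link for intensity
    series = []
    for _ in range(n_days):
        if rng.random() < p_wet:
            series.append(rng.expovariate(1.0 / mean_mm))  # wet-day amount (mm)
        else:
            series.append(0.0)
    return series

wet_season = simulate_season(seasonal_total_mm=400.0)
dry_season = simulate_season(seasonal_total_mm=150.0)
```

    Because any covariate can enter the linear predictors, the same construction accepts large-scale climate indices or multi-decadal trend terms, which is what lets the generator act as a downscaler of seasonal forecasts and projections.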

  15. FAST: a framework for simulation and analysis of large-scale protein-silicon biosensor circuits.

    PubMed

    Gu, Ming; Chakrabartty, Shantanu

    2013-08-01

    This paper presents a computer aided design (CAD) framework for verification and reliability analysis of protein-silicon hybrid circuits used in biosensors. It is envisioned that similar to integrated circuit (IC) CAD design tools, the proposed framework will be useful for system level optimization of biosensors and for discovery of new sensing modalities without resorting to laborious fabrication and experimental procedures. The framework referred to as FAST analyzes protein-based circuits by solving inverse problems involving stochastic functional elements that admit non-linear relationships between different circuit variables. In this regard, FAST uses a factor-graph netlist as a user interface and solving the inverse problem entails passing messages/signals between the internal nodes of the netlist. Stochastic analysis techniques like density evolution are used to understand the dynamics of the circuit and estimate the reliability of the solution. As an example, we present a complete design flow using FAST for synthesis, analysis and verification of our previously reported conductometric immunoassay that uses antibody-based circuits to implement forward error-correction (FEC).

  16. Online POMDP Algorithms for Very Large Observation Spaces

    DTIC Science & Technology

    2017-06-06

    • Luo, Yuanfu, Haoyu Bai, ... and Wee Sun Lee. "Adaptive stochastic optimization: From sets to paths." In Advances in Neural Information Processing Systems, pp. 1585-1593. 2015.

  17. Evolution of the Climate Continuum from the Mid-Miocene Climatic Optimum to the Present

    NASA Astrophysics Data System (ADS)

    Aswasereelert, W.; Meyers, S. R.; Hinnov, L. A.; Kelly, D.

    2011-12-01

    The recognition of orbital rhythms in paleoclimate data has led to a rich understanding of climate evolution during the Neogene and Quaternary. In contrast, changes in stochastic variability associated with the transition from unipolar to bipolar glaciation have received less attention, although the stochastic component likely preserves key insights about climate. In this study, we seek to evaluate the dominance and character of stochastic climate energy since the Middle Miocene Climatic Optimum (~17 Ma). These analyses extend a previous study that suggested diagnostic stochastic responses associated with Northern Hemisphere ice sheet development during the Plio-Pleistocene (Meyers and Hinnov, 2010). A critical and challenging step necessary to conduct the work is the conversion of depth data to time data. We investigate climate proxy datasets using multiple time scale hypotheses, including depth-derived time scales, sedimentologic/geochemical "tuning", minimal orbital tuning, and comprehensive orbital tuning. To extract the stochastic component of climate, and also explore potential relationships between the orbital parameters and paleoclimate response, a number of approaches rooted in Thomson's (1982) multi-taper spectral method (MTM) are applied. Importantly, the MTM technique is capable of separating the spectral "continuum" - a measure of stochastic variability - from the deterministic periodic orbital signals (spectral "lines") preserved in proxy data. Time series analysis of the proxy records using different chronologic approaches allows us to evaluate the sensitivity of our conclusion about stochastic and deterministic orbital processes during the Middle Miocene to present. Moreover, comparison of individual records permits examination of the spatial dependence of the identified climate responses. Meyers, S.R., and Hinnov, L.A. 
(2010), Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise: Paleoceanography, 25, PA3207, doi:10.1029/2009PA001834. Thomson, D.J. (1982), Spectrum estimation and harmonic analysis: IEEE Proceedings, v. 70, p. 1055-1096.

  18. Stochastic demographic forecasting.

    PubMed

    Lee, R D

    1992-11-01

    "This paper describes a particular approach to stochastic population forecasting, which is implemented for the U.S.A. through 2065. Statistical time series methods are combined with demographic models to produce plausible long run forecasts of vital rates, with probability distributions. The resulting mortality forecasts imply gains in future life expectancy that are roughly twice as large as those forecast by the Office of the Social Security Actuary.... Resulting stochastic forecasts of the elderly population, elderly dependency ratios, and payroll tax rates for health, education and pensions are presented." excerpt

  19. Data Analysis and Non-local Parametrization Strategies for Organized Atmospheric Convection

    NASA Astrophysics Data System (ADS)

    Brenowitz, Noah D.

    The intrinsically multiscale nature of moist convective processes in the atmosphere complicates scientific understanding, and, as a result, current coarse-resolution climate models poorly represent convective variability in the tropics. This dissertation addresses this problem by 1) studying new cumulus convective closures in a pair of idealized models for tropical moist convection, and 2) developing innovative strategies for analyzing high-resolution numerical simulations of organized convection. The first two chapters of this dissertation revisit a historical controversy about the use of convective closures based on the large-scale wind field or moisture convergence. In the first chapter, a simple coarse resolution stochastic model for convective inhibition is designed which includes the non-local effects of wind-convergence on convective activity. This model is designed to replicate the convective dynamics of a typical coarse-resolution climate prediction model. The non-local convergence coupling is motivated by the phenomena of gregarious convection, whereby mesoscale convective systems emit gravity waves which can promote convection at distant locations. Linearized analysis and nonlinear simulations show that this convergence coupling allows for increased interaction between cumulus convection and the large-scale circulation, but does not suffer from the deleterious behavior of traditional moisture-convergence closures. In the second chapter, the non-local convergence coupling idea is extended to an idealized stochastic multicloud model. This model allows for stochastic transitions between three distinct cloud types, and non-local convergence coupling is most beneficial when applied to the transition from shallow to deep convection. This is consistent with recent observational and numerical modeling evidence, and there is a growing body of work highlighting the importance of this transition in tropical meteorology. 
    In a series of idealized Walker cell simulations, convergence coupling enhances the persistence of Kelvin wave analogs in dry regions of the domain while leaving the dynamics in moist regions largely unaltered. The final chapter of this dissertation presents a technique for analyzing the variability of a direct numerical simulation of Rayleigh-Bénard convection at large aspect ratio, which is a basic prototype of convective organization. High-resolution numerical models are an invaluable tool for studying atmospheric dynamics, but modern data analysis techniques struggle with the extreme size of the model outputs and the trivial symmetries of the underlying dynamical systems (e.g. shift-invariance). A new data analysis approach which is invariant to spatial symmetries is derived by combining a quasi-Lagrangian description of the data, time-lagged embedding, and manifold learning techniques. The quasi-Lagrangian description is obtained by a straightforward isothermal binning procedure, which compresses the data in a dynamically aware fashion. A small number of orthogonal modes returned by this algorithm are able to explain the highly intermittent dynamics of the bulk heat transfer, as quantified by the Nusselt number.
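The stochastic multicloud idea in the abstract above can be caricatured as a continuous-time Markov chain over cloud states, simulated with a Gillespie-style loop. The sketch below is purely illustrative: the states, rates, and the way `convergence` amplifies the shallow-to-deep transition are invented for the example, not taken from the dissertation's model.

```python
import numpy as np

# Sketch of a stochastic multicloud-style chain: one site switches among
# clear (0), congestus (1), deep (2) and stratiform (3) states.  States,
# rates and the convergence coupling are invented for illustration; the
# dissertation's model lives on a lattice with large-scale feedbacks.
RATES = {            # (from_state, to_state): base rate, illustrative 1/h
    (0, 1): 0.5,     # clear -> congestus
    (1, 2): 0.3,     # congestus -> deep (the shallow-to-deep transition)
    (1, 0): 0.2,     # congestus decays back to clear
    (2, 3): 0.4,     # deep -> stratiform
    (3, 0): 0.6,     # stratiform -> clear
}

def simulate(t_end, convergence=0.0, seed=0):
    """Gillespie simulation; `convergence` amplifies the shallow-to-deep rate."""
    rng = np.random.default_rng(seed)
    t, state, history = 0.0, 0, []
    while t < t_end:
        moves = []
        for (src, dst), rate in RATES.items():
            if src != state:
                continue
            if (src, dst) == (1, 2):          # convergence-coupled transition
                rate *= 1.0 + convergence
            moves.append((dst, rate))
        total = sum(rate for _, rate in moves)
        t += rng.exponential(1.0 / total)     # waiting time to next switch
        u = rng.uniform(0.0, total)           # pick the transition
        acc = 0.0
        for dst, rate in moves:
            acc += rate
            if u < acc:
                state = dst
                break
        history.append((t, state))
    return history
```

Raising `convergence` increases the fraction of time spent in the deep and stratiform states, which is the qualitative effect the non-local coupling is meant to produce.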

  20. Toward Control of Universal Scaling in Critical Dynamics

    DTIC Science & Technology

    2016-01-27

    Täuber, Uwe C.; Pleimling, Michel; Stilwell, Daniel J.

    This program aims to synergistically combine two powerful and very successful theories for the non-linear stochastic dynamics of cooperative multi-component systems.

  1. Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.

    PubMed

    Caglar, Mehmet Umut; Pal, Ranadip

    2013-01-01

    Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches, including Stochastic Master Equations and Probabilistic Boolean Networks, have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that the Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, the Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system, including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on the Zassenhaus formula to represent the exponential of a sum of matrices as a product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to the commonly used Stochastic Simulation Algorithm for equivalent accuracy.
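The Zassenhaus step can be illustrated numerically: truncating the Zassenhaus product after the first commutator factor approximates the exponential of a sum better than plain splitting. The matrices below are arbitrary small test inputs, not the paper's tensor-structured operators.

```python
import numpy as np
from scipy.linalg import expm

# The Zassenhaus formula writes the exponential of a sum as a product:
#     exp(A + B) = exp(A) exp(B) exp(-[A, B]/2) ...
# Keeping the first commutator factor should beat the plain splitting
# exp(A) exp(B) when A and B are small.  A and B are arbitrary test
# inputs, not the paper's operators.
rng = np.random.default_rng(0)
A = 0.05 * rng.standard_normal((4, 4))
B = 0.05 * rng.standard_normal((4, 4))

exact = expm(A + B)
lie = expm(A) @ expm(B)                  # error of order ||[A, B]||
commutator = A @ B - B @ A
zass = lie @ expm(-0.5 * commutator)     # one more Zassenhaus factor

err_lie = np.linalg.norm(exact - lie)
err_zass = np.linalg.norm(exact - zass)
```

The paper's contribution is to exploit sparsity and Kronecker structure so that these factored exponentials stay cheap at scale; the error ordering above is the generic behavior for small-norm matrices.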

  2. An Assessment of the Subseasonal Forecast Performance in the Extended Global Ensemble Forecast System (GEFS)

    NASA Astrophysics Data System (ADS)

    Sinsky, E.; Zhu, Y.; Li, W.; Guan, H.; Melhauser, C.

    2017-12-01

    Optimal forecast quality is crucial for the preservation of life and property. Improving monthly forecast performance over both the tropics and extra-tropics requires attention to several physical aspects, such as the representation of the underlying SST, the model physics, and the representation of model-physics uncertainty in an ensemble forecast system. This work focuses on the impact of stochastic physics, SST and the convection scheme on forecast performance at the sub-seasonal scale over the tropics and extra-tropics, with emphasis on the Madden-Julian Oscillation (MJO). A 2-year period is evaluated using the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System (GEFS). Three experiments with configurations different from the operational GEFS were performed to illustrate the impact of the stochastic physics, SST and convection scheme. These experiments are compared against a control experiment (CTL), which consists of the operational GEFS with its integration extended from 16 to 35 days. The three configurations are: 1) SPs, which uses Stochastically Perturbed Physics Tendencies (SPPT), Stochastic Perturbed Humidity (SHUM) and Stochastic Kinetic Energy Backscatter (SKEB); 2) SPs+SST_bc, which combines SPs with a bias-corrected forecast SST from the NCEP Climate Forecast System Version 2 (CFSv2); and 3) SPs+SST_bc+SA_CV, which combines SPs, the bias-corrected forecast SST and a scale-aware convection scheme. Compared with the CTL experiment, SPs shows substantial improvement: MJO skill improves by about 4 lead days over the 2-year period. The updated stochastic physics also improves the extra-tropics, by 3.1% and 4.2% during weeks 3 and 4 over the northern and southern hemispheres, respectively. Further improvement is seen when the bias-corrected CFSv2 SST is combined with SPs.
    Forecast performance improves again when the scale-aware convection scheme is added (SPs+SST_bc+SA_CV), especially over the tropics. Among the three experiments, SPs+SST_bc+SA_CV shows the best MJO forecast skill.

  3. Scaling view by the Virtual Nature Systems

    NASA Astrophysics Data System (ADS)

    Klenov, Valeriy

    2010-05-01

    The Virtual Nature System (VNS) is an indispensable tool for studying and evaluating the processes governing the Earth's surface. These processes depend on external exogenous and endogenous forcing as well as on the internal dynamics of the Actual Nature Systems (ANS), and no subset of the relevant actors can be selected without accounting for time and for the preservation of information over time. The stochastic nature of the external forcing and of the dynamics of natural systems complicates evaluation of the spatial (2D) threat of disasters, which arises from multi-layer, multi-scale, multi-driver structures of surface processes. The spatial-temporal overlap of these processes generates the relatively stable structure of river basins and river networks. The dynamics of processes in river basins remove earlier sediments and terrace levels, displace erosion/sedimentation patterns, and thereby destroy and dissipate the memory of the ANS. This complex process results in an Information Loss Law (ILL): natural systems gradually erase their own past. This view of geodynamics emerged from long-term field measurements of thousands of terrace levels, hundreds of terrace ranks, and terrace complexes in river basins (Klenov, 1986, 2004). The ILL produces gaps in natural records that grow non-linearly into the past and introduces false trends into the records; this temporal barrier prevents reconstruction of the history. The way to view the spatial-temporal dynamics of the ANS is to create portrait Virtual Nature Systems that act as doubles of the actual nature systems, driven by the same exogenous and endogenous forcing. The VNS is necessary for research into spatial-temporal geodynamics. Unfortunately, the ILL works not only toward the past but also restricts the view of the future, because future drivers are not yet known with the necessary accuracy and because natural systems are highly sensitive to external pressure.
    Although the time available for validating the VNS against undistorted records is short, it suffices for a satisfactory validation of the VNS and for a satisfactory evaluation of the stochastic patterns of disasters (floods and debris flows). The VNS makes it possible to separate exogenous (climatic) from tectonic influences. This property is invaluable for monitoring and for scenarios of land use, engineering, and other human activity under simultaneous climatic and tectonic impacts, and for delineating threat areas and tracks. The continually measured stochastic spatial-temporal interplay of external impacts (storms, precipitation, tectonic distortions, earthquakes, and others) poses no problem for the VNS, which acts on observed records and employs the Moving Digital Earth (MDE) technology to transform external drivers immediately into natural processes. This goal of the VNS and MDE has become attainable through remote sensing, powerful computers, and fast communications. The VNS/MDE provides corresponding maps of processes for any area. Instead of a scaling problem, the current task is to provide the necessary spatial resolution of the basic multi-layer matrix of variables and parameters; the practical problems lie in filling this large multi-layer matrix and in quickly computing and mapping large areas. The appropriate scale depends on the task: the spatial resolution (cell size) should not ignore important details of the Earth or of the hazardous processes under study. In VNS practice, a wide range of combined exogenous-endogenous impacts has been evaluated (from linear to circular distortions, block movements, volcanoes, earthquakes, and others) at scales from local to sub-continental. For the Rhine Basin, the influence of small, smooth tectonic distortions over a large area was computed,
    resulting in substantial changes in the pattern of erosion and sedimentation on land and in the coastal zone. For small basins, scenarios of complex tectonic distortions and earthquakes were computed, in which reduced soil and rock resistance sharply increases catastrophic debris flows and flash floods. Any scenario is possible with a verified and validated VNS. The VNS is valuable for any area, and the MDE can map the near future and the areas and tracks of threats.

  4. Large eddy simulation of orientation and rotation of ellipsoidal particles in isotropic turbulent flows

    NASA Astrophysics Data System (ADS)

    Chen, Jincai; Jin, Guodong; Zhang, Jian

    2016-03-01

    The rotational motion and orientational distribution of ellipsoidal particles in turbulent flows are of significance in environmental and engineering applications. Whereas the translational motion of an ellipsoidal particle is controlled by the turbulent motions at large scales, its rotational motion is determined by the fluid velocity gradient tensor at small scales, which poses a challenge when predicting the rotational dispersion of ellipsoidal particles with the large eddy simulation (LES) method, owing to the lack of subgrid-scale (SGS) fluid motions. We report the effects of the SGS fluid motions on the orientational and rotational statistics, such as the alignment between the particles' long axes and the vorticity and the mean rotational energy at various aspect ratios, compared against those obtained with direct numerical simulation (DNS) and filtered DNS. The performances of a stochastic differential equation (SDE) model for the SGS velocity gradient seen by the particles and of the approximate deconvolution method (ADM) for LES are investigated. It is found that the missing SGS fluid motions in LES flow fields have significant effects on the rotational statistics of ellipsoidal particles: alignment between the particles and the vorticity is weakened, and the rotational energy of the particles is reduced in LES. The SGS-SDE model leads to a large error in predicting the alignment between the particles and the vorticity and over-predicts the rotational energy of rod-like particles. The ADM significantly improves the prediction of particle rotational energy in LES.

  5. Reaction factoring and bipartite update graphs accelerate the Gillespie Algorithm for large-scale biochemical systems.

    PubMed

    Indurkhya, Sagar; Beal, Jacob

    2010-01-06

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that grows only linearly with the number of reactions, rather than the quadratic storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
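For readers unfamiliar with the baseline these optimizations target, a minimal direct-method Gillespie simulation is sketched below for a toy gene-expression network. The network and rate constants are invented for illustration; LOLCAT's factored propensities and bipartite update graph change how propensities are *updated* between steps, not this core loop.

```python
import numpy as np

# Minimal direct-method Gillespie SSA for a toy gene-expression network
# (transcription, translation, degradation).  All rates are illustrative.
#                     mRNA  protein
STOICH = np.array([[ +1,    0],   # DNA -> DNA + mRNA
                   [  0,   +1],   # mRNA -> mRNA + protein
                   [ -1,    0],   # mRNA degradation
                   [  0,   -1]])  # protein degradation
K = np.array([2.0, 5.0, 0.5, 0.1])

def ssa(x0, t_end, seed=0):
    """Simulate one trajectory; returns the state at time t_end."""
    rng = np.random.default_rng(seed)
    x, t = np.array(x0, dtype=float), 0.0
    while t < t_end:
        # mass-action propensities (transcription is zeroth-order)
        a = K * np.array([1.0, x[0], x[0], x[1]])
        a_total = a.sum()
        if a_total == 0.0:
            break                                  # no reaction can fire
        t += rng.exponential(1.0 / a_total)        # time to next reaction
        if t >= t_end:
            break
        j = rng.choice(len(K), p=a / a_total)      # which reaction fires
        x += STOICH[j]
    return x
```

Every step here recomputes all propensities; the paper's point is that factoring and a bipartite reaction/species graph let a real implementation touch only the propensities that actually change.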

  6. Reaction Factoring and Bipartite Update Graphs Accelerate the Gillespie Algorithm for Large-Scale Biochemical Systems

    PubMed Central

    Indurkhya, Sagar; Beal, Jacob

    2010-01-01

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that grows only linearly with the number of reactions, rather than the quadratic storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models. PMID:20066048

  7. Scale, mergers and efficiency: the case of Dutch housing corporations.

    PubMed

    Veenstra, Jacob; Koolma, Hendrik M; Allers, Maarten A

    2017-01-01

    The efficiency of social housing providers is a contentious issue. In the Netherlands, there is a widespread belief that housing corporations have substantial potential for efficiency improvements. A related question is whether scale influences efficiency, since recent decades have shown a trend of mergers among corporations. This paper offers a framework to assess the effects of scale and mergers on the efficiency of Dutch housing corporations by applying both data envelopment analysis and stochastic frontier analysis to panel data for 2001-2012. The results indicate that most housing corporations operate under diseconomies of scale, implying that merging would be undesirable in most cases. However, merging may have beneficial effects on pure technical efficiency, as it forces organizations to reconsider existing practices. The data envelopment analysis indeed confirms this hypothesis, but these results cannot be replicated by the stochastic frontier analysis, meaning that the evidence for this effect is not robust.

  8. Role of weakest links and system-size scaling in multiscale modeling of stochastic plasticity

    NASA Astrophysics Data System (ADS)

    Ispánovity, Péter Dusán; Tüzes, Dániel; Szabó, Péter; Zaiser, Michael; Groma, István

    2017-02-01

    Plastic deformation of crystalline and amorphous matter often involves intermittent local strain-burst events. To understand the physical background of this phenomenon, a minimal stochastic mesoscopic model was introduced in which details of the microstructure evolution are statistically represented in terms of a fluctuating local yield threshold. In the present paper we propose a method for determining the corresponding yield stress distribution for the case of crystal plasticity from lower-scale discrete dislocation dynamics simulations, which we combine with weakest-link arguments. The success of the scale linking is demonstrated by comparing stress-strain curves obtained from the resulting mesoscopic and the underlying discrete dislocation models in the microplastic regime. As shown by various scaling relations, they are statistically equivalent and behave identically in the thermodynamic limit. The proposed technique is expected to be applicable to different microstructures and also to amorphous materials.
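The weakest-link idea behind the yield-threshold distribution can be demonstrated in a few lines: if a sample's yield stress is the minimum of N independent local thresholds, the mean yield stress falls with system size. The Weibull threshold distribution and all parameters below are illustrative, not taken from the paper's dislocation simulations.

```python
import numpy as np

# Weakest-link sketch: the yield stress of a sample of N links is the
# minimum of N independent local thresholds, so larger systems yield at
# lower stress.  Weibull thresholds and parameters are illustrative.
rng = np.random.default_rng(0)

def mean_yield_stress(n_links, n_samples=2000, k=2.0, lam=1.0):
    """Monte Carlo mean of the minimum over n_links Weibull(k) thresholds."""
    thresholds = lam * rng.weibull(k, size=(n_samples, n_links))
    return thresholds.min(axis=1).mean()

small_system = mean_yield_stress(10)
large_system = mean_yield_stress(1000)
# the minimum of N Weibull(k) variables scales like N**(-1/k)
```

This N**(-1/k) system-size scaling is the kind of relation the paper uses to check that the mesoscopic and discrete models agree in the thermodynamic limit.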

  9. Stochastic dynamics and non-equilibrium thermodynamics of a bistable chemical system: the Schlögl model revisited.

    PubMed

    Vellela, Melissa; Qian, Hong

    2009-10-06

    Schlögl's model is the canonical example of a chemical reaction system that exhibits bistability. Because the biological examples of bistability and switching behaviour are increasingly numerous, this paper presents an integrated deterministic, stochastic and thermodynamic analysis of the model. After a brief review of the deterministic and stochastic modelling frameworks, the concepts of chemical and mathematical detailed balance are discussed and non-equilibrium conditions are shown to be necessary for bistability. Thermodynamic quantities such as the flux, chemical potential and entropy production rate are defined and compared across the two models. In the bistable region, the stochastic model exhibits an exchange of global stability between the two stable states under changes in the pump parameters and volume size. The stochastic entropy production rate shows a sharp transition that mirrors this exchange. A new hybrid model that includes continuous diffusion and discrete jumps is suggested to deal with the multiscale dynamics of the bistable system. Accurate approximations of the exponentially small eigenvalue associated with the time scale of this switching, and of the full time-dependent solution, are calculated using MATLAB. A breakdown of previously known asymptotic approximations at small volume scales is observed through comparison with these and Monte Carlo results. Finally, the appendix illustrates how the diffusion approximation of the chemical master equation can fail to represent correctly the mesoscopically interesting steady-state behaviour of the system.
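The deterministic side of the analysis reduces to a cubic rate law whose roots are the steady states. The sketch below uses illustrative lumped rate constants (not the paper's values) chosen to place the system in the bistable regime, where the cubic has three positive roots: two stable fixed points separated by an unstable one.

```python
import numpy as np

# Deterministic skeleton of the Schlögl model  A + 2X <=> 3X,  B <=> X:
#     dx/dt = k1*a*x^2 - k2*x^3 + k3*b - k4*x
# The lumped constants below are illustrative, chosen for bistability.
k1a, k2, k3b, k4 = 7.0, 1.0, 8.0, 14.0

# steady states solve  -k2*x^3 + k1a*x^2 - k4*x + k3b = 0
roots = np.roots([-k2, k1a, -k4, k3b])
roots = np.sort(roots[np.abs(roots.imag) < 1e-9].real)

def rate_jacobian(x):
    """d/dx of the rate; negative at a root means a stable fixed point."""
    return -3.0 * k2 * x**2 + 2.0 * k1a * x - k4

stable = [x for x in roots if rate_jacobian(x) < 0]
unstable = [x for x in roots if rate_jacobian(x) > 0]
```

The stochastic model replaces this picture with a probability distribution over copy numbers, and the exponentially small eigenvalue the paper computes governs switching between the two stable branches.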

  10. Single realization stochastic FDTD for weak scattering waves in biological random media.

    PubMed

    Tan, Tengmeng; Taflove, Allen; Backman, Vadim

    2013-02-01

    This paper introduces an iterative scheme to overcome the unresolved issues in the S-FDTD (stochastic finite-difference time-domain) method for obtaining ensemble-average field values recently reported by Smith and Furse, which attempts to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength-scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random-medium problem which, when simulated with the proposed S-FDTD, indeed produces very accurate results.

  11. Single realization stochastic FDTD for weak scattering waves in biological random media

    PubMed Central

    Tan, Tengmeng; Taflove, Allen; Backman, Vadim

    2015-01-01

    This paper introduces an iterative scheme to overcome the unresolved issues in the S-FDTD (stochastic finite-difference time-domain) method for obtaining ensemble-average field values recently reported by Smith and Furse, which attempts to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength-scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random-medium problem which, when simulated with the proposed S-FDTD, indeed produces very accurate results. PMID:27158153

  12. Universal Stochastic Multiscale Image Fusion: An Example Application for Shale Rock.

    PubMed

    Gerke, Kirill M; Karsanina, Marina V; Mallants, Dirk

    2015-11-02

    Spatial data captured with sensors of different resolution would provide a maximum degree of information if the data were to be merged into a single image representing all scales. We develop a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images of shale rock representing macro-, micro- and nanoscale spatial information on mineral, organic matter and porosity distribution. Merging multiscale images of shale rock is pivotal for more reliable quantification of the petrophysical properties needed for production optimization and minimization of environmental impacts. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Practical applications are not limited to petroleum engineering or, more broadly, geosciences, but will also find their way into materials science, climatology, and remote sensing.
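The rescaled correlation functions at the heart of such reconstructions are built from two-point statistics. A minimal sketch of the directional two-point probability S2 for a binary image follows; the synthetic uncorrelated test image stands in for a segmented macro/micro/nano image.

```python
import numpy as np

# Sketch of the directional two-point probability S2(r) that underlies
# correlation-function-based stochastic reconstruction: the probability
# that two points a lag r apart (here along x, with periodic wrap) both
# lie in the phase of interest.  The test image is synthetic.
def s2_x(image, max_lag):
    """S2 along the x axis for a binary (0/1) image."""
    img = np.asarray(image, dtype=float)
    return np.array([(img * np.roll(img, -r, axis=1)).mean()
                     for r in range(max_lag)])

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) < 0.3).astype(int)  # ~30% phase, uncorrelated

s2 = s2_x(img, 10)
# S2(0) equals the phase fraction phi; for an uncorrelated field,
# S2(r > 0) is close to phi**2
```

Rescaling such functions to a common lag axis is what lets images at different resolutions constrain a single reconstructed dataset.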

  13. Local-scale Partitioning of Functional and Phylogenetic Beta Diversity in a Tropical Tree Assemblage.

    PubMed

    Yang, Jie; Swenson, Nathan G; Zhang, Guocheng; Ci, Xiuqin; Cao, Min; Sha, Liqing; Li, Jie; Ferry Slik, J W; Lin, Luxiang

    2015-08-03

    The relative degree to which stochastic and deterministic processes underpin community assembly is a central problem in ecology. Quantifying local-scale phylogenetic and functional beta diversity may shed new light on this problem. We used species distribution, soil, trait and phylogenetic data to quantify whether environmental distance, geographic distance or their combination are the strongest predictors of phylogenetic and functional beta diversity on local scales in a 20-ha tropical seasonal rainforest dynamics plot in southwest China. The patterns of phylogenetic and functional beta diversity were generally consistent. The phylogenetic and functional dissimilarity between subplots (10 × 10 m, 20 × 20 m, 50 × 50 m and 100 × 100 m) was often higher than that expected by chance. The turnover of lineages and species function within habitats was generally slower than that across habitats. Partitioning the variation in phylogenetic and functional beta diversity showed that environmental distance was generally a better predictor of beta diversity than geographic distance thereby lending relatively more support for deterministic environmental filtering over stochastic processes. Overall, our results highlight that deterministic processes play a stronger role than stochastic processes in structuring community composition in this diverse assemblage of tropical trees.

  14. Universal Stochastic Multiscale Image Fusion: An Example Application for Shale Rock

    PubMed Central

    Gerke, Kirill M.; Karsanina, Marina V.; Mallants, Dirk

    2015-01-01

    Spatial data captured with sensors of different resolution would provide a maximum degree of information if the data were to be merged into a single image representing all scales. We develop a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images of shale rock representing macro-, micro- and nanoscale spatial information on mineral, organic matter and porosity distribution. Merging multiscale images of shale rock is pivotal for more reliable quantification of the petrophysical properties needed for production optimization and minimization of environmental impacts. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Practical applications are not limited to petroleum engineering or, more broadly, geosciences, but will also find their way into materials science, climatology, and remote sensing. PMID:26522938

  15. Many roads to synchrony: natural time scales and their algorithms.

    PubMed

    James, Ryan G; Mahoney, John R; Ellison, Christopher J; Crutchfield, James P

    2014-04-01

    We consider two important time scales, the Markov and cryptic orders, that monitor how an observer synchronizes to a finitary stochastic process. We show how to compute these orders exactly and that they are most efficiently calculated from the ε-machine, a process's minimal unifilar model. Surprisingly, though the Markov order is a basic concept from stochastic process theory, it is not a probabilistic property of a process. Rather, it is a topological property and, moreover, it is not computable from any finite-state model other than the ε-machine. Via an exhaustive survey, we close by demonstrating that infinite Markov and infinite cryptic orders are a dominant feature in the space of finite-memory processes. We draw out the roles played in statistical mechanical spin systems by these two complementary length scales.

  16. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach, and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pits. The main factor controlling the relative gain of the stochastic optimization over the classical approach and the simulated pit is shown to be the information level, as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both treatment and mining costs. At the early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
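The core reason the stochastic pit differs from the classical one is that block profit is a nonlinear function of grade, so averaging grades before computing profit is not the same as averaging profits over conditional simulations. A toy numerical illustration follows; the cutoff rule, prices, and the grade distribution are all invented for the example.

```python
import numpy as np

# Toy illustration of why a pit optimized on expected *profits* differs
# from one optimized on expected *grades*: block profit is a nonlinear
# (cutoff-style) function of grade, so E[profit(grade)] != profit(E[grade]).
rng = np.random.default_rng(0)
price, mining_cost, cutoff = 100.0, 20.0, 0.5

def profit(grade):
    # process the block only if its grade exceeds the cutoff, else waste it
    return np.where(grade > cutoff, price * grade - mining_cost, -mining_cost)

# stand-in for conditional simulations of a single block's grade
grades = rng.lognormal(mean=-0.5, sigma=0.8, size=10_000)

profit_of_mean = profit(grades.mean())   # classical: profit of expected grade
mean_of_profit = profit(grades).mean()   # stochastic: expected profit
# the two differ; the sign and size of the gap depend on the grade
# distribution and the cutoff
```

Optimizing on `mean_of_profit` is what makes the stochastic pit conditionally unbiased for realized profit, since each simulated grade is run through the same nonlinear profit rule before averaging.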

  17. A stochastic fractional dynamics model of space-time variability of rain

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun K.; Travis, James E.

    2013-09-01

    Rain varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and on the Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to fit the second-moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well at these scales without any further adjustment.
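The scale dependence the model is built to capture can be seen in miniature with any correlated series: the variance of block averages decays with the averaging length at a rate set by the correlation structure. The AR(1) stand-in for a point rain-rate series below is illustrative, not the paper's fractional-order dynamics.

```python
import numpy as np

# Scale dependence in miniature: block-averaging a correlated series
# reduces its variance at a rate set by the correlation structure.
rng = np.random.default_rng(0)
phi, n = 0.9, 100_000

x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):              # AR(1), correlation time ~ 1/(1 - phi)
    x[i] = phi * x[i - 1] + rng.standard_normal()

def block_var(series, L):
    """Variance of the series averaged over non-overlapping blocks of length L."""
    m = len(series) // L
    return series[:m * L].reshape(m, L).mean(axis=1).var()

variances = [block_var(x, L) for L in (1, 10, 100)]
# the variance of the averages falls as the averaging length L grows
```

A fractional-order model generalizes this idea: the exponents of the space-time derivatives control exactly how fast the second-moment statistics decay with the averaging scale.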

  18. Characterizing riverbed sediment using high-frequency acoustics 1: spectral properties of scattering

    USGS Publications Warehouse

    Buscombe, Daniel D.; Grams, Paul E.; Kaplinski, Matt A.

    2014-01-01

    Bed-sediment classification using high-frequency hydro-acoustic instruments is challenging when sediments are spatially heterogeneous, which is often the case in rivers. The use of acoustic backscatter to classify sediments is an attractive alternative to analysis of topography because it is potentially sensitive to grain-scale roughness. Here, a new method is presented which uses high-frequency acoustic backscatter from multibeam sonar to classify heterogeneous riverbed sediments by type (sand, gravel, rock) continuously in space and at small spatial resolution. In this, the first of a pair of papers that examine the scattering signatures from a heterogeneous riverbed, methods are presented to construct spatially explicit maps of spectral properties from geo-referenced point clouds of geometrically and radiometrically corrected echoes. Backscatter power spectra are computed to produce scale and amplitude metrics that collectively characterize the length scales of stochastic measures of riverbed scattering, termed ‘stochastic geometries’. Backscatter aggregated over small spatial scales has spectra that obey a power law. This apparently self-affine behavior could instead arise from morphological- and grain-scale roughness over multiple overlapping scales, or from riverbed scattering being transitional between the Rayleigh and geometric regimes. Relationships exist between the stochastic geometries of backscatter and areas of rough and smooth sediments. However, no one parameter can uniquely characterize a particular substrate, nor definitively separate the relative contributions of roughness and acoustic impedance (hardness). Combinations of spectral quantities do, however, have the potential to delineate riverbed sediment patchiness, in a data-driven approach comparing backscatter with bed-sediment observations (which is the subject of the second paper of this pair).

  19. Predicting cell viability within tissue scaffolds under equiaxial strain: multi-scale finite element model of collagen-cardiomyocytes constructs.

    PubMed

    Elsaadany, Mostafa; Yan, Karen Chang; Yildirim-Ayan, Eda

    2017-06-01

    Successful tissue engineering and regenerative therapy necessitate extensive knowledge of the mechanical milieu in engineered tissues and their resident cells. In this study, we have merged two powerful analysis tools, namely finite element analysis and stochastic analysis, to understand the mechanical strain within the tissue scaffold and residing cells and to predict cell viability upon applying mechanical strains. A continuum-based multi-length scale finite element model (FEM) was created to simulate physiologically relevant equiaxial strain exposure on a cell-embedded tissue scaffold and to calculate the strain transferred to the tissue scaffold (macro-scale) and residing cells (micro-scale) under various equiaxial strains. The data from the FEM were used to predict cell viability under various equiaxial strain magnitudes using a stochastic damage criterion analysis. The model was validated by mechanically straining cardiomyocyte-encapsulated collagen constructs using a custom-built mechanical loading platform (EQUicycler). The FEM quantified the strain gradients over the radial and longitudinal directions of the scaffolds and the cells residing in different areas of interest. Using the experimental viability data, the stochastic damage criterion, and the average cellular strains obtained from the multi-length scale models, cellular viability was predicted and successfully validated. This methodology provides a tool for characterizing mechanical stimulation in bioreactors used in tissue engineering, quantifying mechanical strain and predicting variations in cellular viability due to applied strain.
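    A stochastic damage criterion of the kind described above can be sketched by treating the strain at which a cell fails as a random variable: predicted viability at a given applied strain is then the probability that a cell's failure strain exceeds it. The sketch below assumes a normal failure-strain distribution with hypothetical parameters; the paper's actual criterion and fitted values are not reproduced here.

    ```python
    import math

    def predicted_viability(applied_strain, mean_fail=0.25, sd_fail=0.08):
        """Viability = P(failure strain > applied strain), assuming the
        failure strain is Normal(mean_fail, sd_fail).

        mean_fail and sd_fail are illustrative, not values from the study.
        """
        z = (applied_strain - mean_fail) / (sd_fail * math.sqrt(2.0))
        # 1 - Phi(z), written with the error function from the stdlib
        return 1.0 - 0.5 * (1.0 + math.erf(z))

    # Viability falls monotonically as the applied equiaxial strain grows
    for eps in (0.05, 0.15, 0.25):
        print(f"strain {eps:.2f}: predicted viability {predicted_viability(eps):.2f}")
    ```

    In the multi-scale workflow, `applied_strain` would be the average cellular strain returned by the micro-scale FEM rather than the macroscopic strain imposed on the scaffold.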

  20. Diffusion approximations to the chemical master equation only have a consistent stochastic thermodynamics at chemical equilibrium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horowitz, Jordan M., E-mail: jordan.horowitz@umb.edu

    The stochastic thermodynamics of a dilute, well-stirred mixture of chemically reacting species is built on the stochastic trajectories of reaction events obtained from the chemical master equation. However, when the molecular populations are large, the discrete chemical master equation can be approximated with a continuous diffusion process, like the chemical Langevin equation or low-noise approximation. In this paper, we investigate to what extent these diffusion approximations inherit the stochastic thermodynamics of the chemical master equation. We find that a stochastic-thermodynamic description is only valid at a detailed-balanced, equilibrium steady state. Away from equilibrium, where there is no consistent stochastic thermodynamics, we show that one can still use the diffusive solutions to approximate the underlying thermodynamics of the chemical master equation.
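    The chemical Langevin equation mentioned above replaces discrete reaction events with a drift plus reaction-channel noise. A minimal sketch for birth–death kinetics (∅ → X at rate k, X → ∅ at rate g·X) is given below, integrated with Euler–Maruyama; the parameter values and guard against negative copy numbers are illustrative assumptions, not part of the paper.

    ```python
    import numpy as np

    def cle_birth_death(k=50.0, g=1.0, x0=50.0, dt=1e-3, steps=200_000, seed=0):
        """Euler–Maruyama integration of the chemical Langevin equation

            dX = (k - g X) dt + sqrt(k) dW1 - sqrt(g X) dW2,

        one independent Wiener increment per reaction channel.
        """
        rng = np.random.default_rng(seed)
        x = x0
        xs = np.empty(steps)
        sqdt = np.sqrt(dt)
        for i in range(steps):
            x = max(x, 0.0)  # propensities need non-negative copy numbers
            drift = k - g * x
            noise = np.sqrt(k) * rng.normal() - np.sqrt(g * x) * rng.normal()
            x += drift * dt + noise * sqdt
            xs[i] = x
        return xs

    traj = cle_birth_death()
    # The time-averaged copy number should sit near the CME mean k/g = 50
    ```

    Because this system satisfies detailed balance at its steady state, it is exactly the regime in which the paper finds the diffusion approximation to carry a consistent stochastic thermodynamics.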
