Science.gov

Sample records for fuel stochastic monte

  1. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
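
    A minimal, self-contained sketch of the SAMC idea on a toy one-dimensional energy landscape (not the authors' multidimensional implementation): the running log-density-of-states estimate biases the walk toward a flat histogram, and a decaying gain sequence freezes the estimate. The toy energy function, gain constant t0 and run length are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 21                               # states x = 0..L-1
E = lambda x: abs(x - L // 2)        # toy energy; true g(0) = 1, g(E>0) = 2
n_bins = L // 2 + 1                  # energy bins 0..10

theta = np.zeros(n_bins)             # running estimate of log g(E)
target = np.full(n_bins, 1.0 / n_bins)   # flat-histogram sampling target
t0 = 1000.0
x = 0

for t in range(1, 200001):
    gamma = t0 / max(t0, t)          # SAMC gain sequence
    y = x + rng.choice((-1, 1))      # symmetric random-walk proposal
    if 0 <= y < L and np.log(rng.random()) < theta[E(x)] - theta[E(y)]:
        x = y                        # accept with prob min(1, g_est(E_x)/g_est(E_y))
    indicator = np.zeros(n_bins)
    indicator[E(x)] = 1.0
    theta += gamma * (indicator - target)   # stochastic-approximation update

print(np.round(np.exp(theta - theta[0]), 2))   # ~[1, 2, 2, ..., 2]
```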

  2. Optimization of Monte Carlo transport simulations in stochastic media

    SciTech Connect

    Liang, C.; Ji, W.

    2012-07-01

    This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, such as Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
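
    As an illustration of the packing step, here is a minimal random sequential addition (RSA) sketch for mono-sized spheres in a periodic box, using a uniform grid so each overlap test only visits the 27 neighboring cells; the paper's modified RSA and the poly-sized case are not reproduced, and all counts and dimensions are illustrative.

```python
import itertools
import numpy as np

def rsa_pack(n, radius, box=1.0, max_tries=200_000, seed=0):
    """Random sequential addition of equal spheres in a periodic box.
    A uniform grid (cell size >= one diameter) limits each overlap
    check to the neighboring cells instead of all placed spheres."""
    rng = np.random.default_rng(seed)
    ncell = max(1, int(box / (2 * radius)))
    cell = box / ncell
    grid = {}                                  # (i, j, k) -> centers in that cell
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        p = rng.random(3) * box
        ijk = tuple(int(v // cell) for v in p)
        ok = True
        for off in itertools.product((-1, 0, 1), repeat=3):
            key = tuple((ijk[a] + off[a]) % ncell for a in range(3))
            for q in grid.get(key, ()):
                d = np.abs(p - q)
                d = np.minimum(d, box - d)     # minimum-image (periodic) distance
                if d @ d < (2 * radius) ** 2:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            centers.append(p)
            grid.setdefault(ijk, []).append(p)
    return np.array(centers)

pts = rsa_pack(500, 0.04)
print(len(pts), "spheres placed")
```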

  3. Quantum Monte Carlo using a Stochastic Poisson Solver

    SciTech Connect

    Das, D; Martin, R M; Kalos, M H

    2005-05-06

    Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall quantum Monte Carlo scheme. We have developed a modified "Walk On Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations, typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
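
    The "Walk On Spheres" idea is compact enough to sketch for the Laplace equation on the unit disk with Dirichlet data; the Green's-function treatment of source terms and the coupling to QMC walker configurations described above are beyond this toy.

```python
import numpy as np

def walk_on_spheres(x0, g, eps=1e-4, n_walks=20_000, seed=1):
    """Estimate u(x0) for Laplace's equation on the unit disk with Dirichlet
    data g: jump to a uniform point on the largest circle centered at the
    walker that fits inside the domain, until within eps of the boundary."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while True:
            r = 1.0 - np.hypot(x[0], x[1])            # distance to the unit circle
            if r < eps:
                total += g(x / np.hypot(x[0], x[1]))  # project out, score boundary value
                break
            phi = rng.uniform(0.0, 2.0 * np.pi)
            x += r * np.array([np.cos(phi), np.sin(phi)])
    return total / n_walks

g = lambda p: p[0]                 # boundary data cos(theta)
# The harmonic extension of cos(theta) is u(x, y) = x, so expect ~0.3 here:
print(walk_on_spheres((0.3, 0.4), g))
```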

  4. Longitudinal functional principal component modeling via Stochastic Approximation Monte Carlo

    PubMed Central

    Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.

    2010-01-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented. PMID:20689648

  5. Monte Carlo Hybrid Applied to Binary Stochastic Mixtures

    2008-08-11

    The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.
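
    A minimal sketch of the bounding step such a code pair performs, assuming weight-window centers set inversely proportional to an approximate deterministic flux: particles above the window are split, particles below it are rouletted. The flux values and window widths are invented for illustration, not taken from these codes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Approximate deterministic scalar flux per spatial region (illustrative numbers)
flux = np.array([1.0, 3e-1, 1e-1, 3e-2, 1e-2])
ww_center = flux[0] / flux              # weight-window centers ~ 1/flux
ww_lo, ww_hi = 0.5 * ww_center, 2.0 * ww_center

def apply_weight_window(region, weight):
    """Split heavy particles, roulette light ones; returns surviving weights."""
    if weight > ww_hi[region]:                       # too heavy: split
        n = int(np.ceil(weight / ww_center[region]))
        return [weight / n] * n
    if weight < ww_lo[region]:                       # too light: Russian roulette
        p_survive = weight / ww_center[region]
        return [ww_center[region]] if rng.random() < p_survive else []
    return [weight]                                  # inside the window: keep

print(apply_weight_window(3, 120.0))   # heavy particle in a deep region -> split
print(apply_weight_window(0, 0.01))    # light particle near the source -> roulette
```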

  6. Semi-stochastic full configuration interaction quantum Monte Carlo: Developments and application.

    PubMed

    Blunt, N S; Smart, Simon D; Kersten, J A F; Spencer, J S; Booth, George H; Alavi, Ali

    2015-05-14

    We expand upon the recent semi-stochastic adaptation to full configuration interaction quantum Monte Carlo (FCIQMC). We present an alternate method for generating the deterministic space without a priori knowledge of the wave function and present stochastic efficiencies for a variety of both molecular and lattice systems. The algorithmic details of an efficient semi-stochastic implementation are presented, with particular consideration given to the effect that the adaptation has on parallel performance in FCIQMC. We further demonstrate the benefit for calculation of reduced density matrices in FCIQMC through replica sampling, where the semi-stochastic adaptation seems to have even larger efficiency gains. We then combine these ideas to produce explicitly correlated corrected FCIQMC energies for the beryllium dimer, for which stochastic errors on the order of wavenumber accuracy are achievable. PMID:25978883

  7. Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2013-04-01

    The stochastic volatility (SV) model is one of the volatility models that infer the latent volatility of asset returns. Bayesian inference of the SV model is performed by the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. We then calculate the accuracy of the volatility measurement using the realized volatility as a proxy for the true volatility, and compare the SV model with the GARCH model, another widely used volatility model. Using the accuracy calculated with the realized volatility, we find that empirically the SV model performs better than the GARCH model.
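
    A minimal sketch of a single HMC transition (momentum refresh, leapfrog integration, Metropolis accept/reject) on a one-dimensional Gaussian target standing in for the SV posterior; the step size and trajectory length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy target: standard normal, U(x) = -log pi(x) = x^2/2 (stand-in for the SV posterior)
grad_U = lambda x: x

def hmc_step(x, eps=0.1, n_leap=20):
    p = rng.standard_normal()                 # refresh momentum
    x_new, p_new = x, p
    p_new -= 0.5 * eps * grad_U(x_new)        # leapfrog integration
    for _ in range(n_leap - 1):
        x_new += eps * p_new
        p_new -= eps * grad_U(x_new)
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(x_new)
    # Metropolis accept/reject on the change in the Hamiltonian
    dH = (0.5 * x_new**2 + 0.5 * p_new**2) - (0.5 * x**2 + 0.5 * p**2)
    return x_new if np.log(rng.random()) < -dH else x

x, samples = 0.0, []
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x)
print(np.mean(samples), np.var(samples))   # ~0 and ~1
```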

  8. A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

    SciTech Connect

    Keady, K P; Brantley, P

    2010-03-04

    Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model

  9. Fuel temperature reactivity coefficient calculation by Monte Carlo perturbation techniques

    SciTech Connect

    Shim, H. J.; Kim, C. H.

    2013-07-01

    We present an efficient method to estimate the fuel temperature reactivity coefficient (FTC) by the Monte Carlo adjoint-weighted correlated sampling method. In this method, a fuel temperature change is regarded as variations of the microscopic cross sections and the temperature in the free gas model which is adopted to correct the asymptotic double differential scattering kernel. The effectiveness of the new method is examined through the continuous energy MC neutronics calculations for PWR pin cell problems. The isotope-wise and reaction-type-wise contributions to the FTCs are investigated for two free gas models - the constant scattering cross section model and the exact model. It is shown that the proposed method can efficiently predict the reactivity change due to the fuel temperature variation. (authors)

  10. Stochastic Monte-Carlo Markov Chain Inversions on Models Regionalized Using Receiver Functions

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Maceira, M.; Kato, Y.; Bodin, T.; Calo, M.; Romanowicz, B. A.; Chai, C.; Ammon, C. J.

    2014-12-01

    There is currently a strong interest in stochastic approaches to seismic modeling - versus deterministic methods such as gradient methods - due to their ability to better deal with highly non-linear problems. Another advantage of stochastic methods is that they allow the estimation of the a posteriori probability distribution of the derived parameters, enabling the Bayesian inversion envisioned by Tarantola and allowing the quantification of the solution error. The price to pay for stochastic methods is that they require testing thousands of variations of each unknown parameter and their associated weights to ensure reliable probabilistic inferences. Even with the best High-Performance Computing resources available, 3D stochastic full waveform modeling at the regional scale still remains out of reach. We are exploring regionalization as one way to reduce the dimension of the parameter space, allowing the identification of areas in the models that can be treated as one block in a subsequent stochastic inversion. Regionalization is classically performed through the identification of tectonic or structural elements. Lekic & Romanowicz (2011) proposed a new approach based on a cluster analysis of tomographic velocity models instead. Here we present the results of a clustering analysis on the P-wave receiver functions used in the subsequent inversion. Different clustering algorithms and clustering qualities are tested for different datasets of North America and China. Preliminary results with the k-means clustering algorithm show that an interpolated receiver function wavefield (Chai et al., GRL, in review) improves the agreement with the geological and tectonic regions of North America compared to the traditional approach of stacked receiver functions. After regionalization, a 1D profile for each region is stochastically inferred using a parallelized code based on Monte-Carlo Markov Chains (MCMC), and modeling surface-wave dispersion and receiver

  11. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters have been selected by implementing the F-test evaluation and design of experiments, and then an incomplete fourth-order polynomial response surface model (RSM) has been developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of the particle-swarm and Nelder-Mead simplex optimization methods, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
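
    The forward half of the procedure, replacing the expensive model by a cheap polynomial response surface and then sampling it by Monte Carlo, can be sketched in a few lines. The one-parameter oscillator "simulation" below is a stand-in assumption; the paper's inverse, covariance-matching optimization loop is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(14)

# Expensive "simulation" (stand-in): natural frequency of an oscillator vs stiffness
expensive = lambda k: np.sqrt(k) / (2 * np.pi)

# Build a cheap polynomial response surface from a handful of designed runs
k_design = np.linspace(0.5, 2.0, 9)
coef = np.polyfit(k_design, expensive(k_design), 4)   # 4th-order RSM
rsm = np.poly1d(coef)

# Monte Carlo through the surrogate: rapid random sampling now costs almost nothing
k_samples = rng.normal(1.2, 0.1, 100_000)
freq = rsm(k_samples)
print(freq.mean(), freq.std())   # output statistics to correlate with test data
```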

  12. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    PubMed

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.

  13. Stochastic sensitivity analysis of the biosphere model for Canadian nuclear fuel waste management

    SciTech Connect

    Reid, J.A.K.; Corbett, B.J. (Whiteshell Labs.)

    1993-01-01

    The biosphere model, BIOTRAC, was constructed to assess Canada's concept for nuclear fuel waste disposal in a vault deep in crystalline rock at some as yet undetermined location in the Canadian Shield. The model is therefore very general and based on the shield as a whole. BIOTRAC is made up of four linked submodels for surface water, soil, atmosphere, and food chain and dose. The model simulates physical conditions and radionuclide flows from the discharge of a hypothetical nuclear fuel waste disposal vault through groundwater, a well, a lake, air, soil, and plants to a critical group of individuals, i.e., those who are most exposed and therefore receive the highest dose. This critical group is totally self-sufficient and is represented by the International Commission on Radiological Protection reference man for dose prediction. BIOTRAC is a dynamic model that assumes steady-state physical conditions for each simulation, and deals with variation and uncertainty through Monte Carlo simulation techniques. This paper describes SENSYV, a technique for analyzing pathway and parameter sensitivities for the BIOTRAC code run in stochastic mode. Results are presented for 129I from the disposal of used fuel, and they confirm the importance of doses via the soil/plant/man and the air/plant/man ingestion pathways. The results also indicate that the lake/well water use switch, the aquatic iodine mass loading parameter, the iodine soil evasion rate, and the iodine plant/soil concentration ratio are important parameters.

  14. Neutronic analysis of the stochastic distribution of fuel particles in Very High Temperature Gas-Cooled Reactors

    NASA Astrophysics Data System (ADS)

    Ji, Wei

    The Very High Temperature Gas-Cooled Reactor (VHTR) is a promising candidate for Generation IV designs due to its inherent safety, efficiency, and its proliferation-resistant and waste minimizing fuel cycle. A number of these advantages stem from its unique fuel design, consisting of a stochastic mixture of tiny (0.78 mm diameter) microspheres with multiple coatings. However, the microsphere fuel regions represent point absorbers for resonance energy neutrons, resulting in the "double heterogeneity" for particle fuel. Special care must be taken to analyze this fuel in order to predict the spatial and spectral dependence of the neutron population in a steady-state reactor configuration. The challenges are considerable and resist brute force computation: there are over 10^10 microspheres in a typical reactor configuration, with no hope of identifying individual microspheres in this stochastic mixture. Moreover, when individual microspheres "deplete" (e.g., burn the fissile isotope U-235 or transmute the fertile isotope U-238 (eventually) to Pu-239), the stochastic time-dependent nature of the depletion compounds the difficulty posed by the stochastic spatial mixture of the fuel, resulting in a prohibitive computational challenge. The goal of this research is to develop a methodology to analyze particle fuel randomly distributed in the reactor, accounting for the kernel absorptions as well as the stochastic depletion of the fuel mixture. This Ph.D. dissertation will address these challenges by developing a methodology for analyzing particle fuel that will be accurate enough to properly model stochastic particle fuel in both static and time-dependent configurations and yet be efficient enough to be used for routine analyses. This effort includes creation of a new physical model, development of a simulation algorithm, and application to real reactor configurations.

  15. Chaotic versus nonchaotic stochastic dynamics in Monte Carlo simulations: a route for accurate energy differences in N-body systems.

    PubMed

    Assaraf, Roland; Caffarel, Michel; Kollias, A C

    2011-04-15

    We present a method to efficiently evaluate small energy differences of two close N-body systems by employing stochastic processes having a stability versus chaos property. By using the same random noise, energy differences are computed from close trajectories without reweighting procedures. The approach is presented for quantum systems but can be applied to classical N-body systems as well. It is exemplified with diffusion Monte Carlo simulations for long chains of hydrogen atoms and molecules for which it is shown that the long-standing problem of computing energy derivatives is solved. PMID:21568537
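
    The core trick, driving two close systems with the same random noise so their difference is estimated from correlated samples, can be demonstrated on a scalar expectation; the observable and parameters are illustrative, not the paper's diffusion Monte Carlo setting.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two "close" systems: expectations of f under slightly different parameters
f = lambda z, a: np.cos(a * z)          # toy observable; a plays the role of geometry
a1, a2 = 1.00, 1.01
n = 100_000

# Independent sampling: the difference of two separately noisy estimates
d_indep = f(rng.standard_normal(n), a1).mean() - f(rng.standard_normal(n), a2).mean()

# Common random numbers: the same noise drives both "trajectories"
z = rng.standard_normal(n)
d_crn = (f(z, a1) - f(z, a2)).mean()

exact = np.exp(-a1**2 / 2) - np.exp(-a2**2 / 2)   # E[cos(aZ)] = exp(-a^2/2)
print(exact, d_indep, d_crn)    # the CRN estimate is far closer for the same n
```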

  16. Comparing three stochastic search algorithms for computational protein design: Monte Carlo, replica exchange Monte Carlo, and a multistart, steepest-descent heuristic.

    PubMed

    Mignon, David; Simonson, Thomas

    2016-07-15

    Computational protein design depends on an energy function and an algorithm to search the sequence/conformation space. We compare three stochastic search algorithms: a heuristic, Monte Carlo (MC), and a Replica Exchange Monte Carlo method (REMC). The heuristic performs a steepest-descent minimization starting from thousands of random starting points. The methods are applied to nine test proteins from three structural families, with a fixed backbone structure, a molecular mechanics energy function, and with 1, 5, 10, 20, 30, or all amino acids allowed to mutate. Results are compared to an exact, "Cost Function Network" method that identifies the global minimum energy conformation (GMEC) in favorable cases. The designed sequences accurately reproduce experimental sequences in the hydrophobic core. The heuristic and REMC agree closely and reproduce the GMEC when it is known, with a few exceptions. Plain MC performs well for most cases, occasionally departing from the GMEC by 3-4 kcal/mol. With REMC, the diversity of the sequences sampled agrees with exact enumeration where the latter is possible: up to 2 kcal/mol above the GMEC. Beyond, room temperature replicas sample sequences up to 10 kcal/mol above the GMEC, providing thermal averages and a solution to the inverse protein folding problem. © 2016 Wiley Periodicals, Inc. PMID:27197555

  17. Monte Carlo method based radiative transfer simulation of stochastic open forest generated by circle packing application

    NASA Astrophysics Data System (ADS)

    Jin, Shengye; Tamura, Masayuki

    2013-10-01

    Monte Carlo Ray Tracing (MCRT) is a versatile method for simulating the radiative transfer regime of the Solar - Atmosphere - Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Due to its robustness to the complexity of the 3-D scene, the MCRT method is also employed for simulating the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling of vegetation, one basic step is setting up the canopy scene. 3-D scanning applications have been used to represent canopy structure as accurately as possible, but they are time consuming. Botanical growth functions can model the growth of a single tree, but cannot express the interaction among trees. The L-System is also a functionally controlled tree growth simulation model, but it requires large amounts of computing memory. Additionally, it only models the current tree patterns rather than tree growth while the radiative transfer regime is simulated. Therefore, it is much more constructive to use regular solids such as ellipsoids, cones, and cylinders to represent individual canopies. Considering the allelopathy phenomenon in some open forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the tree amount (N) of the 3-D scene are declared first, similar to a random open forest image. Accordingly, we randomly generate each canopy radius (rc). We then set each circle's central coordinate on the XY-plane while keeping the circles separate from each other via the circle packing algorithm. To model the individual trees, we employ Ishikawa's tree growth regression model to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is

  18. Nano-structural analysis of effective transport paths in fuel-cell catalyst layers by using stochastic material network methods

    NASA Astrophysics Data System (ADS)

    Shin, Seungho; Kim, Ah-Reum; Um, Sukkee

    2016-02-01

    A two-dimensional material network model has been developed to visualize the nano-structures of fuel-cell catalysts and to search for effective transport paths for the optimal performance of fuel cells in randomly-disordered composite catalysts. Stochastic random modeling based on the Monte Carlo method is developed using random number generation processes over a catalyst layer domain at a 95% confidence level. After the post-determination process of the effective connectivity, particularly for mass transport, the effective catalyst utilization factors are introduced to determine the extent of catalyst utilization in the fuel cells. The results show that the superficial pore volume fractions of 600 trials approximate a normal distribution curve with a mean of 0.5. In contrast, the estimated volume fraction of effectively inter-connected void clusters ranges from 0.097 to 0.420, which is much smaller than the superficial porosity of 0.5 before the percolation process. Furthermore, the effective catalyst utilization factor is determined to be linearly proportional to the effective porosity. More importantly, this study reveals that the average catalyst utilization is less affected by the variations of the catalyst's particle size and the absolute catalyst loading at a fixed volume fraction of void spaces.
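
    A minimal sketch of the connectivity post-processing: generate a random binary lattice at a superficial porosity of 0.5, flood-fill the void phase from the inlet and outlet faces, and count only through-connected cells as effective pore volume. The lattice size and 8-neighbor connectivity rule are assumptions of this toy, not the paper's network model.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(5)
N = 200
void = rng.random((N, N)) < 0.5            # superficial porosity ~0.5

NEIGHBORS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def reachable(void, seeds):
    """Flood-fill the void phase (8-connectivity) from the given seed cells."""
    seen = np.zeros_like(void, dtype=bool)
    q = deque()
    for r, c in seeds:
        if void[r, c] and not seen[r, c]:
            seen[r, c] = True
            q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in NEIGHBORS:
            rr, cc = r + dr, c + dc
            if 0 <= rr < N and 0 <= cc < N and void[rr, cc] and not seen[rr, cc]:
                seen[rr, cc] = True
                q.append((rr, cc))
    return seen

top = reachable(void, [(0, c) for c in range(N)])
bottom = reachable(void, [(N - 1, c) for c in range(N)])
effective = top & bottom                   # void cells on through-going paths only
print("superficial porosity:", round(void.mean(), 3))
print("effective porosity:  ", round(effective.mean(), 3))
```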

  19. Comparison of Ensemble Kalman Filter groundwater-data assimilation methods based on stochastic moment equations and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2014-04-01

    Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
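
    For contrast with the moment-equation variant, the classical MC-based EnKF analysis step is a few lines of linear algebra; the sketch below applies a perturbed-observation update to a toy two-component state and makes no attempt at the groundwater flow physics.

```python
import numpy as np

rng = np.random.default_rng(6)

def enkf_update(X, H, y, R):
    """Stochastic (perturbed-observation) EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; H: (n_obs, n_state) observation
    operator; y: (n_obs,) observations; R: (n_obs, n_obs) obs-error covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    P_HT = A @ (H @ A).T / (n_ens - 1)           # P H^T estimated from the ensemble
    S = H @ P_HT + R                             # innovation covariance
    K = P_HT @ np.linalg.inv(S)                  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                   # update every member

# Tiny demo: estimate a correlated 2-vector from a noisy observation of entry 0
X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], 500).T
H = np.array([[1.0, 0.0]])
Xa = enkf_update(X, H, y=np.array([1.5]), R=np.array([[0.1]]))
print(X.mean(axis=1), "->", Xa.mean(axis=1))    # both entries shift via correlation
```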

  20. Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations

    SciTech Connect

    Van Siclen, Clinton D

    2007-02-01

    A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
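
    A minimal sketch under the stated equilibration assumption: exit channels are averaged over Boltzmann weights within the basin, the residence time is drawn from the resulting exponential distribution, and the escape destination is sampled from the channel probabilities. The energies and rates below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
kT = 0.025  # eV

# Basin of trapping states (illustrative energies) and escape rates out of each
E = np.array([0.00, 0.02, 0.05])            # state energies within the basin (eV)
k_out = np.array([[1e3, 5e2],               # state 0 -> external states A, B (1/s)
                  [2e3, 1e2],               # state 1 -> A, B
                  [4e3, 8e2]])              # state 2 -> A, B

# Equilibrated occupation probabilities within the basin (Boltzmann weights)
w = np.exp(-E / kT)
pi = w / w.sum()

# Effective escape rates, stochastic residence time, and exit-channel choice
rates = pi @ k_out                          # effective rate into each external state
k_esc = rates.sum()
t_res = rng.exponential(1.0 / k_esc)        # residence time before escape
exit_state = rng.choice(len(rates), p=rates / k_esc)
print(f"dwell {t_res:.2e} s, exit via channel {exit_state}")
```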

  21. Stochastic modeling of polarized light scattering using a Monte Carlo based stencil method.

    PubMed

    Sormaz, Milos; Stamm, Tobias; Jenny, Patrick

    2010-05-01

    This paper deals with an efficient and accurate simulation algorithm to solve the vector Boltzmann equation for polarized light transport in scattering media. The approach is based on a stencil method, which was previously developed for unpolarized light scattering and proved to be much more efficient (speedup factors of up to 10 were reported) than the classical Monte Carlo while being equally accurate. To validate what we believe to be a new stencil method, a substrate composed of spherical non-absorbing particles embedded in a non-absorbing medium was considered. The corresponding single scattering Mueller matrix, which is required to model scattering of polarized light, was determined based on the Lorenz-Mie theory. From simulations of a reflected polarized laser beam, the Mueller matrix of the substrate was computed and compared with an established reference. The agreement is excellent, and it could be demonstrated that a significant speedup of the simulations is achieved due to the stencil approach compared with the classical Monte Carlo. PMID:20448777

  22. Multiscale spatial Monte Carlo simulations: Multigriding, computational singular perturbation, and hierarchical stochastic closures

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhijit; Vlachos, Dionisios G.

    2006-02-01

    Monte Carlo (MC) simulation of most spatially distributed systems is plagued by several problems, namely, execution of one process at a time, large separation of time scales of various processes, and large length scales. Recently, a coarse-grained Monte Carlo (CGMC) method was introduced that can capture large length scales at reasonable computational times. An inherent assumption in this CGMC method revolves around a mean-field closure invoked in each coarse cell that is inaccurate for short-ranged interactions. Two new approaches are explored to improve upon this closure. The first employs the local quasichemical approximation, which is applicable to first nearest-neighbor interactions. The second, termed multiscale CGMC method, employs singular perturbation ideas on multiple grids to capture the entire cluster probability distribution function via short microscopic MC simulations on small, fine-grid lattices by taking advantage of the time scale separation of multiple processes. Computational strategies for coupling the fast process at small length scales (fine grid) with the slow processes at large length scales (coarse grid) are discussed. Finally, the binomial τ-leap method is combined with the multiscale CGMC method to execute multiple processes over the entire lattice and provide additional computational acceleration. Numerical simulations demonstrate that in the presence of fast diffusion and slow adsorption and desorption processes the two new approaches provide more accurate solutions in comparison to the previously introduced CGMC method.

  23. The impact of fuel particle size distribution on neutron transport in stochastic media

    SciTech Connect

    Liang, C.; Pavlou, A. T.; Ji, W.

    2013-07-01

    This paper presents a study of the particle size distribution impact on neutron transport in three-dimensional stochastic media. An eigenvalue problem is simulated in a cylindrical container consisting of fissile fuel particles with five different size distributions: constant, uniform, power, exponential and Gaussian. We construct 15 cases by altering the fissile particle volume packing fraction and its optical thickness, but keeping the mean chord length of the spherical fuel particles the same across the different size distributions. The tallied effective multiplication factor (k_eff) and the flux distribution along the axial and radial directions are compared between different size distributions. At low packing fraction and low optical thickness, the size distribution has a significant impact on radiation transport in stochastic media, which can cause as much as ~270 pcm difference in the k_eff value and ~2.6% relative error difference in peak flux. As the packing fraction and optical thickness increase, the impact gradually dissipates. (authors)

  24. Solution of deterministic-stochastic epidemic models by dynamical Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Aièllo, O. E.; Haas, V. J.; daSilva, M. A. A.; Caliri, A.

    2000-07-01

    This work is concerned with the dynamical Monte Carlo (MC) method and its application to models originally formulated in a continuous-deterministic approach. Specifically, a susceptible-infected-removed-susceptible (SIRS) model is used in order to analyze aspects of the dynamical MC algorithm and to develop its applications in epidemic contexts. We first examine two known approaches to the dynamical interpretation of the MC method and follow with the application of one of them to the SIRS model. The working method chosen is based on the Poisson process, where a hierarchy of events, properly calculated waiting times between events, and independence of the simulated events are the basic requirements. To verify the consistency of the method, some preliminary MC results are compared against exact steady-state solutions and other general numerical results (provided by the Runge-Kutta method): good agreement is found. Finally, a space-dependent extension of the SIRS model is introduced and treated by MC. The results are interpreted in accordance with aspects of the herd-immunity concept.
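
    The Poisson-process scheme described here is essentially a Gillespie-type simulation; a minimal SIRS sketch follows, with invented rate constants: waiting times are exponential in the total propensity, and events are chosen proportionally to their individual propensities.

```python
import numpy as np

rng = np.random.default_rng(8)

# SIRS rates: infection beta*S*I/N, recovery gamma*I, loss of immunity xi*R
beta, gamma, xi = 0.3, 0.1, 0.05
S, I, R = 990, 10, 0
t, T = 0.0, 500.0

while t < T and I > 0:
    N = S + I + R
    a = np.array([beta * S * I / N, gamma * I, xi * R])   # event propensities
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)          # Poisson-process waiting time
    event = rng.choice(3, p=a / a0)         # which event fires
    if event == 0:   S -= 1; I += 1         # infection
    elif event == 1: I -= 1; R += 1         # recovery
    else:            R -= 1; S += 1         # loss of immunity
print(f"t={t:.1f}: S={S}, I={I}, R={R}")
```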

  25. A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling

    NASA Astrophysics Data System (ADS)

    Aslam, Kamran

    This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and on the resulting rankings is included, along with a discussion of ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the varying importance of points in a match, and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability, along with a realistic, fair and mathematically sound platform for ranking them.
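
    The game-level building block of such a simulation is easy to sketch: play iid points at a fixed point-win probability and apply the scoring rules (first to four points, two clear). The value 0.55 is an arbitrary example, and the dissertation's non-iid extensions are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(9)

def win_game(p):
    """Simulate one tennis game; the server wins each point with probability p
    (points treated as iid, as in the analytical Newton-Keller setting)."""
    a = b = 0
    while True:
        if rng.random() < p: a += 1
        else:                b += 1
        if a >= 4 and a - b >= 2: return True
        if b >= 4 and b - a >= 2: return False

p_point = 0.55
wins = sum(win_game(p_point) for _ in range(100_000))
print(wins / 100_000)    # ~0.62: a small point-level edge amplifies at game level
```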

  26. ATR WG-MOX Fuel Pellet Burnup Measurement by Monte Carlo - Mass Spectrometric Method

    SciTech Connect

    Chang, Gray Sen I

    2002-10-01

    This paper presents a new method for calculating the burnup of nuclear reactor fuel, the MCWO-MS method, and describes its application to an experiment currently in progress to assess the suitability for use in light-water reactors of Mixed-OXide (MOX) fuel that contains plutonium derived from excess nuclear weapons material. To demonstrate that the available experience base with Reactor-Grade Mixed uranium-plutonium OXide (RGMOX) can be applied to Weapons-Grade (WG)-MOX in light water reactors, and to support potential licensing of MOX fuel made from weapons-grade plutonium and depleted uranium for use in United States reactors, an experiment containing WG-MOX fuel is being irradiated in the Advanced Test Reactor (ATR) at the Idaho National Engineering and Environmental Laboratory. Fuel burnup is an important parameter needed for fuel performance evaluation. For the irradiated MOX fuel’s Post-Irradiation Examination, the 148Nd method is used to measure the burnup. The fission product 148Nd is an ideal burnup indicator, when appropriate correction factors are applied. In the ATR test environment, the spectrum-dependent and burnup-dependent correction factors (see Section 5 for detailed discussion) can be substantial in high fuel burnup. The validated Monte Carlo depletion tool (MCWO) used in this study can provide a burnup-dependent correction factor for the reactor parameters, such as capture-to-fission ratios, isotopic concentrations and compositions, fission power, and spectrum in a straightforward fashion. Furthermore, the correlation curve generated by MCWO can be coupled with the 239Pu/Pu ratio measured by a Mass Spectrometer (in the new MCWO-MS method) to obtain a best-estimate MOX fuel burnup. A Monte Carlo - MCWO method can eliminate the generation of few-group cross sections. The MCWO depletion tool can analyze the detailed spatial and spectral self-shielding effects in UO2, WG-MOX, and reactor-grade mixed oxide (RG-MOX) fuel pins. The MCWO-MS tool only

  27. Developments in Stochastic Fuel Efficient Cruise Control and Constrained Control with Applications to Aircraft

    NASA Astrophysics Data System (ADS)

    McDonough, Kevin K.

    The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler implementation strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed that include the use of Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first, and computational procedures for such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures. Examples of

  28. Study of CANDU thorium-based fuel cycles by deterministic and Monte Carlo methods

    SciTech Connect

    Nuttin, A.; Guillemin, P.; Courau, T.; Marleau, G.; Meplan, O.; David, S.; Michel-Sendis, F.; Wilson, J. N.

    2006-07-01

    In the framework of the Generation IV forum, there is a renewal of interest in self-sustainable thorium fuel cycles applied to various concepts such as Molten Salt Reactors [1, 2] or High Temperature Reactors [3, 4]. Precise evaluations of the U-233 production potential relying on existing reactors such as PWRs [5] or CANDUs [6] are hence necessary. As a consequence of its design (online refueling and D2O moderator in a thermal spectrum), the CANDU reactor has moreover an excellent neutron economy and consequently a high fissile conversion ratio [7]. For these reasons, we try here, with a shorter term view, to re-evaluate the economic competitiveness of once-through thorium-based fuel cycles in CANDU [8]. Two simulation tools are used: the deterministic Canadian cell code DRAGON [9] and MURE [10], a C++ tool for reactor evolution calculations based on the Monte Carlo code MCNP [11]. (authors)

  29. Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations

    SciTech Connect

    Tippayakul, C.; Ivanov, K.; Misu, S.

    2006-07-01

    This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, the online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version instead of using the burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents the result comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of k_inf, fission rate distributions and isotopic contents. (authors)

  30. A new stochastic algorithm for proton exchange membrane fuel cell stack design optimization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Uttara

    2012-10-01

    This paper develops a new stochastic heuristic for proton exchange membrane fuel cell stack design optimization. The problem involves finding the optimal size and configuration of stand-alone, fuel-cell-based power supply systems: the stack is to be configured so that it delivers the maximum power output at the load's operating voltage. The problem looks straightforward but is analytically intractable and computationally hard. No exact solution can be found, nor is it easy to find the exact number of local optima; we are therefore forced to settle for approximate or near-optimal solutions. This real-world problem, first reported in Journal of Power Sources 131, poses both engineering and computational challenges and is representative of many of today's open problems in fuel cell design involving a mix of discrete and continuous parameters. The new algorithm is compared against genetic algorithm, simulated annealing, and (1+1)-EA. Statistical tests of significance show that the results produced by our method are better than the best-known solutions for this problem published in the literature. A finite Markov chain analysis of the new algorithm establishes an upper bound on the expected time to find the optimum solution.
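
    A toy version of the underlying design problem, sketched as a stochastic perturbation search over (cells per string, parallel strings) under a total cell budget. The polarization curve, budget and load voltage are invented stand-ins, not the model from Journal of Power Sources 131, and the search is a generic heuristic rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(10)

V_LOAD = 48.0      # load operating voltage (V) - illustrative
I_MAX = 60.0       # max current per cell string (A) - illustrative
N_BUDGET = 360     # total cells available - illustrative

def cell_voltage(i):
    """Hypothetical polarization curve: open-circuit voltage minus
    logarithmic activation and linear ohmic losses."""
    return max(0.0, 1.0 - 0.06 * np.log1p(i) - 0.004 * i)

def stack_power(n_series, n_parallel):
    """Largest power deliverable at V_LOAD by n_parallel strings of
    n_series cells, scanning the per-string current."""
    best = 0.0
    for i in np.linspace(0.1, I_MAX, 600):
        if n_series * cell_voltage(i) >= V_LOAD:
            best = max(best, V_LOAD * i * n_parallel)
    return best

cfg = np.array([60, 2])                       # (cells per string, strings)
best_cfg, best_p = tuple(cfg), stack_power(*cfg)
for _ in range(2000):                         # stochastic perturbation search
    trial = np.maximum(1, cfg + rng.integers(-3, 4, size=2))
    if trial[0] * trial[1] > N_BUDGET:
        continue                              # enforce the cell budget
    p = stack_power(*trial)
    if p > best_p:
        best_cfg, best_p, cfg = tuple(trial), p, trial.copy()
print(best_cfg, round(best_p, 1), "W")
```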

  31. Properties of Solar Thermal Fuels by Accurate Quantum Monte Carlo Calculations

    NASA Astrophysics Data System (ADS)

    Saritas, Kayahan; Ataca, Can; Grossman, Jeffrey C.

    2014-03-01

    Efficient utilization of the sun as a renewable and clean energy source is one of the major goals of this century due to increasing energy demand and environmental impact. Solar thermal fuels are materials that capture and store the sun's energy in the form of chemical bonds, which can then be released as heat on demand and charged again. Previous work on solar thermal fuels faced challenges related to the cyclability of the fuel over time, as well as the need for higher energy densities. Recently, it was shown that by templating photoswitches onto carbon nanostructures, both high energy density and high stability can be achieved. In this work, we explore alternative molecules to azobenzene in such a nano-templated system. We employ the highly accurate quantum Monte Carlo (QMC) method to predict the energy storage potential for each molecule. Our calculations show that in many cases the level of accuracy provided by density functional theory (DFT) is sufficient. However, in some cases, such as dihydroazulene, the drastic change in conjugation upon light absorption causes the DFT predictions to be inconsistent and incorrect. For this case, we compare our QMC results for the geometric structure, band gap and reaction enthalpy with different DFT functionals.

  32. Monte Carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles

    SciTech Connect

    Paul P.H. Wilson

    2005-07-30

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods

  33. Convergence of the variational parameter without convergence of the energy in Quantum Monte Carlo (QMC) calculations using the Stochastic Gradient Approximation

    NASA Astrophysics Data System (ADS)

    Nissenbaum, Daniel; Lin, Hsin; Barbiellini, Bernardo; Bansil, Arun

    2009-03-01

    To study the performance of the Stochastic Gradient Approximation (SGA) for variational Quantum Monte Carlo methods, we have considered lithium nano-clusters [1] described by Hartree-Fock wavefunctions multiplied by two-body Jastrow factors with a single variational parameter b. Even when the system size increases, we have shown the feasibility of obtaining an accurate value of b that minimizes the energy without an explicit calculation of the energy itself. The present SGA algorithm is so efficient because an analytic gradient formula is used and because the statistical noise in the gradient is smaller than in the energy [2]. Interestingly, in this scheme the absolute value of the gradient is less important than the sign of the gradient. Work supported in part by U.S. DOE. [1] D. Nissenbaum et al., Phys. Rev. B 76, 033412 (2007). [2] A. Harju, J. Low. Temp. Phys. 140, 181 (2005).
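
    A minimal sketch of the SGA idea on the textbook harmonic-oscillator VMC problem with trial function exp(-b x^2) (exact optimum b = 1/2): the parameter is updated from a noisy analytic gradient estimate without ever evaluating the energy. The sample size and step schedule are arbitrary; note that the gradient estimator's noise vanishes at the minimum, which is what makes such updates efficient.

```python
import numpy as np

rng = np.random.default_rng(11)

# VMC toy: H = -1/2 d^2/dx^2 + x^2/2 with trial psi_b(x) = exp(-b x^2).
# Local energy E_L(x) = b + x^2 (1/2 - 2 b^2); the exact minimum is b = 1/2.
def grad_estimate(b, n=200):
    x = rng.normal(0.0, np.sqrt(1.0 / (4.0 * b)), n)   # sample |psi_b|^2 exactly
    EL = b + x**2 * (0.5 - 2.0 * b**2)
    O = -x**2                                          # O = d ln(psi) / d b
    # Standard VMC gradient: dE/db = 2 (<E_L O> - <E_L><O>)
    return 2.0 * (np.mean(EL * O) - EL.mean() * O.mean())

b = 0.3
for t in range(1, 2001):
    b -= (0.1 / t**0.6) * grad_estimate(b)    # decaying SGA step
    b = max(b, 1e-3)                          # keep the trial function normalizable
print(b)   # -> approximately 0.5, found without ever computing the energy itself
```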

  34. A Stochastic Method for Estimating the Effect of Isotopic Uncertainties in Spent Nuclear Fuel

    SciTech Connect

    DeHart, M.D.

    2001-08-24

    This report describes a novel approach developed at the Oak Ridge National Laboratory (ORNL) for the estimation of the uncertainty in the prediction of the neutron multiplication factor for spent nuclear fuel. This technique focuses on burnup credit, where credit is taken in criticality safety analysis for the reduced reactivity of fuel irradiated in and discharged from a reactor. Validation methods for burnup credit have attempted to separate the uncertainty associated with isotopic prediction methods from that of criticality eigenvalue calculations. Biases and uncertainties obtained in each step are combined additively. This approach, while conservative, can be excessive because of the physical assumptions employed. This report describes a statistical approach based on Monte Carlo sampling to directly estimate the total uncertainty in eigenvalue calculations resulting from uncertainties in isotopic predictions. The results can also be used to demonstrate the relative conservatism and statistical confidence associated with the method of additively combining uncertainties. This report does not make definitive conclusions on the magnitude of biases and uncertainties associated with isotopic predictions in a burnup credit analysis. These terms will vary depending on system design and the set of isotopic measurements used as a basis for estimating isotopic variances. Instead, the report describes a method that can be applied with a given design and set of isotopic data for estimating design-specific biases and uncertainties.

  35. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: case study.

    PubMed

    Bieda, Bogusław

    2014-05-15

    The purpose of the paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management.
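
    The sampling scheme described (normal distributions for the main products, 10,000 trials, summary statistics) is straightforward to outline; the production figures below are placeholders rather than MSP data, and plain NumPy stands in for the CrystalBall® add-in.

```python
import numpy as np

rng = np.random.default_rng(13)

# Placeholder production totals (t) with assumed normal uncertainty (mean, std)
processes = {"steel": (5.0e6, 2.5e5), "coke": (1.2e6, 6.0e4), "sinter": (4.0e6, 2.0e5)}

trials = 10_000
totals = sum(rng.normal(mu, sd, trials) for mu, sd in processes.values())

# Summary statistics in place of CB's frequency charts and statistical reports
print(f"mean {totals.mean():.3e} t, 5th-95th percentile "
      f"[{np.percentile(totals, 5):.3e}, {np.percentile(totals, 95):.3e}]")
```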

  36. An extended stochastic reconstruction method for catalyst layers in proton exchange membrane fuel cells

    NASA Astrophysics Data System (ADS)

    Kang, Jinfen; Moriyama, Koji; Kim, Seung Hyun

    2016-09-01

    This paper presents an extended, stochastic reconstruction method for catalyst layers (CLs) of Proton Exchange Membrane Fuel Cells (PEMFCs). The focus is placed on the reconstruction of customized, low platinum (Pt) loading CLs where the microstructure of CLs can substantially influence the performance. The sphere-based simulated annealing (SSA) method is extended to generate the CL microstructures with specified and controllable structural properties for agglomerates, ionomer, and Pt catalysts. In the present method, the agglomerate structures are controlled by employing a trial two-point correlation function used in the simulated annealing process. An off-set method is proposed to generate more realistic ionomer structures. The variations of ionomer structures at different humidity conditions are considered to mimic the swelling effects. A method to control Pt loading, distribution, and utilization is presented. The extension of the method to consider heterogeneity in structural properties, which can be found in manufactured CL samples, is presented. Various reconstructed CLs are generated to demonstrate the capability of the proposed method. Proton transport properties of the reconstructed CLs are calculated and validated with experimental data.

  37. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

    An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. The results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
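
    A minimal sketch of a creeping random search with clipped parameter constraints and outright rejection of state-constraint violations; the quadratic objective and linear constraint are stand-ins for the beam-transport model.

```python
import numpy as np

rng = np.random.default_rng(12)

# Toy objective standing in for beam resolution: minimize f subject to parameter
# bounds and a "state-variable" constraint g(x) <= 0 checked on each trial point.
f = lambda x: (x[0] - 1.0)**2 + 10 * (x[1] + 0.5)**2
g = lambda x: x[0] + x[1] - 1.0            # feasible iff g(x) <= 0
lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])

x = np.array([0.0, 0.0])
fx, step = f(x), 1.0
for _ in range(4000):
    trial = np.clip(x + step * rng.standard_normal(2), lo, hi)  # parameter constraints
    if g(trial) > 0:                       # infeasible state: reject outright
        continue
    ft = f(trial)
    if ft < fx:                            # creep toward any improvement
        x, fx = trial, ft
    else:
        step = max(0.999 * step, 1e-3)     # slowly shrink the search radius
print(x, fx)   # -> near (1.0, -0.5), the constrained optimum
```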

  1. Predicting fissile content of spent nuclear fuel assemblies with the passive neutron Albedo reactivity technique and Monte Carlo code emulation

    SciTech Connect

    Conlin, Jeremy Lloyd; Tobin, Stephen J

    2010-10-13

    There is a great need in the safeguards community for the ability to nondestructively quantify the plutonium mass of a spent nuclear fuel assembly. As part of the Next Generation Safeguards Initiative, we are investigating several techniques, or detector systems, which, when integrated, will be capable of quantifying the plutonium mass of a spent fuel assembly without dismantling it. This paper reports on the simulation of one of these techniques, the Passive Neutron Albedo Reactivity with Fission Chambers (PNAR-FC) system. The response of this system over a wide range of spent fuel assemblies with different burnup, initial enrichment, and cooling time characteristics is shown. A Monte Carlo method of using these modeled results to estimate the fissile content of a spent fuel assembly has been developed. A few numerical demonstrations of this method are shown. Finally, additional developments that are still needed and under way are discussed.

  2. Effects of fuel cetane number on the structure of diesel spray combustion: An accelerated Eulerian stochastic fields method

    NASA Astrophysics Data System (ADS)

    Jangi, Mehdi; Lucchini, Tommaso; Gong, Cheng; Bai, Xue-Song

    2015-09-01

    An Eulerian stochastic fields (ESF) method accelerated with the chemistry coordinate mapping (CCM) approach for modelling spray combustion is formulated and applied to model diesel combustion in a constant-volume vessel. In ESF-CCM, the thermodynamic states of the discretised stochastic fields are mapped into a low-dimensional phase space. Integration of the stiff chemical ODEs is performed in the phase space and the results are mapped back to the physical domain. After validating ESF-CCM, the method is used to investigate the effects of fuel cetane number on the structure of diesel spray combustion. It is shown that, depending on the fuel cetane number, the liftoff length varies, which can lead to a change in combustion mode from classical diesel spray combustion to fuel-lean premixed combustion. Spray combustion with a shorter liftoff length exhibits the characteristics of the classical conceptual diesel combustion model proposed by Dec in 1997 (http://dx.doi.org/10.4271/970873), whereas in a case with a lower cetane number the liftoff length is much larger and the spray combustion probably occurs in a fuel-lean premixed mode. Nevertheless, the transport budget at the liftoff location shows that stabilisation at all cetane numbers is governed primarily by the auto-ignition process.

  3. Two methods of random seed generation to avoid over-segmentation with stochastic watershed: application to nuclear fuel micrographs.

    PubMed

    Tolosa, S Cativa; Blacher, S; Denis, A; Marajofsky, A; Pirard, J-P; Gommes, C J

    2009-10-01

    A stochastic version of the watershed algorithm is obtained by choosing randomly in the image the seeds from which the watershed regions are grown. The output of the procedure is a probability density function corresponding to the probability that each pixel belongs to a boundary. In the present paper, two stochastic seed-generation processes are explored to avoid over-segmentation. The first is a non-uniform Poisson process, the density of which is optimized on the basis of opening granulometry. The second process positions the seeds randomly within disks centred on the maxima of a distance map. The two methods are applied to characterize the grain structure of nuclear fuel pellets. Estimators are proposed for the total edge length and grain number per unit area, L_A and N_A, which take advantage of the probabilistic nature of the probability density function and do not require segmentation.
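
    A minimal stochastic-watershed loop is sketched below with plain uniform random seeds, i.e. the baseline whose over-segmentation the two proposed seed-generation processes are designed to mitigate. The test image and seed counts are arbitrary stand-ins for a fuel-pellet micrograph.

      # Repeat watershed from random markers and accumulate a
      # boundary-probability image (the pdf described above).
      import numpy as np
      from scipy import ndimage as ndi
      from skimage import data, filters, segmentation

      rng = np.random.default_rng(7)
      image = data.coins()                  # stand-in micrograph
      gradient = filters.sobel(image.astype(float))

      n_seeds, n_real = 60, 100
      pdf = np.zeros(image.shape)
      for _ in range(n_real):
          markers = np.zeros(image.shape, dtype=int)
          ys = rng.integers(0, image.shape[0], n_seeds)
          xs = rng.integers(0, image.shape[1], n_seeds)
          markers[ys, xs] = 1
          markers, _ = ndi.label(markers)   # one label per random seed
          labels = segmentation.watershed(gradient, markers)
          pdf += segmentation.find_boundaries(labels)
      pdf /= n_real   # probability that each pixel lies on a boundary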

  4. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Program Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, the sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 km x 500 km area.
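
    The flavor of this experiment can be reproduced with a toy model: generate a synthetic area-averaged rain-rate series, "observe" it only at a fixed revisit interval, and tabulate the monthly-mean errors. The AR(1) rain process and all time constants below are illustrative assumptions, not the GATE-tuned stochastic model.

      import numpy as np

      rng = np.random.default_rng(3)
      month = 30 * 24          # hours in a simulated month
      revisit = 12             # hours between satellite overpasses

      def month_of_rain():
          r, out = 0.0, np.empty(month)
          for t in range(month):            # AR(1) with intermittency
              r = 0.95 * r + 0.3 * rng.standard_normal()
              out[t] = max(r, 0.0)          # clip: no negative rain rate
          return out

      errors = []
      for _ in range(2000):                 # many simulated months
          rain = month_of_rain()
          true_mean = rain.mean()
          if true_mean > 0:
              sampled_mean = rain[::revisit].mean()
              errors.append((sampled_mean - true_mean) / true_mean)
      print("fractional sampling error (st. dev.):", np.std(errors))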

  5. Stochastic Convection Parameterizations

    NASA Technical Reports Server (NTRS)

    Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios

    2012-01-01

    computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection, stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts

  6. Stochastic simulation of fission product activity in primary coolant due to fuel rod failures in typical PWRs under power transients

    NASA Astrophysics Data System (ADS)

    Javed Iqbal, M.; Mirza, Nasir M.; Mirza, Sikander M.

    2008-01-01

    During normal operation of PWRs, routine fuel rod failures result in the release of radioactive fission products (RFPs) into the primary coolant. In this work, a stochastic model has been developed to simulate failure time sequences and release rates for the estimation of fission product activity in the primary coolant of a typical PWR under power perturbations. In the first part, a stochastic approach is developed, based on the generation of fuel failure event sequences by sampling time-dependent intensity functions. Then the three-stage, model-based deterministic methodology of the FPCART code has been extended to include failure sequences and random release rates in a computer code, FPCART-ST, which uses the state-of-the-art LEOPARD and ODMUG codes as its subroutines. The 131I activity in the primary coolant predicted by the FPCART-ST code has been found to be in good agreement with the corresponding values measured at the ANGRA-1 nuclear power plant. The predictions of the FPCART-ST code with the constant-release option have also been found to be in good agreement with corresponding experimental values for time-dependent 135I, 135Xe and 89Kr concentrations in the primary coolant measured during the EDITHMOX-1 experiments.
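
    The failure-sequence generation step can be sketched as sampling a nonhomogeneous Poisson process by thinning. The intensity function below (a baseline failure rate plus a transient bump during a power ramp) is a made-up stand-in for the intensity functions used in FPCART-ST.

      import numpy as np

      rng = np.random.default_rng(5)

      def intensity(t):                 # failures per day at time t (days)
          ramp = 5.0 * np.exp(-0.5 * ((t - 100.0) / 5.0) ** 2)
          return 0.02 + 0.02 * ramp     # baseline + power-transient term

      def failure_times(t_end, lam_max):
          # lam_max must bound intensity(t) from above on [0, t_end].
          t, events = 0.0, []
          while True:
              t += rng.exponential(1.0 / lam_max)   # candidate event
              if t > t_end:
                  return np.array(events)
              if rng.random() < intensity(t) / lam_max:
                  events.append(t)                  # accepted (thinning)

      times = failure_times(t_end=300.0, lam_max=0.15)
      print(len(times), "rod failures; first few at", times[:5])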

  7. Thermal aging stability of infiltrated solid oxide fuel cell electrode microstructures: A three-dimensional kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Yanxiang; Ni, Meng; Yan, Mufu; Chen, Fanglin

    2015-12-01

    Nanostructured electrodes are widely used for low-temperature solid oxide fuel cells due to their remarkably high activity. However, industrial applications of infiltrated electrodes are hindered by durability issues, such as microstructure stability against thermal aging. Few strategies are available to overcome this challenge because of the limited knowledge of the coarsening kinetics of infiltrated electrodes and of how the potentially important factors affect stability. In this work, the generic thermal aging kinetics of the three-dimensional microstructures of infiltrated electrodes are investigated with a kinetic Monte Carlo simulation model that considers a surface diffusion mechanism. The effects of temperature, infiltration loading, wettability, and electrode configuration are studied, and the key geometric parameters are calculated, such as the infiltrate particle size, the total and percolated quantities of three-phase boundary length and infiltrate surface area, and the tortuosity factor of the infiltrate network. Through this parametric study, several strategies to improve thermal aging stability are proposed.

  8. Characterization of exposure-dependent eigenvalue drift using Monte Carlo based nuclear fuel management

    NASA Astrophysics Data System (ADS)

    Xoubi, Ned

    2005-12-01

    The ability to accurately predict the multiplication factor (keff) of a nuclear reactor core as a function of exposure continues to be an elusive task for core designers despite decades of advances in computational methods. The difference between a predicted eigenvalue (target) and the actual eigenvalue at critical reactor conditions is herein referred to as the "eigenvalue drift." This dissertation studies exposure-dependent eigenvalue drift using MCNP-based fuel management analysis of the ORNL High Flux Isotope Reactor core. Spatial-dependent burnup is evaluated using the MONTEBURNS and ALEPH codes to link MCNP to ORIGEN to help analyze the behavior of keff as a function of fuel exposure. Understanding the exposure-dependent eigenvalue drift of a nuclear reactor is of particular relevance when trying to predict the impact of major design changes upon fuel cycle behavior and length. In this research, the design of an advanced HFIR core with a fuel loading of 12 kg of 235U is contrasted against the current loading of 9.4 kg. The goal of applying exposure dependent eigenvalue characterization is to produce a more accurate prediction of the fuel cycle length than prior analysis techniques, and to improve our understanding of the reactivity behavior of the core throughout the cycle. This investigation predicted a fuel cycle length of 40 days, representing a 50% increase in the cycle length in response to a 25% increase in fuel loading. The average burnup increased by about 48 MWd/kg U and it was confirmed that the excess reactivity can be controlled with the present design and arrangement of control elements throughout the core's life. Another major design change studied was the effect of installing an internal beryllium reflector upon cycle length. Exposure dependent eigenvalue predictions indicate that the actual benefit could be twice as large as that originally assessed via beginning-of-life (BOL) analyses.

  9. Monte Carlo boundary source approach in MOX fuel test capsule design

    SciTech Connect

    Chang, G.S.; Ryskamp, J.M.

    1999-09-01

    To demonstrate that the differences between weapons-grade (WG) mixed oxide (MOX) and reactor-grade MOX fuel are minimal, and that the commercial MOX experience base is therefore applicable, an average-power test (6 to 10 kW/ft) of WG MOX fuel was inserted into the Advanced Test Reactor (ATR) in January 1998. A high-power test (10 to 15 kW/ft) of WG MOX fuel in ATR is being fabricated as a follow-on to the average-power test. Two MOX capsules with 8.9 GWd/t burnup were removed from ATR on September 13, 1998, and replaced by two fresh WG MOX fuel capsules in regions with lower thermal neutron flux (top-1 and bottom-1, which are away from the core center). To compensate for {sup 239}Pu depletion, which causes the linear heat generation rates (LHGRs) to decrease, the INCONEL shield was replaced by an aluminum shield in the phase-II irradiation. The authors describe and compare the results of the detailed MCNP ATR quarter core model (QCM) and the isolated box model with boundary source (IBMBS). Physics analyses were performed with these two different models to provide the neutron/fission heat rate distribution data in the WG MOX fuel test assembly, with INCONEL and aluminum shrouds, located in the small I-24 hole of ATR.

  10. Monte Carlo characterization of PWR spent fuel assemblies to determine the detectability of pin diversion

    NASA Astrophysics Data System (ADS)

    Burdo, James S.

    This research is based on the concept that the diversion of nuclear fuel pins from Light Water Reactor (LWR) spent fuel assemblies can be detected by a careful comparison of spontaneous fission neutron and gamma levels in the guide tube locations of the fuel assemblies. The goal is to be able to determine whether some of the assembly fuel pins are either missing or have been replaced with dummy or fresh fuel pins. It is known that for typical commercial power spent fuel assemblies, the dominant spontaneous neutron emissions come from Cm-242 and Cm-244. Because of the shorter half-life of Cm-242 (0.45 yr) relative to that of Cm-244 (18.1 yr), Cm-244 is practically the only neutron source contributing to the neutron source term once the spent fuel assemblies are more than two years old. Initially, this research focused upon developing MCNP5 models of PWR fuel assemblies, modeling their depletion using the MONTEBURNS code, and carrying out a preliminary depletion of a ¼ model 17x17 assembly from the TAKAHAMA-3 PWR. Later, the depletion and a more accurate isotopic distribution in the pins at discharge were modeled using the TRITON depletion module of the SCALE computer code. Benchmarking comparisons were performed between the MONTEBURNS and TRITON results. Subsequently, the neutron flux in each of the guide tubes of the TAKAHAMA-3 PWR assembly at two years after discharge, as calculated by the MCNP5 computer code, was determined for various scenarios. Cases were considered for all spent fuel pins present and for replacement of a single pin at a position near the center of the assembly (10,9) and at the corner (17,1). Some scenarios were duplicated with a gamma flux calculation for high energies associated with Cm-244. For each case, the difference between the flux (neutron or gamma) for all spent fuel pins and with a pin removed or replaced is calculated for each guide tube. Different detection criteria were established. The first was whether the relative error of the

  11. Random-Walk Monte Carlo Simulation of Intergranular Gas Bubble Nucleation in UO2 Fuel

    SciTech Connect

    Yongfeng Zhang; Michael R. Tonks; S. B. Biner; D.A. Andersson

    2012-11-01

    Using a random-walk particle algorithm, we investigate the clustering of fission gas atoms on grain boundaries in oxide fuels. The computational algorithm implemented in this work considers a planar surface representing a grain boundary on which particles appear at a rate dictated by the Booth flux, migrate two-dimensionally according to their grain boundary diffusivity, and coalesce by random encounters. Specifically, the intergranular bubble nucleation density is the key variable we investigate using a parametric study in which the temperature, grain boundary gas diffusivity, and grain boundary segregation energy are varied. The results reveal that the grain boundary bubble nucleation density can vary widely due to these three parameters, which may be an important factor in the observed variability in intergranular bubble percolation among grain boundaries in oxide fuel during fission gas release.
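
    A minimal 2D analogue of such a random-walk algorithm: atoms arrive on a periodic grain-boundary plane at a fixed rate, take Gaussian random-walk steps, and coalesce on close encounters. The arrival rate, step length, and capture radius are arbitrary illustrative values, not the Booth flux or measured diffusivities.

      import numpy as np

      rng = np.random.default_rng(11)
      L, steps = 100.0, 2000
      arrival_prob, step_len, r_capture = 0.5, 0.5, 1.0
      pos = np.empty((0, 2))             # positions of walkers/clusters

      for _ in range(steps):
          if rng.random() < arrival_prob:              # a new atom lands
              pos = np.vstack([pos, rng.uniform(0, L, 2)])
          pos = (pos + rng.normal(0, step_len, pos.shape)) % L
          # coalesce pairs closer than the capture radius
          keep = np.ones(len(pos), dtype=bool)
          for i in range(len(pos)):
              if not keep[i]:
                  continue
              d = np.linalg.norm(pos - pos[i], axis=1)
              close = (d < r_capture) & keep
              close[i] = False
              keep[close] = False        # absorbed into cluster i
          pos = pos[keep]

      print("surviving nucleation sites:", len(pos))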

  12. Decadal climatic variability and regional weather simulation: stochastic nature of forest fuel moisture and climatic forcing

    NASA Astrophysics Data System (ADS)

    Tsinko, Y.; Johnson, E. A.; Martin, Y. E.

    2014-12-01

    The natural range of variability of forest fire frequency is of great interest due to the currently changing climate and an apparent increase in the number of fires. The variability of the annual area burned in Canada was not stable in the 20th century. Recently, these changes have been linked to large-scale climate cycles, such as Pacific Decadal Oscillation (PDO) phases and the El Niño Southern Oscillation (ENSO). The positive phase of the PDO was associated with an increased probability of hot dry spells leading to drier fuels and increased area burned. However, so far only one historical timeline has been used to assess correlations between natural climate oscillations and forest fire frequency. To address this limitation, weather generators are used extensively in hydrological and agricultural modeling to extend short instrumental records and to synthesize long sequences of daily weather parameters that are different from, but statistically similar to, historical weather. In the current study, synthetic weather models were used to assess the effects of alternative weather timelines on fuel moisture in Canada, using Canadian Forest Fire Weather Index moisture codes, and on potential fire frequency. The variability of fuel moisture codes was found to increase with the length of the simulated series, indicating that the natural range of variability of forest fire frequency may be larger than that calculated from the available short records. This may be viewed as a manifestation of the Hurst effect. Since PDO phases are thought to be caused by diverse mechanisms, including overturning oceanic circulation, some of the lower-frequency signals may be attributed to the long-term memory of the oceanic system. Thus, care must be taken when assessing the natural variability of climate-dependent processes without accounting for potential long-term mechanisms.

  13. A stochastic model and Monte Carlo algorithm for fluctuation-induced H{sub 2} formation on the surface of interstellar dust grains

    SciTech Connect

    Sabelfeld, K.K.

    2015-09-01

    A stochastic algorithm for simulation of fluctuation-induced kinetics of H{sub 2} formation on grain surfaces is suggested as a generalization of the technique developed in our recent studies [1] where this method was developed to describe the annihilation of spatially separate electrons and holes in a disordered semiconductor. The stochastic model is based on the spatially inhomogeneous, nonlinear integro-differential Smoluchowski equations with random source term. In this paper we derive the general system of Smoluchowski type equations for the formation of H{sub 2} from two hydrogen atoms on the surface of interstellar dust grains with physisorption and chemisorption sites. We focus in this study on the spatial distribution, and numerically investigate the segregation in the case of a source with a continuous generation in time and randomly distributed in space. The stochastic particle method presented is based on a probabilistic interpretation of the underlying process as a stochastic Markov process of interacting particle system in discrete but randomly progressed time instances. The segregation is analyzed through the correlation analysis of the vector random field of concentrations which appears to be isotropic in space and stationary in time.

  14. A stochastic model and Monte Carlo algorithm for fluctuation-induced H2 formation on the surface of interstellar dust grains

    NASA Astrophysics Data System (ADS)

    Sabelfeld, K. K.

    2015-09-01

    A stochastic algorithm for simulation of fluctuation-induced kinetics of H2 formation on grain surfaces is suggested as a generalization of the technique developed in our recent studies [1] where this method was developed to describe the annihilation of spatially separate electrons and holes in a disordered semiconductor. The stochastic model is based on the spatially inhomogeneous, nonlinear integro-differential Smoluchowski equations with random source term. In this paper we derive the general system of Smoluchowski type equations for the formation of H2 from two hydrogen atoms on the surface of interstellar dust grains with physisorption and chemisorption sites. We focus in this study on the spatial distribution, and numerically investigate the segregation in the case of a source with a continuous generation in time and randomly distributed in space. The stochastic particle method presented is based on a probabilistic interpretation of the underlying process as a stochastic Markov process of interacting particle system in discrete but randomly progressed time instances. The segregation is analyzed through the correlation analysis of the vector random field of concentrations which appears to be isotropic in space and stationary in time.

  15. Neutron analysis of spent fuel storage installation using parallel computing and advance discrete ordinates and Monte Carlo techniques.

    PubMed

    Shedlock, Daniel; Haghighat, Alireza

    2005-01-01

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinates PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous-energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous-energy cross sections, and the fact that the CASK library is based on the old ENDF

  17. Stochastic Optimization of Complex Systems

    SciTech Connect

    Birge, John R.

    2014-03-20

    This project focused on methodologies for the solution of stochastic optimization problems based on relaxation and penalty methods, Monte Carlo simulation, parallel processing, and inverse optimization. The main results of the project were the development of a convergent method for the solution of models that include expectation constraints as in equilibrium models, improvement of Monte Carlo convergence through the use of a new method of sample batch optimization, the development of new parallel processing methods for stochastic unit commitment models, and the development of improved methods in combination with parallel processing for incorporating automatic differentiation methods into optimization.
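
    As a generic illustration of the Monte Carlo side of such work, the sketch below shows textbook sample-average approximation (SAA): the minimizer of the sampled objective stabilizes as the batch grows. This is not the project's batch-optimization method, only the underlying idea.

      import numpy as np

      rng = np.random.default_rng(2)

      def F(x, xi):                     # stochastic objective
          return (x - xi) ** 2          # true minimizer is E[xi] = 1

      xs = np.linspace(-1.0, 3.0, 401)
      for n in (10, 100, 1000, 10000):
          xi = rng.normal(1.0, 1.0, n)  # one Monte Carlo sample batch
          saa = np.array([F(x, xi).mean() for x in xs])
          print(n, "samples -> argmin ~", xs[saa.argmin()])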

  18. A stochastic Monte Carlo approach to modelling real star cluster evolution - III. Direct integration of three- and four-body interactions

    NASA Astrophysics Data System (ADS)

    Giersz, M.; Spurzem, R.

    2003-08-01

    Spherically symmetric equal-mass star clusters containing a large number of primordial binaries are studied using a hybrid method consisting of a gas dynamical model for single stars and a Monte Carlo treatment for the relaxation of binaries and the setup of close resonant and fly-by encounters of single stars with binaries and of binaries with each other (three- and four-body encounters). What differs from our previous work is that each encounter is integrated using a highly accurate direct few-body integrator which uses regularized variables. Hence we can study the systematic evolution of individual binary orbital parameters (eccentricity, semimajor axis) and differential and total cross-sections for hardening, dissolution or merging of binaries (minimum distance) from a sampling of several tens of thousands of scattering events as they occur in real cluster evolution, including mass segregation of binaries, gravothermal collapse and re-expansion, a binary burning phase and ultimately gravothermal oscillations. For the first time we are able to present empirical cross-sections for the eccentricity variation of binaries in close three- and four-body encounters. It is found that a large fraction of three- and four-body encounters result in merging. Eccentricities are generally increased in strong three- and four-body encounters, and the differential cross-section for eccentricity changes follows a characteristic scaling law proportional to exp(4e_fin), where e_fin is the final eccentricity of the binary, or of the harder binary for four-body encounters. Despite these findings the overall eccentricity distribution remains thermal for all binding energies of binaries, which is understood from the dominant influence of resonant encounters. Previous cross-sections obtained by Spitzer and Gao for strong encounters can be reproduced, while for weak encounters non-standard processes such as the formation of hierarchical triples occur.

  19. Stochastic microstructural modeling of fuel cell gas diffusion layers and numerical determination of transport properties in different liquid water saturation levels

    NASA Astrophysics Data System (ADS)

    Tayarani-Yoosefabadi, Z.; Harvey, D.; Bellerive, J.; Kjeang, E.

    2016-01-01

    Gas diffusion layer (GDL) materials in polymer electrolyte membrane fuel cells (PEMFCs) are commonly made hydrophobic to enhance water management by avoiding liquid water blockage of the pores and facilitating reactant gas transport to the adjacent catalyst layer. In this work, a stochastic microstructural modeling approach is developed to simulate the transport properties of a commercial carbon-paper-based GDL under a range of PTFE loadings and liquid water saturation levels. The proposed stochastic method mimics the GDL manufacturing process steps and resolves all relevant phases, including fiber, binder, PTFE, liquid water, and gas. After thorough validation of the general microstructure against literature and in-house data, a comprehensive set of anisotropic transport properties is simulated for the reconstructed GDL at different PTFE loadings and liquid water saturation levels and validated through comparison with in-house ex situ experimental data and empirical formulations. In general, the results show good agreement between simulated and measured data. Decreasing trends in porosity, gas diffusivity, and permeability are obtained with increasing PTFE loading and liquid water content, while the thermal conductivity is found to increase with liquid water saturation. Using the validated model, new correlations for saturation-dependent GDL properties are proposed.

  20. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    SciTech Connect

    Xu, Zuwei; Zhao, Haibo Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are

  1. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang

    2015-01-01

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance-rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
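
    The acceptance-rejection idea with a majorant kernel can be sketched in a few lines for an unweighted direct-simulation coagulation step. The sum kernel and the initial size distribution are illustrative choices; the differential weighting, coagulation-rule matrix, and GPU parallelism of the paper are omitted.

      import numpy as np

      rng = np.random.default_rng(9)
      v = rng.exponential(1.0, 1000)        # particle volumes
      t, t_end = 0.0, 1.0

      def kernel(a, b):                     # illustrative sum kernel
          return a + b

      while t < t_end and len(v) > 1:
          n = len(v)
          k_hat = 2.0 * v.max()             # majorant of kernel over pairs
          rate = 0.5 * n * (n - 1) * k_hat  # majorant total coagulation rate
          t += rng.exponential(1.0 / rate)  # waiting time to next event
          i, j = rng.choice(n, 2, replace=False)
          if rng.random() < kernel(v[i], v[j]) / k_hat:
              v[i] += v[j]                  # accepted: coagulate i and j
              v = np.delete(v, j)           # rejected events only advance t

      print(len(v), "particles remain at t =", round(t, 3))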

  2. Environmental decision-making using life cycle impact assessment and stochastic multiattribute decision analysis: a case study on alternative transportation fuels.

    PubMed

    Rogers, Kristin; Seager, Thomas P

    2009-03-15

    Life cycle impact assessment (LCIA) involves weighing trade-offs between multiple and incommensurate criteria. Current state-of-the-art LCIA tools typically compute an overall environmental score using a linear-weighted aggregation of characterized inventory data that has been normalized relative to total industry, regional, or national emissions. However, current normalization practices risk masking impacts that may be significant within the context of the decision, albeit small relative to the reference data (e.g., total U.S. emissions). Additionally, the uncertainty associated with quantification of weights is generally very high. Partly for these reasons, many LCA studies truncate impact assessment at the inventory characterization step rather than completing the normalization and weighting steps. This paper describes a novel approach called stochastic multiattribute life cycle impact assessment (SMA-LCIA) that combines an outranking approach to normalization with stochastic exploration of weight spaces, avoiding some of the drawbacks of current LCIA methods. To illustrate the new approach, SMA-LCIA is compared with a typical LCIA method for crop-based, fossil-based, and electric fuels, using the Greenhouse gas Regulated Emissions and Energy Use in Transportation (GREET) model for inventory data and the Tool for the Reduction and Assessment of Chemical and other Environmental Impacts (TRACI) model for data characterization. In contrast to the typical LCIA case, in which results are dominated by fossil fuel depletion and global warming considerations regardless of criteria weights, the SMA-LCIA approach results in a rank ordering that is more sensitive to decision-maker preferences. The principal advantage of the SMA-LCIA method is its ability to facilitate exploration and construction of context-specific criteria preferences by simultaneously representing multiple weight spaces and the sensitivity of the rank ordering to uncertain stakeholder values. PMID:19368162
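
    The stochastic exploration of weight spaces can be illustrated with a minimal sketch: draw criteria weights uniformly from the simplex, score each alternative, and tabulate how often each ranks first. The impact matrix below is fabricated, and the outranking normalization step is omitted.

      import numpy as np

      rng = np.random.default_rng(4)
      fuels = ["crop-based", "fossil", "electric"]
      # rows: fuels; columns: normalized impact scores (lower is better)
      impacts = np.array([[0.4, 0.9, 0.3, 0.6],
                          [0.9, 0.5, 0.8, 0.2],
                          [0.3, 0.6, 0.5, 0.9]])

      wins = np.zeros(len(fuels))
      for _ in range(10000):
          w = rng.dirichlet(np.ones(impacts.shape[1]))  # random weights
          wins[(impacts @ w).argmin()] += 1             # lowest total impact
      for name, frac in zip(fuels, wins / wins.sum()):
          print(f"{name}: ranked first in {frac:.1%} of weight samples")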

  3. Using the Monte Carlo Coupling Technique to Evaluate the Shielding Ability of a Modular Shielding House to Accommodate Spent-Fuel Transportable Storage Casks

    SciTech Connect

    Ueki, Kohtaro; Kawakami, Kazuo; Shimizu, Daisuke

    2003-02-15

    The Monte Carlo coupling technique with the coordinate transformation is used to evaluate the shielding ability of a modular shielding house that accommodates four spent-fuel transportable storage casks for two units. The effective dose rate distributions can be obtained as far as 300 m from the center of the shielding house. The coupling technique is created with the Surface Source Write (SSW) card and the Surface Source Read/Coordinate Transformation (SSR/CRT) card in the MCNP 4C continuous energy Monte Carlo code as the 'SSW-SSR/CRT calculation system'. In the present Monte Carlo coupling calculation, the total effective dose rates 100, 200, and 300 m from the center of the shielding house are estimated to be 1.69, 0.285, and 0.0826 ({mu}Sv/yr per four casks), respectively. Accordingly, if the distance between the center of the shielding house and the site boundary of the storage facility is kept at >300 m, approximately 2400 casks are able to be accommodated in the modular shielding houses, under the Japanese severe criterion of 50 {mu}Sv/yr at the site boundary. The shielding house alone satisfies not only the technical conditions but also the economic requirements. It became evident that secondary gamma rays account for >60% of the total effective dose rate at all the calculated points around the shielding house, most of which are produced from the water in the steel-water-steel shielding system of the shielding house. The remainder of the dose rate comes mostly from neutrons; the fission product and {sup 60}Co activation gamma rays account for small percentages. Accordingly, reducing the secondary gamma rays is critical to improving not only the shielding ability but also the radiation safety of the shielding house.

  4. Stochastic Feedforward Control Technique

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1990-01-01

    A class of commanded trajectories is modeled as a stochastic process. The Advanced Transport Operating Systems (ATOPS) research and development program conducted by NASA Langley Research Center is aimed at developing capabilities for increased airport capacity; safe and accurate flight in adverse weather conditions, including wind shear; avoidance of wake vortices; and reduced fuel consumption. It draws on advances in modern control design techniques and the increased capabilities of digital flight computers, coupled with accurate guidance information from the Microwave Landing System (MLS). The stochastic feedforward control technique was developed within the context of the ATOPS program.

  5. Experiments and Theoretical Data for Studying the Impact of Fission Yield Uncertainties on the Nuclear Fuel Cycle with TALYS/GEF and the Total Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Pomp, S.; Al-Adili, A.; Alhassan, E.; Gustavsson, C.; Helgesson, P.; Hellesen, C.; Koning, A. J.; Lantz, M.; Österlund, M.; Rochman, D.; Simutkin, V.; Sjöstrand, H.; Solders, A.

    2015-01-01

    We describe the research program of the nuclear reactions research group at Uppsala University concerning experimental and theoretical efforts to quantify and reduce nuclear data uncertainties relevant for the nuclear fuel cycle. We briefly describe the Total Monte Carlo (TMC) methodology and how it can be used to study fuel cycle and accident scenarios, and summarize our relevant experimental activities. Input from the latter is to be used to guide the nuclear models and constrain parameter space for TMC. The TMC method relies on the availability of good nuclear models. For this we use the TALYS code which is currently being extended to include the GEF model for the fission channel. We present results from TALYS-1.6 using different versions of GEF with both default and randomized input parameters and compare calculations with experimental data for 234U(n,f) in the fast energy range. These preliminary studies reveal some systematic differences between experimental data and calculations but give overall good and promising results.

  6. Experiments and Theoretical Data for Studying the Impact of Fission Yield Uncertainties on the Nuclear Fuel Cycle with TALYS/GEF and the Total Monte Carlo Method

    SciTech Connect

    Pomp, S.; Al-Adili, A.; Alhassan, E.; Gustavsson, C.; Helgesson, P.; Hellesen, C.; Koning, A.J.; Lantz, M.; Österlund, M.; Rochman, D.; Simutkin, V.; Sjöstrand, H.; Solders, A.

    2015-01-15

    We describe the research program of the nuclear reactions research group at Uppsala University concerning experimental and theoretical efforts to quantify and reduce nuclear data uncertainties relevant for the nuclear fuel cycle. We briefly describe the Total Monte Carlo (TMC) methodology and how it can be used to study fuel cycle and accident scenarios, and summarize our relevant experimental activities. Input from the latter is to be used to guide the nuclear models and constrain parameter space for TMC. The TMC method relies on the availability of good nuclear models. For this we use the TALYS code which is currently being extended to include the GEF model for the fission channel. We present results from TALYS-1.6 using different versions of GEF with both default and randomized input parameters and compare calculations with experimental data for {sup 234}U(n,f) in the fast energy range. These preliminary studies reveal some systematic differences between experimental data and calculations but give overall good and promising results.

  7. Stochastic games

    PubMed Central

    Solan, Eilon; Vieille, Nicolas

    2015-01-01

    In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which were the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution. PMID:26556883

  8. Stochastic resonance

    NASA Astrophysics Data System (ADS)

    Gammaitoni, Luca; Hänggi, Peter; Jung, Peter; Marchesoni, Fabio

    1998-01-01

    Over the last two decades, stochastic resonance has continuously attracted considerable attention. The term is given to a phenomenon that is manifest in nonlinear systems whereby generally feeble input information (such as a weak signal) can be amplified and optimized by the assistance of noise. The effect requires three basic ingredients: (i) an energetic activation barrier or, more generally, a form of threshold; (ii) a weak coherent input (such as a periodic signal); (iii) a source of noise that is inherent in the system, or that adds to the coherent input. Given these features, the response of the system undergoes resonance-like behavior as a function of the noise level; hence the name stochastic resonance. The underlying mechanism is fairly simple and robust. As a consequence, stochastic resonance has been observed in a large variety of systems, including bistable ring lasers, semiconductor devices, chemical reactions, and mechanoreceptor cells in the tail fan of a crayfish. In this paper, the authors report, interpret, and extend much of the current understanding of the theory and physics of stochastic resonance. They introduce the readers to the basic features of stochastic resonance and its recent history. Definitions of the characteristic quantities that are important to quantify stochastic resonance, together with the most important tools necessary to actually compute those quantities, are presented. The essence of classical stochastic resonance theory is presented, and important applications of stochastic resonance in nonlinear optics, solid state devices, and neurophysiology are described and put into context with stochastic resonance theory. More elaborate and recent developments of stochastic resonance theory are discussed, ranging from fundamental quantum properties, which are important at low temperatures, over spatiotemporal aspects in spatially distributed systems, to realizations in chaotic maps. In conclusion the authors summarize the achievements

  9. Stochastic models: theory and simulation.

    SciTech Connect

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
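
    As an example of the simple sample-generation algorithms the report refers to, the sketch below draws paths of a stationary Gaussian process by factorizing a squared-exponential covariance matrix; the correlation length and grid are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(8)
      t = np.linspace(0.0, 10.0, 200)
      ell, sigma = 1.0, 1.0     # correlation length and standard deviation

      # Covariance C_ij = sigma^2 exp(-(t_i - t_j)^2 / (2 ell^2))
      C = sigma**2 * np.exp(-0.5 * ((t[:, None] - t[None, :]) / ell) ** 2)
      Lc = np.linalg.cholesky(C + 1e-10 * np.eye(len(t)))  # jitter for stability

      samples = Lc @ rng.standard_normal((len(t), 5))      # five sample paths
      print(samples.shape)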

  10. The Effect of Stochastic Perturbation of Fuel Distribution on the Criticality of a One Speed Reactor and the Development of Multi-Material Multinomial Line Statistics

    NASA Technical Reports Server (NTRS)

    Jahshan, S. N.; Singleterry, R. C.

    2001-01-01

    The effect of random fuel redistribution on the eigenvalue of a one-speed reactor is investigated. An ensemble of such reactors that are identical to a homogeneous reference critical reactor except for the fissile isotope density distribution is constructed such that it meets a set of well-posed redistribution requirements. The average eigenvalue is evaluated when the total fissile loading per ensemble element, or realization, is conserved. The perturbation is proven to increase the reactor criticality on average when it is uniformly distributed. The various causes of the change in reactivity and their relative effects are identified and ranked. From this, a path towards identifying the causes and relative effects of reactivity fluctuations for the energy-dependent problem is pointed out. The perturbation method of using multinomial distributions to represent the perturbed reactor is developed. This method has some advantages that can be of use in other stochastic problems. Finally, some of the features of this perturbation problem are related to other techniques that have been used for addressing similar problems.

  11. QB1 - Stochastic Gene Regulation

    SciTech Connect

    Munsky, Brian

    2012-07-23

    The summaries of this presentation are: (1) Stochastic fluctuations or 'noise' are present in the cell - random motion and competition between reactants, low copy numbers and quantization of reactants, upstream processes; (2) Fluctuations may be very important - cell-to-cell variability, cell fate decisions (switches), signal amplification or damping, stochastic resonances; and (3) Some tools are available to model these - kinetic Monte Carlo simulations (SSA and variants), moment approximation methods, Finite State Projection. We will see how modeling these reactions can tell us more about the underlying processes of gene regulation.
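
    A minimal kinetic Monte Carlo (Gillespie SSA) sketch for a two-state gene with protein production and degradation is given below; the rate constants are illustrative, not taken from the presentation.

      import numpy as np

      rng = np.random.default_rng(6)
      k_on, k_off, k_prod, k_deg = 0.1, 0.2, 5.0, 0.1
      on, protein, t, t_end = 0, 0, 0.0, 500.0

      while t < t_end:
          rates = np.array([k_on * (1 - on),     # promoter turns on
                            k_off * on,          # promoter turns off
                            k_prod * on,         # produce one protein
                            k_deg * protein])    # one protein degrades
          total = rates.sum()
          t += rng.exponential(1.0 / total)      # time to next reaction
          event = rng.choice(4, p=rates / total) # which reaction fires
          if event == 0:
              on = 1
          elif event == 1:
              on = 0
          elif event == 2:
              protein += 1
          else:
              protein -= 1

      print("protein copy number at t_end:", protein)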

  12. Radiation Transport Computation in Stochastic Media: Method and Application

    NASA Astrophysics Data System (ADS)

    Liang, Chao

    Stochastic media, characterized by the stochastic distribution of inclusions in a background medium, are typical radiation transport media encountered in natural or engineering systems. In the radiation transport computation community, there is a standing demand for accurate and efficient methods that can account for the nature of the stochastic distribution. In this dissertation, we focus on methodology development for radiation transport computation applied to neutronic analyses of nuclear reactor designs characterized by the stochastic distribution of particle fuel. Reactor concepts employing a fuel design that consists of a random heterogeneous mixture of fissile material and non-fissile moderator are regularly proposed. Key physical quantities such as core criticality and power distribution, reactivity control design parameters, depletion and fuel burn-up need to be carefully evaluated. In order to meet these practical requirements, we first need to develop accurate and fast computational methods that can effectively account for the stochastic nature of the double-heterogeneity configuration. A Monte Carlo based method called Chord Length Sampling (CLS) is considered a promising method for analyzing such TRISO-fueled reactors. Although the CLS method was proposed more than two decades ago and much research has been conducted to enhance its applicability, further efforts are still needed to address some key research gaps. (1) There is a general lack of thorough investigation of the factors that give rise to the inaccuracy of the CLS method reported by many researchers. The accuracy of the CLS method depends on the optical and geometric properties of the system. In some specific scenarios, considerable inaccuracies have been reported. However, no research has provided a clear interpretation of the reasons responsible for the inaccuracy in the reported scenarios. Furthermore, no any
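
    The core idea of Chord Length Sampling can be conveyed with a toy 1D walk in which the distance to the next fuel inclusion is sampled on the fly instead of being looked up in an explicit geometry. The exponential chord-length distributions and the per-crossing absorption probability are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(10)
      mean_gap, mean_chord = 2.0, 0.1   # matrix gap and fuel chord (cm)
      p_absorb = 0.05                   # absorption chance per fuel crossing
      slab = 50.0                       # slab thickness (cm)

      absorbed, n_hist = 0, 20000
      for _ in range(n_hist):
          x = 0.0
          while x < slab:
              x += rng.exponential(mean_gap)     # fly to the next sphere
              if x >= slab:
                  break                          # leaked through the slab
              x += rng.exponential(mean_chord)   # traverse the sampled chord
              if rng.random() < p_absorb:
                  absorbed += 1
                  break
      print("absorption fraction:", absorbed / n_hist)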

  13. An advanced deterministic method for spent fuel criticality safety analysis

    SciTech Connect

    DeHart, M.D.

    1998-01-01

    Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings of the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite-differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid-structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
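
    To make the deterministic alternative concrete, below is a minimal 1D discrete ordinates (SN) solver with diamond differencing and source iteration for a homogeneous slab with isotropic scattering and vacuum boundaries. It is a textbook sketch, not the ESC/NEWT formalism, and all cross sections are illustrative.

      import numpy as np

      nx, L = 200, 10.0
      dx = L / nx
      sig_t, sig_s, q = 1.0, 0.5, 1.0   # total, scattering, fixed source
      mu, w = np.polynomial.legendre.leggauss(8)   # S8 angular quadrature

      phi = np.zeros(nx)
      for _ in range(200):                       # source iteration
          src = 0.5 * (sig_s * phi + q)          # isotropic emission density
          phi_new = np.zeros(nx)
          for m, wt in zip(mu, w):
              am = abs(m)
              psi_edge = 0.0                     # vacuum inflow
              cells = range(nx) if m > 0 else range(nx - 1, -1, -1)
              for i in cells:                    # diamond-difference sweep
                  psi_c = (src[i] * dx + 2 * am * psi_edge) / (2 * am + sig_t * dx)
                  psi_edge = 2 * psi_c - psi_edge
                  phi_new[i] += wt * psi_c
          if np.abs(phi_new - phi).max() < 1e-6:
              phi = phi_new
              break
          phi = phi_new
      print("midplane scalar flux:", phi[nx // 2])  # ~ q/(sig_t - sig_s) deep inside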

  14. Stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response

  15. Application of surface-harmonics code SUHAM-U and Monte-Carlo code UNK-MC for calculations of 2D light water benchmark-experiment VENUS-2 with UO{sub 2} and MOX fuel

    SciTech Connect

    Boyarinov, V. F.; Davidenko, V. D.; Nevinitsa, V. A.; Tsibulsky, V. F.

    2006-07-01

    Verification of the SUHAM-U code has been carried out by calculating the two-dimensional benchmark experiment on the critical light-water facility VENUS-2. Comparisons have been made with experimental data and with calculations by the Monte Carlo code UNK using the same nuclear data library B645 for the basic isotopes. Calculations of the two-dimensional facility were carried out using experimentally measured buckling values. The possibility of applying the SUHAM code to computations of PWR reactors with uranium and MOX fuel has been demonstrated. (authors)

  16. Numerical studies of the stochastic Korteweg-de Vries equation

    SciTech Connect

    Lin Guang; Grinberg, Leopold; Karniadakis, George Em. E-mail: gk@dam.brown.edu

    2006-04-10

    We present numerical solutions of the stochastic Korteweg-de Vries equation for three cases corresponding to additive time-dependent noise, multiplicative space-dependent noise and a combination of the two. We employ polynomial chaos for discretization in random space, and discontinuous Galerkin and finite difference for discretization in physical space. The accuracy of the stochastic solutions is investigated by comparing the first two moments against analytical and Monte Carlo simulation results. Of particular interest is the interplay of spatial discretization error with the stochastic approximation error, which is examined for different orders of spatial and stochastic approximation.

  17. Stochastic-field cavitation model

    NASA Astrophysics Data System (ADS)

    Dumond, J.; Magagnato, F.; Class, A.

    2013-07-01

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  18. Stochastic-field cavitation model

    SciTech Connect

    Dumond, J.; Magagnato, F.; Class, A.

    2013-07-15

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  19. Stochastic kinetic mean field model

    NASA Astrophysics Data System (ADS)

    Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.

    2016-07-01

    This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to those of lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided at http://skmf.eu). We will show that the result of one SKMF run may correspond to the average of several KMC runs. The number of KMC runs is inversely proportional to the squared amplitude of the noise in SKMF. This makes SKMF an ideal tool also for statistical purposes.
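
    As a rough illustration of the idea, the sketch below implements a schematic one-dimensional SKMF-like update: a deterministic mean-field exchange flux between neighbouring atomic planes plus dynamic Langevin noise, with occupation probabilities clipped to [0, 1]. The rate expression and all constants are simplifying assumptions; the authors' reference implementation is the open-source code at http://skmf.eu.

```python
import numpy as np

# Schematic 1D SKMF-like evolution (simplified assumption, not the skmf.eu
# code): deterministic mean-field exchange between neighbouring planes plus
# dynamic Langevin noise. Per the abstract, one SKMF run should approximate
# the average of a number of KMC runs proportional to 1 / noise_amp**2.

rng = np.random.default_rng(1)
n, dt, steps = 100, 0.01, 20_000
noise_amp = 0.1

c = np.where(np.arange(n) < n // 2, 1.0, 0.0)   # A/B diffusion couple

for _ in range(steps):
    cl, cr = np.roll(c, 1), np.roll(c, -1)
    # net exchange fluxes with the left and right neighbours
    flux_l = c * (1 - cl) - cl * (1 - c)
    flux_r = c * (1 - cr) - cr * (1 - c)
    noise = noise_amp * np.sqrt(dt) * rng.standard_normal(n)
    c += -dt * (flux_l + flux_r) + noise
    np.clip(c, 0.0, 1.0, out=c)   # occupation probabilities stay in [0, 1]

# a crude measure of interdiffusion across the couple
print("mixing measure sum c*(1-c):", np.sum(c * (1 - c)))
```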

  20. Shell model Monte Carlo methods

    SciTech Connect

    Koonin, S.E.; Dean, D.J.

    1996-10-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of {gamma}-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  1. Stochastic solution to quantum dynamics

    NASA Technical Reports Server (NTRS)

    John, Sarah; Wilson, John W.

    1994-01-01

    The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends an earlier one-dimensional work. Also, by developing a new algorithm, the application to bound state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.

  2. An advanced deterministic method for spent-fuel criticality safety analysis

    SciTech Connect

    DeHart, M.D.

    1998-09-01

    Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, nonorthogonal configurations of fissile materials, typical of real-world problems. In the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for nonorthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitation of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built on the ESC formalism, is being developed as part of the SCALE code system. This paper demonstrates the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.

  3. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect

    Infanger, G. (Dept. of Operations Research, Technische Univ. Vienna, Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.

  4. Binomial moment equations for stochastic reaction systems.

    PubMed

    Barzel, Baruch; Biham, Ofer

    2011-04-15

    A highly efficient formulation of moment equations for stochastic reaction networks is introduced. It is based on a set of binomial moments that capture the combinatorics of the reaction processes. The resulting set of equations can be easily truncated to include moments up to any desired order. The number of equations is dramatically reduced compared to the master equation. This formulation enables the simulation of complex reaction networks, involving a large number of reactive species much beyond the feasibility limit of any existing method. It provides an equation-based paradigm to the analysis of stochastic networks, complementing the commonly used Monte Carlo simulations. PMID:21568538

  5. Stochastic Cooling

    SciTech Connect

    Blaskiewicz, M.

    2011-01-01

    Stochastic Cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and ICE (Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed, and was applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.

  6. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  7. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  8. Interaction picture density matrix quantum Monte Carlo.

    PubMed

    Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  9. Monte Carlo Methods in the Physical Sciences

    SciTech Connect

    Kalos, M H

    2007-06-06

    I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.

  10. Efficiency of Health Care Production in Low-Resource Settings: A Monte-Carlo Simulation to Compare the Performance of Data Envelopment Analysis, Stochastic Distance Functions, and an Ensemble Model

    PubMed Central

    Giorgio, Laura Di; Flaxman, Abraham D.; Moses, Mark W.; Fullman, Nancy; Hanlon, Michael; Conner, Ruben O.; Wollum, Alexandra; Murray, Christopher J. L.

    2016-01-01

    Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings. PMID:26812685

  11. Monte Carlo Benchmark

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  12. The Analysis of the Patterns of Radiation-Induced DNA Damage Foci by a Stochastic Monte Carlo Model of DNA Double Strand Breaks Induction by Heavy Ions and Image Segmentation Software

    NASA Technical Reports Server (NTRS)

    Ponomarev, Artem; Cucinotta, F.

    2011-01-01

    Purpose: To create a generalized mechanistic model of DNA damage in human cells that will generate analytical and image data corresponding to experimentally observed DNA damage foci and will help to improve the experimental foci yields by simulating spatial foci patterns and resolving problems with quantitative image analysis. Material and Methods: The analysis of patterns of RIFs (radiation-induced foci) produced by low- and high-LET (linear energy transfer) radiation was conducted by using a Monte Carlo model that combines the heavy ion track structure with characteristics of the human genome on the level of chromosomes. The foci patterns were also simulated in the maximum projection plane for flat nuclei. Some data analysis was done with the help of image segmentation software that identifies individual classes of RIFs and colocalized RIFs, which is of importance to some experimental assays that assign DNA damage a dual phosphorescent signal. Results: The model predicts the spatial and genomic distributions of DNA DSBs (double strand breaks) and associated RIFs in a human cell nucleus for a particular dose of either low- or high-LET radiation. We used the model to do analyses for different irradiation scenarios. In the beam-parallel-to-the-disk-of-a-flattened-nucleus scenario we found that the foci appeared to be merged due to their high density, while, in the perpendicular-beam scenario, the foci appeared as one bright spot per hit. The statistics and spatial distribution of regions of densely arranged foci, termed DNA foci chains, were predicted numerically using this model. Another analysis was done to evaluate the number of ion hits per nucleus, which were visible from streaks of closely located foci. In another analysis, our image segmentation software determined foci yields directly from images with single-class or colocalized foci. Conclusions: We showed that DSB clustering needs to be taken into account to determine the true DNA damage foci yield, which helps to

  13. A heterogeneous stochastic FEM framework for elliptic PDEs

    SciTech Connect

    Hou, Thomas Y.; Liu, Pengfei

    2015-01-15

    We introduce a new concept of sparsity for the stochastic elliptic operator −div(a(x,ω)∇(⋅)), which reflects the compactness of its inverse operator in the stochastic direction and allows for spatially heterogeneous stochastic structure. This new concept of sparsity motivates a heterogeneous stochastic finite element method (HSFEM) framework for linear elliptic equations, which discretizes the equations using the heterogeneous coupling of spatial basis with local stochastic basis to exploit the local stochastic structure of the solution space. We also provide a sampling method to construct the local stochastic basis for this framework using the randomized range finding techniques. The resulting HSFEM involves two stages and suits the multi-query setting: in the offline stage, the local stochastic structure of the solution space is identified; in the online stage, the equation can be efficiently solved for multiple forcing functions. An online error estimation and correction procedure through Monte Carlo sampling is given. Numerical results for several problems with high dimensional stochastic input are presented to demonstrate the efficiency of the HSFEM in the online stage.

  14. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be handled similarly with appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
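
    The evaluation step of such a parametric approach can be sketched as follows: simulate many price paths, compute the execution cost of a strategy parametrized by a single "aggressiveness" parameter, and estimate the expected cost and CVaR from the samples. The linear price-impact model, the exponential trade schedule, and all constants are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# Hedged sketch (hypothetical model, not the authors'): sell a block of shares
# over n_periods with the exponential-decay schedule w_k ~ exp(-theta*k);
# price changes are Gaussian and price impact is linear in the trade size.

rng = np.random.default_rng(3)

def simulate_costs(theta, n_paths=50_000, n_periods=10, shares=1.0,
                   sigma=0.02, eta=0.05):
    k = np.arange(n_periods)
    w = np.exp(-theta * k)
    trades = shares * w / w.sum()              # shares sold in each period
    # cumulative price changes along each simulated path
    prices = np.cumsum(rng.normal(0.0, sigma, (n_paths, n_periods)), axis=1)
    impact = eta * trades                      # temporary linear price impact
    return -(prices - impact) @ trades         # execution shortfall per path

def cvar(costs, alpha=0.95):
    var = np.quantile(costs, alpha)            # Value-at-Risk at level alpha
    return costs[costs >= var].mean()          # mean loss beyond VaR

for theta in (0.0, 0.3, 1.0):
    c = simulate_costs(theta)
    print(f"theta={theta:3.1f}  E[cost]={c.mean():+.5f}  CVaR95={cvar(c):.5f}")
```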

  15. Stochastic cooling in RHIC

    SciTech Connect

    Brennan, J. M.; Blaskiewicz, M. M.; Severino, F.

    2009-05-04

    After the success of longitudinal stochastic cooling of the bunched heavy-ion beam in RHIC, transverse stochastic cooling in the vertical plane of the Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.

  16. Stochastic volatility models and Kelvin waves

    NASA Astrophysics Data System (ADS)

    Lipton, Alex; Sepp, Artur

    2008-08-01

    We use stochastic volatility models to describe the evolution of an asset price, its instantaneous volatility and its realized volatility. In particular, we concentrate on the Stein and Stein model (SSM, 1991) for the stochastic asset volatility and the Heston model (HM, 1993) for the stochastic asset variance. By construction, the volatility is not sign definite in SSM and is non-negative in HM. It is well known that both models produce closed-form expressions for the prices of vanilla options via the Lewis-Lipton formula. However, the numerical pricing of exotic options by means of the finite difference and Monte Carlo methods is much more complex for HM than for SSM. Until now, this complexity was considered to be an acceptable price to pay for ensuring that the asset volatility is non-negative. We argue that having negative stochastic volatility is a psychological rather than financial or mathematical problem, and advocate using SSM rather than HM in most applications. We extend SSM by adding volatility jumps and obtain a closed-form expression for the density of the asset price and its realized volatility. We also show that the current method of choice for solving pricing problems with stochastic volatility (via the affine ansatz for the Fourier-transformed density function) can be traced back to the Kelvin method designed in the 19th century for studying wave motion problems arising in fluid dynamics.
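
    The point about sign-indefinite volatility being harmless is easy to see numerically: in the Euler Monte Carlo sketch below (illustrative, uncalibrated parameters, not the paper's calculation), the SSM volatility follows a plain Ornstein-Uhlenbeck process that may cross zero, yet only its square enters the price dynamics.

```python
import numpy as np

# Euler Monte Carlo for a vanilla call under a Stein-Stein-type model with
# illustrative (uncalibrated) parameters: the volatility v follows a plain
# Ornstein-Uhlenbeck process and is NOT constrained to stay positive; only
# v**2 enters the price dynamics, so a sign change is harmless.

rng = np.random.default_rng(4)
S0, K, T, r = 100.0, 100.0, 1.0, 0.0
kappa, theta, xi, v0, rho = 4.0, 0.2, 0.3, 0.2, -0.5
n_paths, n_steps = 50_000, 252
dt = T / n_steps

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    S *= np.exp((r - 0.5 * v**2) * dt + v * np.sqrt(dt) * z1)  # log-Euler step
    v += kappa * (theta - v) * dt + xi * np.sqrt(dt) * z2      # plain OU step

payoff = np.maximum(S - K, 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"SSM call price ~ {price:.3f} +/- {stderr:.3f}")
```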

  17. Stochastic differential equations

    SciTech Connect

    Sobczyk, K.

    1990-01-01

    This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed; in particular, insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.

  18. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  19. Algebraic, geometric, and stochastic aspects of genetic operators

    NASA Technical Reports Server (NTRS)

    Foo, N. Y.; Bosworth, J. L.

    1972-01-01

    Genetic algorithms for function optimization employ genetic operators patterned after those observed in search strategies employed in natural adaptation. Two of these operators, crossover and inversion, are interpreted in terms of their algebraic and geometric properties. Stochastic models of the operators are developed which are employed in Monte Carlo simulations of their behavior.

  20. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  1. Monte Carlo Example Programs

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo Mathematical technique for calculating the ground state energy of the hydrogen atom.
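
    In the same spirit as VARHATOM, the sketch below performs a variational Monte Carlo estimate of the hydrogen ground-state energy with the trial wavefunction psi(r) = exp(-a*r) in atomic units; this is the generic textbook construction, offered as an assumption about what such example programs illustrate, not a translation of the FORTRAN code.

```python
import numpy as np

# Variational Monte Carlo for the hydrogen atom (atomic units) with trial
# wavefunction psi(r) = exp(-a*r). Local energy: E_L = -a**2/2 + (a - 1)/r,
# which is exactly -0.5 Hartree with zero variance at the optimal a = 1.

rng = np.random.default_rng(5)

def vmc_energy(a, n_samples=100_000, step=0.5, burn_in=5_000):
    r = np.ones(3)                       # Metropolis walker position in 3D
    e_sum, n_kept = 0.0, 0
    for i in range(n_samples):
        trial = r + step * rng.uniform(-1.0, 1.0, 3)
        # acceptance ratio |psi(trial)/psi(r)|**2 = exp(-2a(|trial| - |r|))
        if rng.random() < np.exp(-2.0 * a * (np.linalg.norm(trial)
                                             - np.linalg.norm(r))):
            r = trial
        if i >= burn_in:                 # discard equilibration steps
            e_sum += -0.5 * a**2 + (a - 1.0) / np.linalg.norm(r)
            n_kept += 1
    return e_sum / n_kept

for a in (0.8, 1.0, 1.2):
    print(f"a={a:3.1f}  <E> ~ {vmc_energy(a):+.4f} Hartree")  # minimum at a=1
```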

  2. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP [1], addresses these problems. When the Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretizations of the kernel in energy, space, and direction. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we use the random walk information not only to build the next-iteration fission source, but also to build a spatially averaged fission kernel. Just as in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
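
    The eigenpair-extraction step of the fission matrix method can be illustrated in a few lines: given a spatially discretized fission kernel F, power iteration yields k_eff and the fundamental fission source. The 3x3 matrix below is a made-up stand-in for the kernel that would be tallied during the random walk.

```python
import numpy as np

# Toy illustration of the fission matrix idea: F[i, j] is the expected number
# of fission neutrons born in region i per fission neutron born in region j.
# The numbers are invented; in practice F is tallied during the random walk.

F = np.array([[0.60, 0.20, 0.05],
              [0.20, 0.55, 0.20],
              [0.05, 0.20, 0.60]])

s = np.ones(3) / 3                 # initial guess for the fission source
for _ in range(200):               # power iteration on the tallied kernel
    s = F @ s
    k = s.sum()                    # eigenvalue estimate (source sums to 1)
    s /= k

print("k_eff (power iteration)  :", k)
print("k_eff (direct eigensolve) :", np.linalg.eigvals(F).real.max())
print("fundamental fission source:", s)
```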

  3. Digital simulation and modeling of nonlinear stochastic systems

    SciTech Connect

    Richardson, J M; Rowland, J R

    1981-04-01

    Digitally generated solutions of nonlinear stochastic systems are not unique but depend critically on the numerical integration algorithm used. Some theoretical and practical implications of this dependence are examined. The Ito-Stratonovich controversy concerning the solution of nonlinear stochastic systems is shown to be more than a theoretical debate on maintaining Markov properties as opposed to utilizing the computational rules of ordinary calculus. The theoretical arguments give rise to practical considerations in the formation and solution of discrete models from continuous stochastic systems. Well-known numerical integration algorithms are shown not only to provide different solutions for the same stochastic system but also to correspond to different stochastic integral definitions. These correspondences are proved by considering first and second moments of solutions that result from different integration algorithms and then comparing the moments to those arising from various stochastic integral definitions. This algorithm-dependence of solutions is in sharp contrast to the deterministic and linear stochastic cases in which unique solutions are determined by any convergent numerical algorithm. Consequences of the relationship between stochastic system solutions and simulation procedures are presented for a nonlinear filtering example. Monte Carlo simulations and statistical tests are applied to the example to illustrate the determining role which computational procedures play in generating solutions.
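
    The algorithm dependence described above is easy to reproduce numerically: for multiplicative noise dx = a*x dt + b*x dW, the Euler-Maruyama scheme converges to the Ito solution while the Heun predictor-corrector scheme converges to the Stratonovich one, and their ensemble means differ by the well-known drift correction. The sketch below (illustrative parameters) exhibits the gap directly.

```python
import numpy as np

# dx = a*x dt + b*x dW with illustrative parameters. Euler-Maruyama converges
# to the Ito solution (E[x(T)] = exp(a*T)); the Heun predictor-corrector
# scheme converges to the Stratonovich one (E[x(T)] = exp((a + b**2/2)*T)).

rng = np.random.default_rng(6)
a, b, x0, T = 0.0, 0.5, 1.0, 1.0
n_paths, n_steps = 100_000, 200
dt = T / n_steps

x_em = np.full(n_paths, x0)        # Euler-Maruyama -> Ito
x_heun = np.full(n_paths, x0)      # Heun           -> Stratonovich

for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    pred = x_heun + a * x_heun * dt + b * x_heun * dW         # predictor
    x_em += a * x_em * dt + b * x_em * dW
    x_heun += a * x_heun * dt + 0.5 * b * (x_heun + pred) * dW

print("E[x(T)], Euler-Maruyama:", x_em.mean())    # ~ exp(0)     = 1.000
print("E[x(T)], Heun          :", x_heun.mean())  # ~ exp(0.125) ~ 1.133
```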

  4. Fluctuations as stochastic deformation

    NASA Astrophysics Data System (ADS)

    Kazinski, P. O.

    2008-04-01

    A notion of stochastic deformation is introduced and the corresponding algebraic deformation procedure is developed. This procedure is analogous to the deformation of an algebra of observables like deformation quantization, but for an imaginary deformation parameter (the Planck constant). This method is demonstrated on diverse relativistic and nonrelativistic models with finite and infinite degrees of freedom. It is shown that under stochastic deformation the model of a nonrelativistic particle interacting with the electromagnetic field on a curved background passes into the stochastic model described by the Fokker-Planck equation with the diffusion tensor being the inverse metric tensor. The first stochastic correction to the Newton equations for this system is found. The Klein-Kramers equation is also derived as the stochastic deformation of a certain classical model. Relativistic generalizations of the Fokker-Planck and Klein-Kramers equations are obtained by applying the procedure of stochastic deformation to appropriate relativistic classical models. The analog of the Fokker-Planck equation associated with the stochastic Lorentz-Dirac equation is derived too. The stochastic deformation of the models of a free scalar field and an electromagnetic field is investigated. It turns out that in the latter case the obtained stochastic model describes a fluctuating electromagnetic field in a transparent medium.

  5. Fluctuations as stochastic deformation.

    PubMed

    Kazinski, P O

    2008-04-01

    A notion of stochastic deformation is introduced and the corresponding algebraic deformation procedure is developed. This procedure is analogous to the deformation of an algebra of observables like deformation quantization, but for an imaginary deformation parameter (the Planck constant). This method is demonstrated on diverse relativistic and nonrelativistic models with finite and infinite degrees of freedom. It is shown that under stochastic deformation the model of a nonrelativistic particle interacting with the electromagnetic field on a curved background passes into the stochastic model described by the Fokker-Planck equation with the diffusion tensor being the inverse metric tensor. The first stochastic correction to the Newton equations for this system is found. The Klein-Kramers equation is also derived as the stochastic deformation of a certain classical model. Relativistic generalizations of the Fokker-Planck and Klein-Kramers equations are obtained by applying the procedure of stochastic deformation to appropriate relativistic classical models. The analog of the Fokker-Planck equation associated with the stochastic Lorentz-Dirac equation is derived too. The stochastic deformation of the models of a free scalar field and an electromagnetic field is investigated. It turns out that in the latter case the obtained stochastic model describes a fluctuating electromagnetic field in a transparent medium.

  6. Monte Carlo Simulation of River Meander Modelling

    NASA Astrophysics Data System (ADS)

    Posner, A. J.; Duan, J. G.

    2010-12-01

    This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms. Then, the modeling results were analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated in order to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. [Figure: quasi-2D Ikeda (1989) flow solution with Monte Carlo simulation of the bank erosion coefficient.]

  7. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-01-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.

  8. Stochastic Simulation Tool for Aerospace Structural Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.; Moore, David F.

    2006-01-01

    Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly being able to identify design input variables whose variability causes the most influence in response output parameters.

  9. Stochastic robustness of linear control systems

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.; Ryan, Laura E.

    1990-01-01

    A simple numerical procedure for estimating the stochastic robustness of a linear, time-invariant system is described. Monte Carlo evaluation of the system's eigenvalues allows the probability of instability and the related stochastic root locus to be estimated. This definition of robustness is an alternative to existing deterministic definitions that address both structured and unstructured parameter variations directly. This analysis approach treats not only Gaussian parameter uncertainties but non-Gaussian cases, including uncertain-but-bounded variations. Trivial extensions of the procedure admit alternate discriminants to be considered. Thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions also can be estimated. Results are particularly amenable to graphical presentation.
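
    A minimal version of the procedure is sketched below: sample uncertain parameters of a second-order plant (one uncertain-but-bounded, one Gaussian), form the state matrix, and estimate the probability of instability from the fraction of samples with an eigenvalue in the right half-plane. The plant and all numbers are illustrative assumptions, not the paper's example.

```python
import numpy as np

# Sample uncertain parameters of a lightly damped second-order mode (assumed
# example), form the state matrix, and count samples whose eigenvalues reach
# the right half-plane.

rng = np.random.default_rng(7)
n_mc = 20_000

wn = rng.uniform(0.8, 1.4, n_mc)      # uncertain-but-bounded natural frequency
zeta = rng.normal(0.05, 0.05, n_mc)   # Gaussian damping-ratio uncertainty

unstable = 0
for w, z in zip(wn, zeta):
    A = np.array([[0.0, 1.0],
                  [-w**2, -2.0 * z * w]])
    if np.linalg.eigvals(A).real.max() > 0.0:
        unstable += 1

p = unstable / n_mc
print(f"estimated probability of instability: {p:.4f}")  # ~ P(zeta<0) ~ 0.16
print(f"binomial standard error             : {np.sqrt(p*(1-p)/n_mc):.4f}")
```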

  10. Interaction picture density matrix quantum Monte Carlo.

    PubMed

    Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible. PMID:26233116

  11. A Stochastic Employment Problem

    ERIC Educational Resources Information Center

    Wu, Teng

    2013-01-01

    The Stochastic Employment Problem (SEP) is a variation of the Stochastic Assignment Problem which analyzes the scenario that one assigns balls into boxes. Balls arrive sequentially with each one having a binary vector X = (X_1, X_2, ..., X_n) attached, with the interpretation being that if X_i = 1 the ball…

  12. A data-integrated method for analyzing stochastic biochemical networks

    NASA Astrophysics Data System (ADS)

    Chevalier, Michael W.; El-Samad, Hana

    2011-12-01

    Variability and fluctuations among genetically identical cells under uniform experimental conditions stem from the stochastic nature of biochemical reactions. Understanding network function for endogenous biological systems or designing robust synthetic genetic circuits requires accounting for and analyzing this variability. Stochasticity in biological networks is usually represented using a continuous-time discrete-state Markov formalism, where the chemical master equation (CME) and its kinetic Monte Carlo equivalent, the stochastic simulation algorithm (SSA), are used. These two representations are computationally intractable for many realistic biological problems. Fitting parameters in the context of these stochastic models is particularly challenging and has not been accomplished for any but very simple systems. In this work, we propose that moment equations derived from the CME, when treated appropriately in terms of higher order moment contributions, represent a computationally efficient framework for estimating the kinetic rate constants of stochastic network models and subsequent analysis of their dynamics. To do so, we present a practical data-derived moment closure method for these equations. In contrast to previous work, this method does not rely on any assumptions about the shape of the stochastic distributions or a functional relationship among their moments. We use this method to analyze a stochastic model of a biological oscillator and demonstrate its accuracy through excellent agreement with CME/SSA calculations. By coupling this moment-closure method with a parameter search procedure, we further demonstrate how a model's kinetic parameters can be iteratively determined in order to fit measured distribution data.
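
    For orientation, the SSA baseline that such moment-closure methods are compared against looks as follows for the simplest possible network, a birth-death process with production rate kb and degradation rate kd (illustrative values, not the paper's oscillator model); its stationary CME solution is Poisson with mean kb/kd, so the ensemble statistics can be checked directly.

```python
import numpy as np

# Gillespie SSA for a birth-death process: 0 -> X at rate kb, X -> 0 at rate
# kd * n. The stationary chemical master equation solution is Poisson with
# mean kb/kd, so the sample mean and variance should both approach 10.

rng = np.random.default_rng(8)
kb, kd, T = 10.0, 1.0, 20.0

def ssa_final_count():
    t, n = 0.0, 0
    while True:
        rates = np.array([kb, kd * n])       # reaction propensities
        total = rates.sum()
        t += rng.exponential(1.0 / total)    # exponential waiting time
        if t > T:
            return n
        if rng.random() < rates[0] / total:  # pick which reaction fires
            n += 1
        else:
            n -= 1

samples = np.array([ssa_final_count() for _ in range(2_000)])
print("SSA mean    :", samples.mean())   # ~ kb/kd = 10
print("SSA variance:", samples.var())    # ~ 10 (Poisson)
```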

  13. A data-integrated method for analyzing stochastic biochemical networks.

    PubMed

    Chevalier, Michael W; El-Samad, Hana

    2011-12-01

    Variability and fluctuations among genetically identical cells under uniform experimental conditions stem from the stochastic nature of biochemical reactions. Understanding network function for endogenous biological systems or designing robust synthetic genetic circuits requires accounting for and analyzing this variability. Stochasticity in biological networks is usually represented using a continuous-time discrete-state Markov formalism, where the chemical master equation (CME) and its kinetic Monte Carlo equivalent, the stochastic simulation algorithm (SSA), are used. These two representations are computationally intractable for many realistic biological problems. Fitting parameters in the context of these stochastic models is particularly challenging and has not been accomplished for any but very simple systems. In this work, we propose that moment equations derived from the CME, when treated appropriately in terms of higher order moment contributions, represent a computationally efficient framework for estimating the kinetic rate constants of stochastic network models and subsequent analysis of their dynamics. To do so, we present a practical data-derived moment closure method for these equations. In contrast to previous work, this method does not rely on any assumptions about the shape of the stochastic distributions or a functional relationship among their moments. We use this method to analyze a stochastic model of a biological oscillator and demonstrate its accuracy through excellent agreement with CME/SSA calculations. By coupling this moment-closure method with a parameter search procedure, we further demonstrate how a model's kinetic parameters can be iteratively determined in order to fit measured distribution data.

  14. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

  15. Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Bardenet, Rémi

    2013-07-01

    Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that make it possible to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling, and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
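
    Two of the reviewed algorithm families can be demonstrated on the same toy posterior expectation, here E[x^2] under the assumed density p(x) proportional to exp(-x^4) (whose normalization is not needed): self-normalized importance sampling with a Gaussian proposal, and a random-walk Metropolis chain.

```python
import numpy as np

# Toy target p(x) ~ exp(-x**4). The exact second moment is
# Gamma(3/4)/Gamma(1/4) ~ 0.338; both estimators should agree with it.

rng = np.random.default_rng(9)
log_target = lambda x: -x**4

# --- self-normalized importance sampling with a N(0, 1) proposal ------------
x = rng.standard_normal(100_000)
log_w = log_target(x) + 0.5 * x**2        # log(p/q) up to an additive constant
w = np.exp(log_w - log_w.max())           # stabilized; constant cancels below
print("IS estimate of E[x^2]  :", np.sum(w * x**2) / np.sum(w))

# --- random-walk Metropolis (an MCMC method) ---------------------------------
chain, cur = [], 0.0
for _ in range(100_000):
    prop = cur + 0.5 * rng.standard_normal()
    if np.log(rng.random()) < log_target(prop) - log_target(cur):
        cur = prop                        # accept; otherwise keep current state
    chain.append(cur)
print("MCMC estimate of E[x^2]:", np.mean(np.array(chain[10_000:]) ** 2))
```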

  16. Fire in a Changing Climate: Stochastic versus Threshold-constrained Ignitions in a Dynamic Global Vegetation Model

    NASA Astrophysics Data System (ADS)

    Sheehan, T.; Bachelet, D. M.; Ferschweiler, K.

    2015-12-01

    The MC2 dynamic global vegetation model fire module simulates fire occurrence, area burned, and fire impacts including mortality, biomass burned, and nitrogen volatilization. Fire occurrence is based on fuel load levels and vegetation-specific thresholds for three calculated fire weather indices: fine fuel moisture code (FFMC) for the moisture content of fine fuels; build-up index (BUI) for the total amount of fuel available for combustion; and energy release component (ERC) for the total energy available to fire. Ignitions are assumed (i.e., the probability of an ignition source is 1). The model is run with gridded inputs, and the fraction of each grid cell burned is limited by a vegetation-specific fire return period (FRP) and the number of years since the last fire occurred in the grid cell. One consequence of assumed ignitions and the FRP constraint is that similar fire behavior can take place over large areas with identical vegetation type. In regions where thresholds are often exceeded, fires occur frequently (annually in some instances) with a very low fraction of a cell burned. In areas where fire is infrequent, a single hot, dry climate event can result in intense fire over a large region. Both cases can potentially result in large areas with uniform vegetation type and age. To better reflect realistic fire occurrence, we have developed a stochastic fire occurrence model that: a) uses a map of relative ignition probability and a multiplier to alter overall ignition occurrence; b) adjusts the original fixed fire thresholds with ignition success probabilities based on fire weather indices; and c) calculates spread by using a probability based on slope and wind direction. A Monte Carlo method is used with all three algorithms to determine occurrence. The new stochastic ignition approach yields more variety in fire intensity, a smaller annual total of cells burned, and patchier vegetation.
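
    Steps (a) and (b) of such a stochastic ignition scheme can be sketched on a toy grid as below; the ignition-probability map, the ERC field, and the logistic ramp replacing a fixed threshold are all assumptions for illustration only, and the spread step (c) is omitted.

```python
import numpy as np

# Schematic version of steps (a)-(b) of a stochastic ignition scheme on a toy
# grid (all maps and constants are invented; spread step (c) is omitted).

rng = np.random.default_rng(10)
n = 50
ignition_prob = rng.uniform(0.0, 0.02, (n, n))  # relative ignition probability
erc = rng.uniform(0.0, 100.0, (n, n))           # energy release component

# (a) stochastic ignition occurrence, cell by cell
ignited = rng.random((n, n)) < ignition_prob

# (b) probabilistic ignition success replaces a fixed ERC threshold of 60
p_success = 1.0 / (1.0 + np.exp(-(erc - 60.0) / 8.0))
burning = ignited & (rng.random((n, n)) < p_success)

print("cells with an ignition source:", int(ignited.sum()))
print("cells that actually burn     :", int(burning.sum()))
```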

  17. Solution of the stochastic control problem in unbounded domains.

    NASA Technical Reports Server (NTRS)

    Robinson, P.; Moore, J.

    1973-01-01

    Bellman's dynamic programming equation for the optimal index and control law for stochastic control problems is a parabolic or elliptic partial differential equation frequently defined in an unbounded domain. Existing methods of solution require bounded domain approximations, the application of singular perturbation techniques or Monte Carlo simulation procedures. In this paper, using the fact that Poisson impulse noise tends to a Gaussian process under certain limiting conditions, a method which achieves an arbitrarily good approximate solution to the stochastic control problem is given. The method uses the two iterative techniques of successive approximation and quasi-linearization and is inherently more efficient than existing methods of solution.

  18. Monte Carlo Form-Finding Method for Tensegrity Structures

    NASA Astrophysics Data System (ADS)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.

  19. Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming

    SciTech Connect

    Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-05-23

    This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6 percent.

  20. Quantum Stochastic Processes

    SciTech Connect

    Spring, William Joseph

    2009-04-13

    We consider quantum analogues of n-parameter stochastic processes, associated integrals and martingale properties extending classical results obtained in [1, 2, 3], and quantum results in [4, 5, 6, 7, 8, 9, 10].

  1. Dynamics of Double Stochastic Operators

    NASA Astrophysics Data System (ADS)

    Saburov, Mansoor

    2016-03-01

    A double stochastic operator is a generalization of a double stochastic matrix. In this paper, we study the dynamics of double stochastic operators. We give a criterion for the regularity of a double stochastic operator in terms of the absence of periodic points. We provide examples showing that, in general, a trajectory of a double stochastic operator may converge to any interior point of the simplex.

  2. Stochastic self-assembly of incommensurate clusters

    NASA Astrophysics Data System (ADS)

    D'Orsogna, M. R.; Lakatos, G.; Chou, T.

    2012-02-01

    Nucleation and molecular aggregation are important processes in numerous physical and biological systems. In many applications, these processes often take place in confined spaces, involving a finite number of particles. Analogous to treatments of stochastic chemical reactions, we examine the classic problem of homogeneous nucleation and self-assembly by deriving and analyzing a fully discrete stochastic master equation. We enumerate the highest probability steady states, and derive exact analytical formulae for quenched and equilibrium mean cluster size distributions. Upon comparison with results obtained from the associated mass-action Becker-Döring equations, we find striking differences between the two corresponding equilibrium mean cluster concentrations. These differences depend primarily on the divisibility of the total available mass by the maximum allowed cluster size, and the remainder. When such mass "incommensurability" arises, a single remainder particle can "emulsify" the system by significantly broadening the equilibrium mean cluster size distribution. This discreteness-induced broadening effect is periodic in the total mass of the system but arises even when the system size is asymptotically large, provided the ratio of the total mass to the maximum cluster size is finite. Ironically, classic mass-action equations are fairly accurate in the coarsening regime, before equilibrium is reached, despite the presence of large stochastic fluctuations found via kinetic Monte-Carlo simulations. Our findings define a new scaling regime in which results from classic mass-action theories are qualitatively inaccurate, even in the limit of large total system size.

  3. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  4. An adaptive high-dimensional stochastic model representation technique for the solution of stochastic partial differential equations

    SciTech Connect

    Ma, Xiang; Zabaras, Nicholas

    2010-05-20

    A computational methodology is developed to address the solution of high-dimensional stochastic problems. It utilizes high-dimensional model representation (HDMR) technique in the stochastic space to represent the model output as a finite hierarchical correlated function expansion in terms of the stochastic inputs starting from lower-order to higher-order component functions. HDMR is efficient at capturing the high-dimensional input-output relationship such that the behavior for many physical systems can be modeled to good accuracy only by the first few lower-order terms. An adaptive version of HDMR is also developed to automatically detect the important dimensions and construct higher-order terms using only the important dimensions. The newly developed adaptive sparse grid collocation (ASGC) method is incorporated into HDMR to solve the resulting sub-problems. By integrating HDMR and ASGC, it is computationally possible to construct a low-dimensional stochastic reduced-order model of the high-dimensional stochastic problem and easily perform various statistic analysis on the output. Several numerical examples involving elementary mathematical functions and fluid mechanics problems are considered to illustrate the proposed method. The cases examined show that the method provides accurate results for stochastic dimensionality as high as 500 even with large-input variability. The efficiency of the proposed method is examined by comparing with Monte Carlo (MC) simulation.

  5. Symbolic implicit Monte Carlo

    SciTech Connect

    Brooks, E. D., III

    1989-08-01

    We introduce a new implicit Monte Carlo technique for solving time dependent radiation transport problems involving spontaneous emission. In the usual implicit Monte Carlo procedure an effective scattering term is dictated by the requirement of self-consistency between the transport and implicitly differenced atomic population equations. The effective scattering term, a source of inefficiency for optically thick problems, becomes an impasse for problems with gain, where its sign is negative. In our new technique the effective scattering term does not occur and the execution time for the Monte Carlo portion of the algorithm is independent of opacity. We compare the performance and accuracy of the new symbolic implicit Monte Carlo technique to the usual effective scattering technique for the time-dependent description of a two-level system in slab geometry. We also examine the possibility of effectively exploiting multiprocessors on the algorithm, obtaining supercomputer performance using shared-memory multiprocessors based on cheap commodity microprocessor technology. {copyright} 1989 Academic Press, Inc.

  6. Stochastic calculus for uncoupled continuous-time random walks.

    PubMed

    Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L

    2009-06-01

    The continuous-time random walk (CTRW) is a pure-jump stochastic process with several applications not only in physics but also in insurance, finance, and economics. A definition is given for a class of stochastic integrals driven by a CTRW, which includes the Itō and Stratonovich cases. An uncoupled CTRW with zero-mean jumps is a martingale. It is proved that, as a consequence of the martingale transform theorem, if the CTRW is a martingale, the Itō integral is a martingale too. It is shown how the definition of the stochastic integrals can be used to easily compute them by Monte Carlo simulation. The relations between a CTRW, its quadratic variation, its Stratonovich integral, and its Itō integral are highlighted by numerical calculations when the jumps in space of the CTRW have a symmetric Lévy alpha-stable distribution and its waiting times have a one-parameter Mittag-Leffler distribution. Remarkably, these distributions have fat tails and an unbounded quadratic variation. In the diffusive limit of vanishing scale parameters, the probability density of this kind of CTRW satisfies the space-time fractional diffusion equation (FDE) or, more generally, the fractional Fokker-Planck equation, which generalizes the standard diffusion equation, solved by the probability density of the Wiener process, and thus provides a phenomenological model of anomalous diffusion. We also provide an analytic expression for the quadratic variation of the stochastic process described by the FDE and check it by Monte Carlo.
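
    The martingale property of the Itō integral stated above can be checked numerically. For simplicity the sketch below uses Gaussian jumps and exponential waiting times; the paper's heavy-tailed Lévy alpha-stable / Mittag-Leffler case would need specialized samplers.

```python
import numpy as np

# Ito (pre-point) integral of x dX along an uncoupled CTRW with zero-mean
# Gaussian jumps and exponential waiting times (simplifying assumptions).
# The ensemble mean of the integral should vanish, as for a martingale.

rng = np.random.default_rng(11)
n_paths, n_jumps, T = 10_000, 200, 50.0   # 200 draws amply cover [0, 50]

vals = np.empty(n_paths)
for p in range(n_paths):
    times = np.cumsum(rng.exponential(1.0, n_jumps))   # jump epochs
    jumps = rng.standard_normal(n_jumps)               # zero-mean jumps
    keep = times <= T                                  # jumps inside [0, T]
    x = np.concatenate(([0.0], np.cumsum(jumps[keep])))
    vals[p] = np.sum(x[:-1] * jumps[keep])             # pre-point (Ito) sum

print("mean Ito integral:", vals.mean())               # ~ 0 (martingale)
print("standard error   :", vals.std(ddof=1) / np.sqrt(n_paths))
```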

  7. On stochastic diffusion equations and stochastic Burgers' equations

    NASA Astrophysics Data System (ADS)

    Truman, A.; Zhao, H. Z.

    1996-01-01

    In this paper we construct a strong solution for the stochastic Hamilton-Jacobi equation by using stochastic classical mechanics before the caustics. We thereby obtain the viscosity solution for a certain class of inviscid stochastic Burgers' equations. This viscosity solution is not continuous beyond the caustics of the corresponding Hamilton-Jacobi equation. The Hopf-Cole transformation is used to identify the stochastic heat equation and the viscous stochastic Burgers' equation. The exact solutions for the above two equations are given in terms of the stochastic Hamilton-Jacobi function under a no-caustic condition. We construct the heat kernel for the stochastic heat equation for zero potentials in hyperbolic space and for harmonic oscillator potentials in Euclidean space, thereby obtaining the stochastic Mehler formula.

  8. Stochastically driven genetic circuits

    NASA Astrophysics Data System (ADS)

    Tsimring, L. S.; Volfson, D.; Hasty, J.

    2006-06-01

    Transcriptional regulation in small genetic circuits exhibits large stochastic fluctuations. Recent experiments have shown that a significant fraction of these fluctuations is caused by extrinsic factors. In this paper we review several theoretical and computational approaches to the modeling of small genetic circuits driven by extrinsic stochastic processes. We propose a simplified approach to this problem, which can be used in the case when extrinsic fluctuations dominate the stochastic dynamics of the circuit (as appears to be the case in eukaryotes). This approach is applied to a model of a single nonregulated gene that is driven by a certain gating process affecting the rate of transcription, and to a simplified version of the galactose utilization circuit in yeast.
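
    As a rough illustration of a gated, nonregulated gene (not the authors' code), the following Gillespie-type simulation uses hypothetical rate constants for an extrinsic on/off gate that modulates transcription of a single mRNA species.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rates: an extrinsic on/off gate modulates transcription
k_on, k_off = 0.1, 0.05   # gate switching rates
k_tx, k_deg = 5.0, 0.2    # transcription (gate on) and mRNA degradation

def gillespie(t_end):
    """Exact stochastic simulation of the gated gene; returns (time, mRNA)."""
    t, gate, m, traj = 0.0, 0, 0, [(0.0, 0)]
    while t < t_end:
        rates = np.array([
            k_on if gate == 0 else k_off,  # gate flip
            k_tx * gate,                   # transcription, only when on
            k_deg * m,                     # degradation
        ])
        total = rates.sum()
        t += rng.exponential(1.0 / total)   # time to next reaction
        r = rng.choice(3, p=rates / total)  # which reaction fires
        if r == 0:
            gate = 1 - gate
        elif r == 1:
            m += 1
        else:
            m -= 1
        traj.append((t, m))
    return traj

traj = gillespie(500.0)
print("final mRNA copy number:", traj[-1][1])
```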

  9. Stochastic Thermal Convection

    NASA Astrophysics Data System (ADS)

    Venturi, Daniele

    2005-11-01

    Stochastic bifurcations and stability of natural convective flows in 2D and 3D enclosures are investigated by the multi-element generalized polynomial chaos (ME-gPC) method (Xiu and Karniadakis, SISC, vol. 24, 2002). The Boussinesq approximation for the variation of physical properties is assumed. The stability analysis is first carried out in a deterministic sense, to determine steady state solutions and primary and secondary bifurcations. Stochastic simulations are then conducted around discontinuities and transitional regimes. It is found that these highly non-linear phenomena can be efficiently captured by the ME-gPC method. Finally, the main findings of the stochastic analysis and their implications for heat transfer will be discussed.

  10. Stochastic cooling at Fermilab

    SciTech Connect

    Marriner, J.

    1986-08-01

    The topics discussed are the stochastic cooling systems in use at Fermilab and some of the techniques that have been employed to meet the particular requirements of the antiproton source. Stochastic cooling at Fermilab became of paramount importance about 5 years ago, when the antiproton source group at Fermilab abandoned the electron cooling ring in favor of a high-flux antiproton source that relied solely on stochastic cooling to achieve the phase space densities necessary for colliding proton and antiproton beams. The Fermilab systems have constituted a substantial advance in cooling techniques, including large pickup arrays operating at microwave frequencies, extensive use of cryogenic techniques to reduce thermal noise, superconducting notch filters, and the development of tools for controlling and accurately phasing the system.

  11. MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D

    2013-01-01

    Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single-level approximation. The numerical results will highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.
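
    A minimal sketch of the multilevel idea applied to a toy SDE (an Ornstein-Uhlenbeck process standing in for a discretized SPDE): levels are coupled by sharing Brownian increments, and the telescoping sum corrects a cheap coarse estimate with level differences. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def coupled_pair(level, n):
    """One MLMC level: fine and coarse Euler-Maruyama paths of the toy SDE
    dX = -X dt + dW on [0, 1], driven by the SAME Brownian increments."""
    nf = 2 ** (level + 2)                  # fine steps; coarse uses nf / 2
    dtf = 1.0 / nf
    xf = np.ones(n)
    xc = np.ones(n)
    for _ in range(nf // 2):
        dw1 = np.sqrt(dtf) * rng.normal(size=n)
        dw2 = np.sqrt(dtf) * rng.normal(size=n)
        xf += -xf * dtf + dw1              # two fine steps ...
        xf += -xf * dtf + dw2
        xc += -xc * (2 * dtf) + dw1 + dw2  # ... one coarse step, same noise
    return xf, xc

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
L, N = 4, 20000
est = np.mean(coupled_pair(0, N)[1])       # coarsest level on its own
for level in range(L):
    xf, xc = coupled_pair(level, N)
    est += np.mean(xf - xc)                # cheap level-difference correction
print("MLMC estimate of E[X_1]:", est, " exact:", np.exp(-1.0))
```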

  12. Convergence rates of finite difference stochastic approximation algorithms part II: implementation via common random numbers

    NASA Astrophysics Data System (ADS)

    Dai, Liyi

    2016-05-01

    Stochastic optimization is a fundamental problem that finds applications in many areas, including the biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information of the sample objective function that is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm, when the gradient is approximated using finite differences generated through common random numbers. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, it is shown that the rate can be increased to n^{-2/5} in general, and to n^{-1/2}, the best possible rate of stochastic approximation, in Monte Carlo optimization for a broad class of problems, in the iteration number n.
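
    A hedged one-dimensional sketch of a Kiefer-Wolfowitz iteration with common random numbers: the same noise draw u enters both finite-difference evaluations, so the noise cancels in the difference, which is the variance-reduction mechanism behind the accelerated rates. The objective, gain sequences, and exact cancellation (a property of this additive-noise toy) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_f(x, u):
    """One sample of F(x, xi) with E[F] = (x - 2)^2; the noise is a function
    of the uniform draw u, so it can be shared between evaluations (CRN)."""
    return (x - 2.0) ** 2 + (u - 0.5)

x = 0.0
for n in range(1, 20001):
    a = 1.0 / n           # gain sequence a_n
    c = 1.0 / n ** 0.25   # finite-difference width c_n
    u = rng.random()      # common random number: same draw on both sides
    g = (noisy_f(x + c, u) - noisy_f(x - c, u)) / (2.0 * c)
    x -= a * g            # Kiefer-Wolfowitz update
print("estimate:", x, "  true minimizer: 2.0")
```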

  13. Stochastic optical active rheology

    NASA Astrophysics Data System (ADS)

    Lee, Hyungsuk; Shin, Yongdae; Kim, Sun Taek; Reinherz, Ellis L.; Lang, Matthew J.

    2012-07-01

    We demonstrate a stochastic method for performing active rheology using optical tweezers. By monitoring the displacement of an embedded particle in response to stochastic optical forces, a rapid estimate of the frequency-dependent shear moduli of a sample is achieved in the range of 10^{-1} to 10^{3} Hz. We utilize the method to probe the linear viscoelastic properties of hydrogels at varied cross-linker concentrations. Combined with fluorescence imaging, our method demonstrates non-linear changes of bond strength between T cell receptors and an antigenic peptide due to force-induced cell activation.

  14. Stochastic Gauss equations

    NASA Astrophysics Data System (ADS)

    Pierret, Frédéric

    2016-02-01

    We derived the equations of Celestial Mechanics governing the variation of the orbital elements under a stochastic perturbation, thereby generalizing the classical Gauss equations. Explicit formulas are given for the semimajor axis, the eccentricity, the inclination, the longitude of the ascending node, the pericenter angle, and the mean anomaly, which are expressed in terms of the angular momentum vector H per unit of mass and the energy E per unit of mass. Together, these formulas are called the stochastic Gauss equations, and they are illustrated numerically on an example from satellite dynamics.

  15. Structures and stochastic methods

    SciTech Connect

    Cakmak, A.S.

    1987-01-01

    Studies and research on structures and stochastic methods in the soil dynamics and earthquake engineering field are covered in this book. The first section is on structures and includes studies on bridges, loaded tanks, sliding structures, and wood-framed houses. The second section covers dams, retaining walls, and slopes. The third section, on underground structures, covers pipelines, water supply, fire loss, buried lifelines, and underground transmission lines. The final section is on stochastic methods and includes applications in earthquake response spectra, lifeline aqueduct systems, and various other areas.

  16. STOCHASTIC COOLING FOR BUNCHED BEAMS.

    SciTech Connect

    BLASKIEWICZ, M.

    2005-05-16

    Problems associated with bunched beam stochastic cooling are reviewed. A longitudinal stochastic cooling system for RHIC is under construction and has been partially commissioned. The state of the system and future plans are discussed.

  17. Stochastic entrainment of a stochastic oscillator.

    PubMed

    Wang, Guanyu; Peskin, Charles S

    2015-01-01

    In this work, we consider a stochastic oscillator described by a discrete-state continuous-time Markov chain, in which the states are arranged in a circle, and there is a constant probability per unit time of jumping from one state to the next in a specified direction around the circle. At each of a sequence of equally spaced times, the oscillator has a specified probability of being reset to a particular state. The focus of this work is the entrainment of the oscillator by this periodic but stochastic stimulus. We consider a distinguished limit, in which (i) the number of states of the oscillator approaches infinity, as does the probability per unit time of jumping from one state to the next, so that the natural mean period of the oscillator remains constant, (ii) the resetting probability approaches zero, and (iii) the period of the resetting signal approaches a multiple, by a ratio of small integers, of the natural mean period of the oscillator. In this distinguished limit, we use analytic and numerical methods to study the extent to which entrainment occurs.
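
    As an illustrative sketch (all parameters hypothetical, not from the paper), the simulation below implements a discrete-state oscillator on a circle with a Poisson-distributed number of jumps per stimulus period and a probabilistic reset, and summarizes entrainment by the circular mean of the observed phases.

```python
import numpy as np

rng = np.random.default_rng(5)

N = 100          # number of states on the circle
lam = N / 21.0   # jump rate, giving a natural mean period of 21 time units
p_reset = 0.05   # probability of reset at each stimulus
T_stim = 21.0    # stimulus period (here 1:1 with the natural mean period)

def phases_at_stimuli(n_stimuli):
    """Evolve the oscillator between stimuli (jump count ~ Poisson) and
    reset it to state 0 with probability p_reset at each stimulus."""
    state, phases = 0, []
    for _ in range(n_stimuli):
        state = (state + rng.poisson(lam * T_stim)) % N
        if rng.random() < p_reset:
            state = 0
        phases.append(state / N)   # phase in [0, 1)
    return np.array(phases)

phi = phases_at_stimuli(50000)
# Circular mean resultant length: 0 = no entrainment, 1 = perfect locking
print("entrainment measure:", np.abs(np.mean(np.exp(2j * np.pi * phi))))
```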

  18. Vectorized Monte Carlo

    SciTech Connect

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.

  19. Stochastic finite-difference time-domain

    NASA Astrophysics Data System (ADS)

    Smith, Steven Michael

    2011-12-01

    This dissertation presents the derivation of an approximate method to determine the mean and the variance of electromagnetic fields in the body using the Finite-Difference Time-Domain (FDTD) method. Unlike Monte Carlo analysis, which requires repeated FDTD simulations, this method directly computes the variance of the fields at every point in space at every sample of time in the simulation. This Stochastic FDTD simulation (S-FDTD) has at its root a new wave called the Variance wave, which is computed in the time domain along with the mean properties of the model space in the FDTD simulation. The Variance wave depends on the electromagnetic fields, the reflections and transmissions through the different dielectrics, and the variances of the electrical properties of the surrounding materials. Like the electromagnetic fields, the Variance wave begins at zero (there is no variance before the source is turned on) and is computed in the time domain until all fields reach steady state. This process is performed in a fraction of the time of a Monte Carlo simulation and yields the first two statistical parameters (mean and variance). The mean of the field is computed using the traditional FDTD equations. Variance is computed by approximating the correlation coefficients between the constitutive properties and the use of the S-FDTD equations. The impetus for this work was the simulation time required to perform 3D Specific Absorption Rate (SAR) FDTD analysis of power absorption in a human head model due to the proximity of a cell phone in use. In many instances, Monte Carlo analysis is not performed due to the lengthy simulation times required. With the development of S-FDTD, these statistical analyses could be performed, providing valuable statistical information in a small fraction of the time it would take to perform a Monte Carlo analysis.

  20. Stochastic Models of Human Growth.

    ERIC Educational Resources Information Center

    Goodrich, Robert L.

    Stochastic difference equations of the Box-Jenkins form provide an adequate family of models on which to base the stochastic theory of human growth processes, but conventional time series identification methods do not apply to available data sets. A method to identify structure and parameters of stochastic difference equation models of human…

  1. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution time of 500 realizations is reduced to 3% of that of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.
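
    A hedged stand-in for the batch-processing pattern: the paper uses the Java Parallel Processing Framework around MODFLOW runs, but the same embarrassingly parallel structure can be sketched with Python's multiprocessing, with a placeholder function in place of a real groundwater model run.

```python
import numpy as np
from multiprocessing import Pool

def run_realization(seed):
    """Placeholder for one stochastic model run (the paper runs a MODFLOW
    model per realization): build a random field, return a scalar summary."""
    rng = np.random.default_rng(seed)
    log_k = rng.normal(0.0, 1.0, size=(50, 50))  # random log-conductivity
    return float(np.exp(log_k).mean())

if __name__ == "__main__":
    seeds = range(500)                    # 500 independent realizations
    with Pool(processes=10) as pool:      # batch them over 10 workers
        results = pool.map(run_realization, seeds)
    print("ensemble mean:", np.mean(results))
```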

  2. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.

    PubMed

    Neftci, Emre O; Pedroni, Bruno U; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert

    2016-01-01

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware. PMID:27445650
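
    A minimal sketch of the stochastic-synapse idea (not the authors' implementation): a DropConnect-style random mask over the weight matrix makes each forward pass a Monte Carlo sample whose mean matches the deterministic layer. Layer sizes and the transmission probability are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def stochastic_synapse_forward(x, W, p=0.5):
    """One pass through an unreliable synaptic layer: every connection
    transmits independently with probability p (a DropConnect-like mask),
    so repeated passes are Monte Carlo samples of the layer output."""
    mask = rng.random(W.shape) < p
    return (W * mask) @ x / p      # rescale so the mean matches W @ x

W = rng.normal(0.0, 1.0, size=(4, 8))
x = rng.random(8)
samples = np.stack([stochastic_synapse_forward(x, W) for _ in range(2000)])
print("sampled mean:", samples.mean(axis=0))
print("exact mean  :", W @ x)
```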

  3. Stochastic analysis of transport in tubes with rough walls

    SciTech Connect

Tartakovsky, Daniel M. (E-mail: dmt@lanl.gov); Xiu, Dongbin (E-mail: dxiu@math.purdue.edu)

    2006-09-01

    Flow and transport in tubes with rough surfaces play an important role in a variety of applications. Often the topology of such surfaces cannot be accurately described in all of its relevant details due to either insufficient data or measurement errors or both. In such cases, this topological uncertainty can be efficiently handled by treating rough boundaries as random fields, so that an underlying physical phenomenon is described by deterministic or stochastic differential equations in random domains. To deal with this class of problems, we use a computational framework, which is based on stochastic mappings to transform the original deterministic/stochastic problem in a random domain into a stochastic problem in a deterministic domain. The latter problem has been studied more extensively and existing analytical/numerical techniques can be readily applied. In this paper, we employ both a generalized polynomial chaos and Monte Carlo simulations to solve the transformed stochastic problem. We use our approach to describe transport of a passive scalar in Stokes' flow and to quantify the corresponding predictive uncertainty.

  4. Synchronizing stochastic circadian oscillators in single cells of Neurospora crassa

    PubMed Central

    Deng, Zhaojie; Arsenault, Sam; Caranica, Cristian; Griffith, James; Zhu, Taotao; Al-Omari, Ahmad; Schüttler, Heinz-Bernd; Arnold, Jonathan; Mao, Leidong

    2016-01-01

    The synchronization of stochastic coupled oscillators is a central problem in physics and an emerging problem in biology, particularly in the context of circadian rhythms. Most measurements on the biological clock are made at the macroscopic level of millions of cells. Here measurements are made on the oscillators in single cells of the model fungal system, Neurospora crassa, with droplet microfluidics and the use of a fluorescent recorder hooked up to a promoter on clock-controlled gene-2 (ccg-2). The oscillators of individual cells are stochastic, with a period near 21 hours (h), and using a stochastic clock network ensemble fitted by Markov Chain Monte Carlo implemented on general-purpose graphical processing units (GPGPUs), we estimated that >94% of the variation in ccg-2 expression was stochastic (as opposed to experimental error). To overcome this stochasticity at the macroscopic level, cells must synchronize their oscillators. Using a classic measure of similarity of cell trajectories within droplets, the intraclass correlation (ICC), the synchronization surface is measured on >25,000 cells as a function of the number of neighboring cells within a droplet and of time. The synchronization surface provides evidence that cells communicate, and synchronization varies with genotype. PMID:27786253

  6. On Stochastic Approximation.

    ERIC Educational Resources Information Center

    Wolff, Hans

    This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…

  7. Focus on stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Van den Broeck, Christian; Sasa, Shin-ichi; Seifert, Udo

    2016-02-01

    We introduce the thirty papers collected in this ‘focus on’ issue. The contributions explore conceptual issues within and around stochastic thermodynamics, use this framework for the theoretical modeling and experimental investigation of specific systems, and provide further perspectives on and for this active field.

  8. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speeds of the codes relative to one another as a function of C, and relative to the VAX, are discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures.

  9. INSTRUCTIONAL CONFERENCE ON THE THEORY OF STOCHASTIC PROCESSES: Operator stochastic differential equations and stochastic semigroups

    NASA Astrophysics Data System (ADS)

    Skorokhod, A. V.

    1982-12-01

    Contents: Introduction; §1. The finite-dimensional case; §2. Stochastic semigroups in the L2-strong theory; §3. Homogeneous strongly continuous semigroups with the group of the first moments; §4. Stochastic equations of diffusion type with constant coefficients; §5. Continuous homogeneous stochastic semigroups in the presence of two moments; References.

  10. Adaptive stochastic cellular automata: Applications

    NASA Astrophysics Data System (ADS)

    Qian, S.; Lee, Y. C.; Jones, R. D.; Barnes, C. W.; Flake, G. W.; O'Rourke, M. K.; Lee, K.; Chen, H. H.; Sun, G. Z.; Zhang, Y. Q.; Chen, D.; Giles, C. L.

    1990-09-01

    The stochastic learning cellular automata model has been applied to the problem of controlling unstable systems. Two example unstable systems are controlled by an adaptive stochastic cellular automata algorithm with an adaptive critic. The reinforcement learning algorithm and the architecture of the stochastic CA controller are presented. Learning to balance a single pole is discussed in detail. Balancing an inverted double pendulum highlights the power of the stochastic CA approach. The stochastic CA model is compared to conventional adaptive control and artificial neural network approaches.

  11. Stochastic computing with biomolecular automata

    NASA Astrophysics Data System (ADS)

    Adar, Rivka; Benenson, Yaakov; Linshiz, Gregory; Rosner, Amit; Tishby, Naftali; Shapiro, Ehud

    2004-07-01

    Stochastic computing has a broad range of applications, yet electronic computers realize its basic step, stochastic choice between alternative computation paths, in a cumbersome way. Biomolecular computers use a different computational paradigm and hence afford novel designs. We constructed a stochastic molecular automaton in which stochastic choice is realized by means of competition between alternative biochemical pathways, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. Programmable and autonomous stochastic molecular automata have been shown to perform direct analysis of disease-related molecular indicators in vitro and may have the potential to provide in situ medical diagnosis and cure.

  12. Stochastic Inversion of 2D Magnetotelluric Data

    SciTech Connect

    Chen, Jinsong

    2010-07-01

    The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we treat the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of the 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation; Software requirements: C and Fortran; Operating system/version: Linux/Unix or Windows
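
    As a schematic illustration of the Bayesian machinery only (a toy linear forward model stands in for the adaptive finite-element MT simulator, and all parameters are hypothetical), a random-walk Metropolis sampler recovers both parameter estimates and per-parameter uncertainty:

```python
import numpy as np

rng = np.random.default_rng(21)

def forward(m):
    """Toy linear forward model mapping (interface depth, log-resistivity)
    to two synthetic data values; a stand-in for the MT FE simulator."""
    depth, log_rho = m
    return np.array([0.8 * depth + 0.3 * log_rho,
                     1.2 * log_rho - 0.1 * depth])

m_true = np.array([2.0, 1.5])
sigma = 0.05
d_obs = forward(m_true) + rng.normal(0.0, sigma, 2)   # noisy observations

def log_post(m):
    r = (d_obs - forward(m)) / sigma
    return -0.5 * np.sum(r ** 2)   # Gaussian likelihood, flat prior

m = np.array([1.0, 1.0])
chain = []
for _ in range(20000):
    m_prop = m + rng.normal(0.0, 0.05, 2)             # random-walk proposal
    if np.log(rng.random()) < log_post(m_prop) - log_post(m):
        m = m_prop                                    # Metropolis accept
    chain.append(m.copy())
chain = np.array(chain)[5000:]                        # drop burn-in
print("posterior mean:", chain.mean(axis=0))
print("posterior std :", chain.std(axis=0))           # uncertainty estimate
```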

  14. Edgeworth expansions of stochastic trading time

    NASA Astrophysics Data System (ADS)

    Decamps, Marc; De Schepper, Ann

    2010-08-01

    Under most local and stochastic volatility models the underlying forward is assumed to be a positive function of a time-changed Brownian motion. This relates the implied volatility smile nicely to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in the path integral framework to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.

  15. A non-stochastic iterative computational method to model light propagation in turbid media

    NASA Astrophysics Data System (ADS)

    McIntyre, Thomas J.; Zemp, Roger J.

    2015-03-01

    Monte Carlo models are widely used to model light transport in turbid media, however their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.

  16. Monte Carlo neutrino oscillations

    SciTech Connect

    Kneller, James P.; McLaughlin, Gail C.

    2006-03-01

    We demonstrate that the effects of matter upon neutrino propagation may be recast as the scattering of the initial neutrino wave function. Exchanging the differential Schrodinger equation for an integral equation for the scattering matrix S permits a Monte Carlo method for the computation of S that removes many of the numerical difficulties associated with direct integration techniques.

  17. Baseball Monte Carlo Style.

    ERIC Educational Resources Information Center

    Houser, Larry L.

    1981-01-01

    Monte Carlo methods are used to simulate activities in baseball such as a team's "hot streak" and a hitter's "batting slump." Student participation in such simulations is viewed as a useful method of giving pupils a better understanding of the probability concepts involved. (MP)
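
    A hedged example in the spirit of the record: estimating by simulation the probability that a .300 hitter goes hitless over five 4-at-bat games, with independent at-bats as the (simplifying) model assumption.

```python
import numpy as np

rng = np.random.default_rng(8)

# How often does a .300 hitter go hitless for five straight 4-at-bat games?
p_hit, n_at_bats = 0.300, 20                 # 5 games x 4 at-bats
trials = rng.random((100_000, n_at_bats)) < p_hit
slump_freq = np.mean(~trials.any(axis=1))    # no hits in all 20 at-bats
print("simulated:", slump_freq, " exact:", (1 - p_hit) ** n_at_bats)
```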

  18. Phylogenetic Stochastic Mapping Without Matrix Exponentiation

    PubMed Central

    Irvahn, Jan; Minin, Vladimir N.

    2014-01-01

    Phylogenetic stochastic mapping is a method for reconstructing the history of trait changes on a phylogenetic tree relating the species/organisms carrying the trait. State-of-the-art methods assume that the trait evolves according to a continuous-time Markov chain (CTMC) and work well for small state spaces. The computations slow down considerably for larger state spaces (e.g., the space of codons), because current methodology relies on exponentiating CTMC infinitesimal rate matrices, an operation whose computational complexity grows as the cube of the size of the CTMC state space. In this work, we introduce a new approach, based on a CTMC technique called uniformization, which does not use matrix exponentiation for phylogenetic stochastic mapping. Our method is based on a new Markov chain Monte Carlo (MCMC) algorithm that targets the distribution of trait histories conditional on the trait data observed at the tips of the tree. The computational complexity of our MCMC method grows as the size of the CTMC state space squared. Moreover, in contrast to competing matrix exponentiation methods, if the rate matrix is sparse, we can leverage this sparsity and increase the computational efficiency of our algorithm further. Using simulated data, we illustrate advantages of our MCMC algorithm and investigate how large the state space needs to be for our method to outperform matrix exponentiation approaches. We show that even on the moderately large state space of codons our MCMC method can be significantly faster than currently used matrix exponentiation methods. PMID:24918812
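
    A minimal sketch of uniformization for a two-state CTMC (rates hypothetical, and unconditional path sampling rather than the paper's tip-conditioned MCMC): candidate jump times come from a Poisson process at the uniformization rate, and states evolve by the auxiliary discrete-time chain P = I + Q/Λ, with no matrix exponentiation.

```python
import numpy as np

rng = np.random.default_rng(13)

# Hypothetical two-state trait with CTMC rate matrix Q
Q = np.array([[-0.3, 0.3],
              [0.5, -0.5]])
Lam = np.max(-np.diag(Q))       # uniformization rate
P = np.eye(2) + Q / Lam         # auxiliary discrete-time transition matrix

def sample_path(state, T):
    """Sample a CTMC trajectory on [0, T] by uniformization: candidate jump
    times come from a Poisson(Lam * T) process, and the state evolves by the
    DTMC P; self-transitions are 'virtual' jumps that change nothing."""
    n = rng.poisson(Lam * T)
    times = np.sort(rng.uniform(0.0, T, n))
    path = [(0.0, state)]
    for t in times:
        state = rng.choice(2, p=P[state])
        path.append((float(t), state))
    return path

print(sample_path(0, 10.0))
```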

  19. Patchwork sampling of stochastic differential equations.

    PubMed

    Kürsten, Rüdiger; Behn, Ulrich

    2016-03-01

    We propose a method to sample stationary properties of solutions of stochastic differential equations, which is accurate and efficient if there are rarely visited regions or rare transitions between distinct regions of the state space. The method is based on a complete, nonoverlapping partition of the state space into patches on which the stochastic process is ergodic. On each of these patches we run simulations of the process strictly truncated to the corresponding patch, which allows effective simulations also in rarely visited regions. The correct weight for each patch is obtained by counting the attempted transitions between all different patches. The results are patchworked to cover the whole state space. We extend the concept of truncated Markov chains which is originally formulated for processes which obey detailed balance to processes not fulfilling detailed balance. The method is illustrated by three examples, describing the one-dimensional diffusion of an overdamped particle in a double-well potential, a system of many globally coupled overdamped particles in double-well potentials subject to additive Gaussian white noise, and the overdamped motion of a particle on the circle in a periodic potential subject to a deterministic drift and additive noise. In an appendix we explain how other well-known Markov chain Monte Carlo algorithms can be related to truncated Markov chains. PMID:27078484

  20. Correlation functions in conformal invariant stochastic processes

    NASA Astrophysics Data System (ADS)

    Alcaraz, Francisco C.; Rittenberg, Vladimir

    2015-11-01

    We consider the problem of correlation functions in the stationary states of one-dimensional stochastic models having conformal invariance. If one considers the space dependence of the correlators, the novel aspect is that although one considers systems with periodic boundary conditions, the observables are described by boundary operators. From our experience with equilibrium problems one would have expected bulk operators. Boundary operators have correlators with critical exponents that are half of those of bulk operators. If one studies the space-time dependence of the two-point function, one has to consider one boundary and one bulk operator. The Raise and Peel model has conformal invariance, as can be shown in the spin-1/2 basis of the Hamiltonian which gives the time evolution of the system. This is an XXZ quantum chain with twisted boundary conditions and local interactions. This Hamiltonian is integrable and the spectrum is known in the finite-size scaling limit. In the stochastic basis in which the process is defined, the Hamiltonian is no longer local. The mapping into an SOS model helps to define new local operators. As a byproduct, some new properties of the SOS model are conjectured. The predictions of conformal invariance are discussed in the new framework and compared with Monte Carlo simulations.

  1. Stochastic Event-Driven Molecular Dynamics

    SciTech Connect

    Donev, Aleksandar Garcia, Alejandro L.; Alder, Berni J.

    2008-02-01

    A novel Stochastic Event-Driven Molecular Dynamics (SEDMD) algorithm is developed for the simulation of polymer chains suspended in a solvent. SEDMD combines event-driven molecular dynamics (EDMD) with the Direct Simulation Monte Carlo (DSMC) method. The polymers are represented as chains of hard-spheres tethered by square wells and interact with the solvent particles with hard-core potentials. The algorithm uses EDMD for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in EDMD, rather, the momentum and energy exchange in the solvent is determined stochastically using DSMC. The coupling between the solvent and the solute is consistently represented at the particle level retaining hydrodynamic interactions and thermodynamic fluctuations. However, unlike full MD simulations of both the solvent and the solute, in SEDMD the spatial structure of the solvent is ignored. The SEDMD algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard-wall subjected to uniform shear. SEDMD closely reproduces results obtained using traditional EDMD simulations with two orders of magnitude greater efficiency. Results question the existence of periodic (cycling) motion of the polymer chain.

  2. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows for larger step sizes in the simulation of complex systems.
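
    For orientation only, a compact sketch of plain hybrid Monte Carlo (not the generalized, shadow, or multi-time-stepping variants discussed in the record) for a standard Gaussian target: momentum refresh, leapfrog integration, then a Metropolis test on the total energy. Step size and trajectory length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(17)

def grad_U(q):
    return q   # potential U(q) = q^2 / 2, i.e. a standard Gaussian target

def hmc_step(q, dt=0.2, n_leap=10):
    """One HMC step: refresh momentum, integrate with leapfrog, then accept
    or reject with a Metropolis test on the total energy H = U + p^2/2."""
    p = rng.normal()
    q0, p0 = q, p
    p -= 0.5 * dt * grad_U(q)          # leapfrog: half kick ...
    for _ in range(n_leap - 1):
        q += dt * p                    # ... full drifts and kicks ...
        p -= dt * grad_U(q)
    q += dt * p
    p -= 0.5 * dt * grad_U(q)          # ... final half kick
    dH = (0.5 * p**2 + 0.5 * q**2) - (0.5 * p0**2 + 0.5 * q0**2)
    return q if np.log(rng.random()) < -dH else q0   # Metropolis test

q, samples = 0.0, []
for _ in range(20000):
    q = hmc_step(q)
    samples.append(q)
print("mean, variance:", np.mean(samples), np.var(samples))  # ~0 and ~1
```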

  3. Beamlets from stochastic acceleration.

    PubMed

    Perri, Silvia; Carbone, Vincenzo

    2008-09-01

    We investigate the dynamics of a realization of the stochastic Fermi acceleration mechanism. The model consists of test particles moving between two oscillating magnetic clouds and differs from the usual Fermi-Ulam model in two ways. (i) Particles can penetrate inside clouds before being reflected. (ii) Particles can radiate a fraction of their energy during the process. Since the Fermi mechanism is at work, particles are stochastically accelerated, even in the presence of the radiated energy. Furthermore, due to a kind of resonance between particles and oscillating clouds, the probability density function of particles is strongly modified, thus generating beams of accelerated particles rather than a translation of the whole distribution function to higher energy. This simple mechanism could account for the presence of beamlets in some space plasma physics situations.

  4. Stochastic ice stream dynamics

    NASA Astrophysics Data System (ADS)

    Mantelli, Elisa; Bertagni, Matteo Bernard; Ridolfi, Luca

    2016-08-01

    Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution.

  5. STOCHASTIC COOLING FOR RHIC.

    SciTech Connect

BLASKIEWICZ, M.; BRENNAN, J.M.; CAMERON, P.; WEI, J.

    2003-05-12

    Emittance growth due to Intra-Beam Scattering significantly reduces the heavy ion luminosity lifetime in RHIC. Stochastic cooling of the stored beam could improve things considerably by counteracting IBS and preventing particles from escaping the rf bucket [1]. High frequency bunched-beam stochastic cooling is especially challenging but observations of Schottky signals in the 4-8 GHz band indicate that conditions are favorable in RHIC [2]. We report here on measurements of the longitudinal beam transfer function carried out with a pickup kicker pair on loan from FNAL TEVATRON. Results imply that for ions a coasting beam description is applicable and we outline some general features of a viable momentum cooling system for RHIC.

  7. Stochastic multiscale modeling of polycrystalline materials

    NASA Astrophysics Data System (ADS)

    Wen, Bin

    Mechanical properties of engineering materials are sensitive to the underlying random microstructure. Quantification of mechanical property variability induced by microstructure variation is essential for the prediction of extreme properties and microstructure-sensitive design of materials. Recent advances in high throughput characterization of polycrystalline microstructures have resulted in huge data sets of microstructural descriptors and image snapshots. To utilize these large scale experimental data for computing the resulting variability of macroscopic properties, appropriate mathematical representation of microstructures is needed. By exploring the space containing all admissible microstructures that are statistically similar to the available data, one can estimate the distribution/envelope of possible properties by employing efficient stochastic simulation methodologies along with robust physics-based deterministic simulators. The focus of this thesis is on the construction of low-dimensional representations of random microstructures and the development of efficient physics-based simulators for polycrystalline materials. By adopting appropriate stochastic methods, such as Monte Carlo and Adaptive Sparse Grid Collocation methods, the variability of microstructure-sensitive properties of polycrystalline materials is investigated. The primary outcomes of this thesis include: (1) Development of data-driven reduced-order representations of microstructure variations to construct the admissible space of random polycrystalline microstructures. (2) Development of accurate and efficient physics-based simulators for the estimation of material properties based on mesoscale microstructures. (3) Investigating property variability of polycrystalline materials using efficient stochastic simulation methods in combination with the above two developments. The uncertainty quantification framework developed in this work integrates information science and materials science, and

  8. Entropy of stochastic flows

    SciTech Connect

    Dorogovtsev, Andrei A

    2010-06-29

    For sets in a Hilbert space the concept of quadratic entropy is introduced. It is shown that this entropy is finite for the range of a stochastic flow of Brownian particles on R. This implies, in particular, the fact that the total time of the free travel in the Arratia flow of all particles that started from a bounded interval is finite. Bibliography: 10 titles.

  9. Stochastic lag time in nucleated linear self-assembly

    NASA Astrophysics Data System (ADS)

    Tiwari, Nitin S.; van der Schoot, Paul

    2016-06-01

    Protein aggregation is of great importance in biology, e.g., in amyloid fibrillation. The aggregation processes that occur at the cellular scale must be highly stochastic in nature because of the statistical number fluctuations that arise on account of the small system size at the cellular scale. We study the nucleated reversible self-assembly of monomeric building blocks into polymer-like aggregates using the method of kinetic Monte Carlo. Kinetic Monte Carlo, being inherently stochastic, allows us to study the impact of fluctuations on the polymerization reactions. One of the most important characteristic features in this kind of problem is the existence of a lag phase before self-assembly takes off, which is what we focus attention on. We study the associated lag time as a function of system size and kinetic pathway. We find that the leading order stochastic contribution to the lag time before polymerization commences is inversely proportional to the system volume for large-enough system size for all nine reaction pathways tested. Finite-size corrections to this do depend on the kinetic pathway.

  10. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach, and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization compared to the classical approach and the simulated pit is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.

  11. Characterizing model uncertainties in the life cycle of lignocellulose-based ethanol fuels.

    PubMed

    Spatari, Sabrina; MacLean, Heather L

    2010-11-15

    Renewable and low carbon fuel standards being developed at federal and state levels require an estimation of the life cycle carbon intensity (LCCI) of candidate fuels that can substitute for gasoline, such as second generation bioethanol. Estimating the LCCI of such fuels with a high degree of confidence requires the use of probabilistic methods to account for known sources of uncertainty. We construct life cycle models for the bioconversion of agricultural residue (corn stover) and energy crops (switchgrass) and explicitly examine uncertainty using Monte Carlo simulation. Using statistical methods to identify significant model variables from public data sets and Aspen Plus chemical process models, we estimate stochastic life cycle greenhouse gas (GHG) emissions for the two feedstocks combined with two promising fuel conversion technologies. The approach can be generalized to other biofuel systems. Our results show potentially high and uncertain GHG emissions for switchgrass-ethanol due to uncertain CO₂ flux from land use change and N₂O flux from N fertilizer. However, corn stover-ethanol, with its low-in-magnitude, tight-in-spread LCCI distribution, shows considerable promise for reducing life cycle GHG emissions relative to gasoline and corn-ethanol. Coproducts are important for reducing the LCCI of all ethanol fuels we examine.
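
    A hedged sketch of the stochastic LCCI estimation pattern: all distributions and numbers below are hypothetical placeholders, not the paper's data. The point is how Monte Carlo sampling turns uncertain inputs (N2O flux from fertilizer, land-use-change CO2, co-product credits) into a distribution of carbon intensity, from which intervals and comparison probabilities follow.

```python
import numpy as np

rng = np.random.default_rng(30)
n = 100_000

# All distributions are hypothetical placeholders (g CO2e per MJ of fuel)
base = rng.normal(25.0, 3.0, n)      # farming, transport, conversion
n2o = rng.lognormal(1.5, 0.6, n)     # uncertain N2O flux from N fertilizer
luc = rng.normal(10.0, 8.0, n)       # uncertain land-use-change CO2 flux
credit = rng.normal(-8.0, 2.0, n)    # co-product credit (negative term)

lcci = base + n2o + luc + credit     # one LCCI sample per draw
lo, hi = np.percentile(lcci, [5, 95])
print("mean LCCI: %.1f g CO2e/MJ" % lcci.mean())
print("90%% interval: [%.1f, %.1f]" % (lo, hi))
# Assumed gasoline baseline of ~94 g CO2e/MJ, for comparison only
print("P(LCCI < gasoline): %.3f" % np.mean(lcci < 94.0))
```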

  12. An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis

    SciTech Connect

    William R. Martin; John C. Lee

    2009-12-30

    Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.

  13. Markov Chain Monte-Carlo Models of Starburst Clusters

    NASA Astrophysics Data System (ADS)

    Melnick, Jorge

    2015-01-01

    There are a number of stochastic effects that must be considered when comparing models to observations of starburst clusters: the IMF is never fully populated; the stars can never be strictly coeval; stars rotate and their photometric properties depend on orientation; a significant fraction of massive stars are in interacting binaries; and the extinction varies from star to star. The probability distributions of each of these effects are not a priori known, but must be extracted from the observations. Markov Chain Monte-Carlo methods appear to provide the best statistical approach. Here I present an example of stochastic age effects upon the upper mass limit of the IMF of the Arches cluster as derived from near-IR photometry.

  14. Hybrid stochastic simulations of intracellular reaction-diffusion systems

    PubMed Central

    Kalantzis, Georgios

    2009-01-01

    With the observation that stochasticity is important in biological systems, chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous-time models are computationally efficient but fail to capture any variability in the molecular species. In this study a novel hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed a dynamic partitioning strategy using fractional propensities. In that way, processes with high frequency are simulated mostly with deterministic rate-based equations, and those with low frequency mostly with the exact stochastic algorithm of Gillespie. In this way we preserve the stochastic behavior of cellular pathways while being able to apply the method to large populations of molecules. In this article we describe this hybrid algorithmic approach, and we demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors. PMID:19414282

  15. A cavitation model based on Eulerian stochastic fields

    NASA Astrophysics Data System (ADS)

    Magagnato, F.; Dumond, J.

    2013-12-01

    Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport has required Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method has been proposed, which solves pdf transport with Eulerian fields and eliminates the necessity to mix Eulerian and Lagrangian techniques or to prescribe pdf assumptions. In the present work, the stochastic-field method is applied for the first time to multi-phase flow, and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  16. Multi-scenario modelling of uncertainty in stochastic chemical systems

    SciTech Connect

    Evans, R. David; Ricardez-Sandoval, Luis A.

    2014-09-15

    Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small-scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems, as they are stochastic in nature and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state comprised of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two-gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution and the system under investigation. Highlights: • A method to model uncertainty in stochastic systems was developed. • The method is based on the Chemical Master Equation. • Uncertainty in an isomerization reaction and a gene regulation network was modelled. • Effects were significant and dependent on the uncertain input and reaction system. • The model was computationally more efficient than kinetic Monte Carlo.

  17. A rigorous framework for multiscale simulation of stochastic cellular networks

    PubMed Central

    Chevalier, Michael W.; El-Samad, Hana

    2009-01-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions, where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-cell variability even in clonal populations. Stochastic biochemical networks are modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). The CME is solvable only for the simplest cases, and one has to resort to kinetic Monte Carlo techniques to simulate the stochastic trajectories of the biochemical network under study. A commonly used such algorithm is the stochastic simulation algorithm (SSA). Because it tracks every biochemical reaction that occurs in a given system, the SSA presents computational difficulties, especially when there is a vast disparity in the timescales of the reactions or in the number of molecules involved in these reactions. This is common in cellular networks, and many approximation algorithms have evolved to alleviate the computational burdens of the SSA. Here, we present a rigorously derived modified CME framework based on the partition of a biochemically reacting system into restricted and unrestricted reactions. Although this modified CME decomposition is as analytically difficult as the original CME, it can be naturally used to generate a hierarchy of approximations at different levels of accuracy. Most importantly, some previously derived algorithms are demonstrated to be limiting cases of our formulation. We apply our methods to biologically relevant test systems to demonstrate their accuracy and efficiency. PMID:19673546
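
    For reference, Gillespie's direct-method SSA, the exact algorithm from which the hierarchy of approximations described here departs, can be sketched for a birth-death process; the rate constants below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Gillespie's direct method for a birth-death process:
    # 0 -> X at rate kb, X -> 0 at rate kd * x.
    kb, kd = 10.0, 0.1

    def ssa(x0=0, t_end=100.0):
        t, x, traj = 0.0, x0, [(0.0, x0)]
        while t < t_end:
            a = np.array([kb, kd * x])       # propensities of each channel
            a0 = a.sum()
            t += rng.exponential(1.0 / a0)   # time to the next reaction
            if rng.random() < a[0] / a0:     # pick which channel fires
                x += 1
            else:
                x -= 1
            traj.append((t, x))
        return traj

    samples = [ssa()[-1][1] for _ in range(200)]
    print(np.mean(samples))                  # should approach kb / kd = 100
    ```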

  18. Stochastic Optimal Scheduling of Residential Appliances with Renewable Energy Sources

    SciTech Connect

    Wu, Hongyu; Pratt, Annabelle; Chakraborty, Sudipta

    2015-07-03

    This paper proposes a stochastic, multi-objective optimization model within a Model Predictive Control (MPC) framework to determine the optimal operational schedules of residential appliances operating in the presence of a renewable energy source (RES). The objective function minimizes the weighted sum of discomfort, energy cost, total and peak electricity consumption, and carbon footprint. A heuristic method is developed for combining the different objective components. The proposed stochastic model utilizes Monte Carlo simulation (MCS) to represent uncertainties in electricity price, outdoor temperature, RES generation, water usage, and non-controllable loads. The model is solved using a mixed integer linear programming (MILP) solver, and numerical results confirm its validity. Case studies show the benefit of using the proposed optimization model.
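
    The Monte Carlo treatment of the uncertain inputs can be illustrated by scoring a candidate schedule against sampled price scenarios; the MILP search itself is not shown, and the price statistics, appliance load, and objective weights below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    H = 24                                    # hourly horizon
    schedule = rng.random(H) < 0.3            # candidate on/off appliance schedule
    load_kw = 1.5                             # appliance draw when on (hypothetical)

    # Monte Carlo scenarios for the uncertain electricity price ($/kWh).
    n_scen = 500
    price = rng.normal(0.15, 0.03, size=(n_scen, H)).clip(min=0.0)

    energy = schedule * load_kw               # kWh consumed in each hour
    cost = price @ energy                     # cost under each price scenario
    peak = energy.max()

    # Weighted-sum objective over expected cost and peak draw (weights assumed).
    w_cost, w_peak = 1.0, 0.5
    objective = w_cost * cost.mean() + w_peak * peak
    print(f"expected cost ${cost.mean():.2f}, objective {objective:.3f}")
    ```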

  19. Stochastic Flow Cascades

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Shlesinger, Michael F.

    2012-01-01

    We introduce and explore a Stochastic Flow Cascade (SFC) model: a general statistical model for the unidirectional flow through a tandem array of heterogeneous filters. Examples include the flow of: (i) liquid through heterogeneous porous layers; (ii) shocks through tandem shot noise systems; (iii) signals through tandem communication filters. The SFC model combines the Langevin equation, convolution filters and moving averages, and Poissonian randomizations. A comprehensive analysis of the SFC model is carried out, yielding closed-form results. Lévy laws are shown to universally emerge from the SFC model, and characterize both heavy-tailed retention times (Noah effect) and long-ranged correlations (Joseph effect).

  20. Stochastic thermodynamics of resetting

    NASA Astrophysics Data System (ADS)

    Fuchs, Jaco; Goldt, Sebastian; Seifert, Udo

    2016-03-01

    Stochastic dynamics with random resetting leads to a non-equilibrium steady state. Here, we consider the thermodynamics of resetting by deriving the first and second law for resetting processes far from equilibrium. We identify the contributions to the entropy production of the system which arise due to resetting and show that they correspond to the rate with which information is either erased or created. Using Landauer's principle, we derive a bound on the amount of work that is required to maintain a resetting process. We discuss different regimes of resetting, including a Maxwell demon scenario where heat is extracted from a bath at constant temperature.
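
    The bound referred to follows Landauer's principle; in its generic form (the paper's resetting-specific statement is not reproduced here) it reads:

    ```latex
    W_{\mathrm{erase}} \;\ge\; k_B T \ln 2 \quad \text{per bit of information erased.}
    ```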

  1. Stochastic ontogenetic growth model

    NASA Astrophysics Data System (ADS)

    West, B. J.; West, D.

    2012-02-01

    An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second laws of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass (TBM) over the steady-state probability density for the TBM. This is the first derivation of the interspecies metabolic allometric relation from a dynamical model; the asymptotic steady-state distribution of the TBM is fit to data and shown to be an inverse power law.
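
    For context, the deterministic OGM being generalized is commonly written (in the form of West, Brown, and Enquist) as below, with m the body mass and M its asymptotic value; the SOGM adds a stochastic term to this dynamics, which we do not attempt to reproduce here:

    ```latex
    \frac{dm}{dt} \;=\; a\, m^{3/4} \left[ 1 - \left( \frac{m}{M} \right)^{1/4} \right].
    ```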

  2. Angular biasing in implicit Monte-Carlo

    SciTech Connect

    Zimmerman, G.B.

    1994-10-20

    Calculations of indirect-drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two-dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low-weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can be achieved by directing even smaller-weight photons at the polar regions of the capsule, where small mass zones are most sensitive to statistical noise.
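
    The weight bookkeeping behind this kind of angular biasing can be sketched as follows: directions toward the capsule are oversampled, and each photon carries the ratio of analog to biased direction densities as its statistical weight. The cone half-angle and biasing probability below are illustrative assumptions, not parameters from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Bias emission toward a capsule subtending a cone of half-angle theta_c
    # about the +z axis: more, lower-weight photons head for the capsule.
    theta_c = 0.2                             # cone half-angle (radians), assumed
    q = 0.8                                   # probability of in-cone emission
    mu_c = np.cos(theta_c)
    omega_cone = 2 * np.pi * (1 - mu_c)       # cone solid angle
    omega_full = 4 * np.pi

    def emit():
        """Sample a direction cosine mu and its statistical weight."""
        if rng.random() < q:                  # biased: uniform within the cone
            mu = 1 - rng.random() * (1 - mu_c)
        else:                                 # otherwise uniform over the sphere
            mu = 1 - 2 * rng.random()
        in_cone = mu >= mu_c
        p_analog = 1.0 / omega_full
        p_biased = (1 - q) / omega_full + (q / omega_cone if in_cone else 0.0)
        return mu, p_analog / p_biased        # weight preserves expectations

    mus, ws = map(np.array, zip(*(emit() for _ in range(100000))))
    # Weighted in-cone fraction must reproduce the analog value (1 - mu_c) / 2.
    print((ws * (mus >= mu_c)).mean(), (1 - mu_c) / 2)
    ```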

  3. Krylov-Projected Quantum Monte Carlo Method.

    PubMed

    Blunt, N S; Alavi, Ali; Booth, George H

    2015-07-31

    We present an approach to the calculation of arbitrary spectral, thermal, and excited state properties within the full configuration interaction quantum Monte Carlo framework. This is achieved via an unbiased projection of the Hamiltonian eigenvalue problem into a space of stochastically sampled Krylov vectors, thus enabling the calculation of real-frequency spectral and thermal properties and avoiding explicit analytic continuation. We use this approach to calculate temperature-dependent properties and one- and two-body spectral functions for various Hubbard models, as well as isolated excited states in ab initio systems. PMID:26274406
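
    A deterministic toy version of the Krylov projection step is sketched below: the eigenproblem is projected onto a set of non-orthogonal Krylov vectors, giving a small generalized eigenproblem. In the actual method the Krylov vectors are sampled stochastically within FCIQMC; the random symmetric matrix here merely stands in for a Hamiltonian.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(5)

    n, m = 200, 6
    A = rng.standard_normal((n, n))
    H = (A + A.T) / 2                         # stand-in symmetric "Hamiltonian"

    v = rng.standard_normal(n)
    V = [v / np.linalg.norm(v)]
    for _ in range(m - 1):
        w = H @ V[-1]                         # Krylov vectors v, Hv, H^2 v, ...
        V.append(w / np.linalg.norm(w))       # normalized but not orthogonal
    V = np.array(V).T                         # columns span the Krylov space

    h = V.T @ H @ V                           # projected Hamiltonian
    s = V.T @ V                               # overlap matrix of Krylov vectors
    evals = eigh(h, s, eigvals_only=True)     # solve h c = E s c
    print(evals[0], np.linalg.eigvalsh(H)[0]) # Krylov estimate vs exact ground state
    ```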

  4. Stochastic analysis of complex reaction networks using binomial moment equations

    NASA Astrophysics Data System (ADS)

    Barzel, Baruch; Biham, Ofer

    2012-09-01

    The stochastic analysis of complex reaction networks is a difficult problem because the number of microscopic states in such systems increases exponentially with the number of reactive species. Direct integration of the master equation is thus infeasible and is most often replaced by Monte Carlo simulations. While Monte Carlo simulations are a highly effective tool, equation-based formulations are more amenable to analytical treatment and may provide deeper insight into the dynamics of the network. Here, we present a highly efficient equation-based method for the analysis of stochastic reaction networks. The method is based on the recently introduced binomial moment equations [Barzel and Biham, Phys. Rev. Lett. 106, 150602 (2011)]. The binomial moments are linear combinations of the ordinary moments of the probability distribution function of the population sizes of the interacting species. They capture the essential combinatorics of the reaction processes reflecting their stoichiometric structure. This leads to a simple and transparent form of the equations, and allows a highly efficient and surprisingly simple truncation scheme. Unlike ordinary moment equations, in which the inclusion of high order moments is prohibitively complicated, the binomial moment equations can be easily constructed up to any desired order. The result is a set of equations that enables the stochastic analysis of complex reaction networks under a broad range of conditions. The number of equations is dramatically reduced from the exponential proliferation of the master equation to a polynomial (and often quadratic) dependence on the number of reactive species in the binomial moment equations. The aim of this paper is twofold: to present a complete derivation of the binomial moment equations, and to demonstrate the applicability of the moment equations for a representative set of example networks in which stochastic effects play an important role.
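
    As a pointer to the construction, for a single species with copy number n the binomial moments are expectations of binomial coefficients (this is our paraphrase of the cited Letter; the multi-species generalization indexes B over tuples of species):

    ```latex
    B_k \;=\; \left\langle \binom{n}{k} \right\rangle
        \;=\; \sum_{n} \binom{n}{k}\, P(n),
    \qquad B_0 = 1, \quad B_1 = \langle n \rangle .
    ```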

  5. Stochastic analysis of complex reaction networks using binomial moment equations.

    PubMed

    Barzel, Baruch; Biham, Ofer

    2012-09-01

    The stochastic analysis of complex reaction networks is a difficult problem because the number of microscopic states in such systems increases exponentially with the number of reactive species. Direct integration of the master equation is thus infeasible and is most often replaced by Monte Carlo simulations. While Monte Carlo simulations are a highly effective tool, equation-based formulations are more amenable to analytical treatment and may provide deeper insight into the dynamics of the network. Here, we present a highly efficient equation-based method for the analysis of stochastic reaction networks. The method is based on the recently introduced binomial moment equations [Barzel and Biham, Phys. Rev. Lett. 106, 150602 (2011)]. The binomial moments are linear combinations of the ordinary moments of the probability distribution function of the population sizes of the interacting species. They capture the essential combinatorics of the reaction processes reflecting their stoichiometric structure. This leads to a simple and transparent form of the equations, and allows a highly efficient and surprisingly simple truncation scheme. Unlike ordinary moment equations, in which the inclusion of high order moments is prohibitively complicated, the binomial moment equations can be easily constructed up to any desired order. The result is a set of equations that enables the stochastic analysis of complex reaction networks under a broad range of conditions. The number of equations is dramatically reduced from the exponential proliferation of the master equation to a polynomial (and often quadratic) dependence on the number of reactive species in the binomial moment equations. The aim of this paper is twofold: to present a complete derivation of the binomial moment equations, and to demonstrate the applicability of the moment equations for a representative set of example networks in which stochastic effects play an important role. PMID:23030885

  6. Advanced interacting sequential Monte Carlo sampling for inverse scattering

    NASA Astrophysics Data System (ADS)

    Giraud, F.; Minvielle, P.; Del Moral, P.

    2013-09-01

    The following electromagnetism (EM) inverse problem is addressed. It consists of estimating the local radioelectric properties of materials covering an object from global EM scattering measurements, at various incidences and wave frequencies. This large-scale ill-posed inverse problem is explored by an intensive exploitation of an efficient 2D Maxwell solver, distributed on high performance computing machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, from which Bayesian inference can be performed. Considering the radioelectric properties as a hidden dynamic stochastic process that evolves according to the frequency, it is shown how advanced Markov chain Monte Carlo methods—called sequential Monte Carlo or interacting particles—can take advantage of this structure and provide local EM property estimates.
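
    A generic sequential Monte Carlo (particle filter) sweep of the kind alluded to is sketched below, with a scalar linear-Gaussian toy model standing in for the EM metamodel; the dynamics, noise levels, and resampling threshold are all assumptions made for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Sequential importance resampling: propagate particles through the hidden
    # dynamics, reweight by the measurement likelihood, resample on degeneracy.
    n_p = 1000
    x = rng.standard_normal(n_p)              # particles for the hidden state
    w = np.full(n_p, 1.0 / n_p)
    x_true = 0.0

    for _ in range(50):
        x_true = 0.95 * x_true + rng.normal(0.0, 0.1)   # hidden AR(1) dynamics
        y = x_true + rng.normal(0.0, 0.5)               # noisy measurement
        x = 0.95 * x + rng.normal(0.0, 0.1, n_p)        # propagate particles
        w *= np.exp(-0.5 * ((y - x) / 0.5) ** 2)        # Gaussian likelihood
        w /= w.sum()
        if 1.0 / (w ** 2).sum() < n_p / 2:              # low effective sample size
            x = rng.choice(x, size=n_p, p=w)            # multinomial resampling
            w = np.full(n_p, 1.0 / n_p)

    print((w * x).sum(), x_true)              # filtered estimate vs true state
    ```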

  7. Nuclear pairing within a configuration-space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Lingle, Mark; Volya, Alexander

    2015-06-01

    Pairing correlations in nuclei play a decisive role in determining nuclear drip lines, binding energies, and many collective properties. In this work a new configuration-space Monte Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, probabilistic interpretation of quantum-mechanical amplitudes, and ability to handle truly large-scale problems with defined precision and error control are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with nonconstant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and problems when the relevant configuration space is large.

  8. Fuel flexible fuel injector

    DOEpatents

    Tuthill, Richard S; Davis, Dustin W; Dai, Zhongtao

    2015-02-03

    A disclosed fuel injector provides mixing of fuel with airflow by surrounding a swirled fuel flow with first and second swirled airflows, ensuring mixing prior to or upon entering the combustion chamber. Fuel tubes produce a central fuel flow along with a central airflow through a plurality of openings to generate the high-velocity fuel/air mixture along the axis of the fuel injector, in addition to the swirled fuel/air mixture.

  9. Stochastic blind motion deblurring.

    PubMed

    Xiao, Lei; Gregson, James; Heide, Felix; Heidrich, Wolfgang

    2015-10-01

    Blind motion deblurring from a single image is a highly under-constrained problem with many degenerate solutions. A good approximation of the intrinsic image can, therefore, only be obtained with the help of prior information in the form of (often nonconvex) regularization terms for both the intrinsic image and the kernel. While the best choice of image priors is still a topic of ongoing investigation, this research is made more complicated by the fact that historically each new prior requires the development of a custom optimization method. In this paper, we develop a stochastic optimization method for blind deconvolution. Since this stochastic solver does not require the explicit computation of the gradient of the objective function and uses only efficient local evaluation of the objective, new priors can be implemented and tested very quickly. We demonstrate that this framework, in combination with different image priors, produces results with Peak Signal-to-Noise Ratio (PSNR) values that match or exceed those obtained by much more complex state-of-the-art blind motion deblurring algorithms. PMID:25974941

  10. The influence of Stochastic perturbation of Geotechnical media On Electromagnetic tomography

    NASA Astrophysics Data System (ADS)

    Song, Lei; Yang, Weihao; Huangsonglei, Jiahui; Li, HaiPeng

    2015-04-01

    Electromagnetic tomography (CT) is commonly utilized in civil engineering to detect structural defects or geological anomalies. CT is generally recognized as a high-precision geophysical method, with expected accuracy of several centimeters or even several millimeters; high-frequency antennas with short wavelengths are therefore commonly used in civil engineering. In geotechnical media, stochastic perturbations of the EM parameters inevitably exist at geological, structural, and local scales. In such cases, the geometric dimensions of the target body, the EM wavelength, and the expected accuracy may all be of the same order. When a high-frequency EM wave propagates in a stochastic geotechnical medium, the GPR signal is reflected not only from the target bodies but also from the stochastic perturbations of the background medium. To detect karst caves in dissolution-fractured rock, one needs to assess the influence of the stochastically distributed dissolution holes and fractures; to detect a void in a concrete structure, one must account for the influence of the stochastically distributed stones, and so on. In this paper, on the basis of discrete realizations of stochastic media, the authors quantitatively evaluate the influence of stochastic perturbations of geotechnical media via Radon/inverse Radon transforms through fully combined Monte Carlo numerical simulation. The stochastic noise is found to be related to the transfer angle, perturbation strength, angle interval, autocorrelation length, and other factors. A quantitative formula for the accuracy of electromagnetic tomography is also established, which can aid precision estimation of GPR tomography in stochastically perturbed geotechnical media. Key words: Stochastic Geotechnical Media; Electromagnetic Tomography; Radon/Inverse Radon Transform.

  11. Monte Carlo fluorescence microtomography

    NASA Astrophysics Data System (ADS)

    Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge

    2011-07-01

    Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light significantly degrades the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.

  12. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
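
    The Sobol-Hoeffding decomposition on which this approach relies splits the variance of an output f of independent inputs into orthogonal contributions; in standard notation:

    ```latex
    \operatorname{Var}(f) \;=\; \sum_i V_i \;+\; \sum_{i<j} V_{ij} \;+\; \cdots,
    \qquad
    S_i \;=\; \frac{V_i}{\operatorname{Var}(f)},
    \quad V_i = \operatorname{Var}\!\big( \mathbb{E}[\, f \mid x_i \,] \big),
    ```

    where S_i is the first-order sensitivity of input (here, reaction channel) i, and the higher-order terms quantify channel interactions.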

  13. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  14. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    DOE PAGES

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  15. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  16. Stochastic simulation of transport phenomena

    SciTech Connect

    Wedgewood, L.E.; Geurts, K.R.

    1995-10-01

    In this paper, four examples are given to demonstrate how stochastic simulations can be used as a method to obtain numerical solutions to transport problems. The problems considered are two-dimensional heat conduction, mass diffusion with reaction, the start-up of Poiseuille flow, and Couette flow of a suspension of Hookean dumbbells. The first three examples are standard problems with well-known analytic solutions which can be used to verify the results of the stochastic simulation. The fourth example combines a Brownian dynamics simulation for Hookean dumbbells, a crude model of a dilute polymer suspension, and a stochastic simulation for the suspending Newtonian fluid. These examples illustrate appropriate methods for handling source/sink terms and initial and boundary conditions. The stochastic simulation results compare well with the analytic solutions and other numerical solutions. The goal of this paper is to demonstrate the wide applicability of stochastic simulation as a numerical method for transport problems.
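
    The flavor of such stochastic solutions can be seen in a classic example: steady two-dimensional heat conduction (the Laplace equation) solved by fixed random walks, where the temperature at an interior point is the expected boundary temperature at the walk's exit. The lattice spacing and boundary data below are arbitrary choices, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Random-walk solution of the Laplace equation on the unit square.
    h = 0.05                                  # lattice spacing (arbitrary)
    moves = [(h, 0.0), (-h, 0.0), (0.0, h), (0.0, -h)]

    def boundary_temp(x, y):
        return 1.0 if y >= 1.0 else 0.0       # hot top edge, cold elsewhere

    def walk(x, y):
        while 0.0 < x < 1.0 and 0.0 < y < 1.0:   # wander until the boundary
            dx, dy = moves[rng.integers(4)]
            x, y = x + dx, y + dy
        return boundary_temp(x, y)

    temps = [walk(0.5, 0.5) for _ in range(5000)]
    print(np.mean(temps))                     # approx 0.25 at the center by symmetry
    ```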

  17. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    SciTech Connect

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  18. Stochastic image reconstruction for a dual-particle imaging system

    NASA Astrophysics Data System (ADS)

    Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Flaska, M.; Clarke, S. D.; Pozzi, S. A.; Tomanin, A.; Peerani, P.

    2016-02-01

    Stochastic image reconstruction has been applied to a dual-particle imaging system being designed for nuclear safeguards applications. The dual-particle imager (DPI) is a combined Compton-scatter and neutron-scatter camera capable of producing separate neutron and photon images. The stochastic origin ensembles (SOE) method was investigated as an imaging method for the DPI because only a minimal estimation of system response is required to produce images with quality that is comparable to common maximum-likelihood methods. This work contains neutron and photon SOE image reconstructions for a 252Cf point source, two mixed-oxide (MOX) fuel canisters representing point sources, and the MOX fuel canisters representing a distributed source. Simulation of the DPI using MCNPX-PoliMi is validated by comparison of simulated and measured results. Because image quality is dependent on the number of counts and iterations used, the relationship between these quantities is investigated.

  19. Stochastic techno-economic evaluation of cellulosic biofuel pathways.

    PubMed

    Zhao, Xin; Brown, Tristan R; Tyner, Wallace E

    2015-12-01

    This study evaluates the economic feasibility and stochastic dominance rank of eight cellulosic biofuel production pathways (including gasification, pyrolysis, liquefaction, and fermentation) under technological and economic uncertainty. A financial analysis based on techno-economic assessment is employed to derive net present values and breakeven prices for each pathway. Uncertainty is investigated and incorporated into fuel prices and techno-economic variables (capital cost, conversion technology yield, hydrogen cost, natural gas price, and feedstock cost) using @Risk, a Palisade Corporation software package. The results indicate that none of the eight pathways would be profitable at expected values under projected energy prices. Fast pyrolysis and hydroprocessing (FPH) has the lowest breakeven fuel price at $3.11 per gallon of gasoline equivalent ($0.82 per liter of gasoline equivalent). With the projected energy prices, FPH investors could expect a 59% probability of loss. Stochastic dominance ranking is performed based on return on investment; most risk-averse decision makers would prefer FPH to the other pathways.

  20. Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids

    SciTech Connect

    Donev, A; Alder, B J; Garcia, A L

    2008-02-26

    A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.

  1. Proof of quasi-adaptivity for the m-measurement feedback class of stochastic control policies

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1987-01-01

    Bounds on expected performance are established which show that the m-measurement feedback (mM) policy for nonlinear stochastic control performs as well as or better than the open-loop optimal control policy, and thus is quasi-adaptive in the sense of Witsenhausen (1966). The chain of performance inequalities indicates a tendency for the mM policy performance to improve with increasing m. It is suggested that the present analytical method, based on the construction of artificial control sequences denoted as utility controls, can be used to establish performance bounds on other well-known policies, avoiding the extensive Monte Carlo simulations otherwise necessary to compare stochastic control policies.

  2. Study on high order perturbation-based nonlinear stochastic finite element method for dynamic problems

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Yao, Jing-Zheng

    2010-12-01

    Several algorithms were proposed relating to the development of a framework of the perturbation-based stochastic finite element method (PSFEM) for large variation nonlinear dynamic problems. For this purpose, algorithms and a framework related to SFEM based on the stochastic virtual work principle were studied. To prove the validity and practicality of the algorithms and framework, numerical examples for nonlinear dynamic problems with large variations were calculated and compared with the Monte-Carlo Simulation method. This comparison shows that the proposed approaches are accurate and effective for the nonlinear dynamic analysis of structures with random parameters.

  3. Stochastic reconstruction of sandstones

    PubMed

    Manwart; Torquato; Hilfer

    2000-07-01

    A simulated annealing algorithm is employed to generate a stochastic model for a Berea sandstone and a Fontainebleau sandstone, each with a prescribed two-point probability function, lineal-path function, and "pore size" distribution function. We find that the temperature decrease of the annealing has to be rather quick to yield isotropic and percolating configurations. A comparison of simple morphological quantities indicates good agreement between the reconstructions and the original sandstones. Also, the mean survival time of a random walker in the pore space is reproduced with good accuracy. However, a more detailed investigation by means of local porosity theory shows that there may be significant differences in the geometrical connectivity between the reconstructed and the experimental samples.

  4. Stochastic patch exploitation model

    PubMed Central

    Rita, H.; Ranta, E.

    1998-01-01

    A solitary animal is foraging in a patch consisting of discrete prey items. We develop a stochastic model for the accumulation of gain as a function of elapsed time in the patch. The model is based on the waiting times between subsequent encounters with the prey items. The novelty of the model is that it renders possible, via parameterization of the waiting time distributions, the incorporation of different foraging situations and patch structures into the gain process. The flexibility of the model is demonstrated with different foraging scenarios. The dependence of the gain expectation and variance on the parameters of the waiting times is studied under these conditions. The model allows us to comment upon some of the basic concepts in contemporary foraging theory.
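
    A minimal rendering of such a waiting-time-based gain process is sketched below, under one assumed parameterization (exponential waiting times whose mean grows with the number of items already taken, mimicking patch depletion); the model in the paper admits many other choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Gain accumulation in a patch as a renewal process: each prey item is
    # found after a random waiting time, with depletion slowing encounters.
    def gain_at(t_total, base_wait=1.0, depletion=0.3, n_rep=2000):
        gains = np.empty(n_rep)
        for r in range(n_rep):
            t, n = 0.0, 0
            while True:
                t += rng.exponential(base_wait * (1 + depletion * n))
                if t > t_total:
                    break
                n += 1
            gains[r] = n
        return gains

    g = gain_at(10.0)
    print(g.mean(), g.var())                  # gain expectation and variance
    ```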

  5. Adaptive hybrid simulations for multiscale stochastic reaction networks

    SciTech Connect

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-21

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.

  6. Frost in Charitum Montes

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-387, 10 June 2003

    This is a Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle view of the Charitum Montes, south of Argyre Planitia, in early June 2003. The seasonal south polar frost cap, composed of carbon dioxide, has been retreating southward through this area since spring began a month ago. The bright features toward the bottom of this picture are surfaces covered by frost. The picture is located near 57°S, 43°W. North is at the top, south is at the bottom. Sunlight illuminates the scene from the upper left. The area shown is about 217 km (135 miles) wide.

  7. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  8. A stochastic multi-symplectic scheme for stochastic Maxwell equations with additive noise

    SciTech Connect

    Hong, Jialin; Zhang, Liying

    2014-07-01

    In this paper we investigate a stochastic multi-symplectic method for stochastic Maxwell equations with additive noise. Based on the stochastic version of variational principle, we find a way to obtain the stochastic multi-symplectic structure of three-dimensional (3-D) stochastic Maxwell equations with additive noise. We propose a stochastic multi-symplectic scheme and show that it preserves the stochastic multi-symplectic conservation law and the local and global stochastic energy dissipative properties, which the equations themselves possess. Numerical experiments are performed to verify the numerical behaviors of the stochastic multi-symplectic scheme.

  9. Improved diffusion Monte Carlo and the Brownian fan

    NASA Astrophysics Data System (ADS)

    Weare, J.; Hairer, M.

    2012-12-01

    Diffusion Monte Carlo (DMC) is a workhorse of stochastic computing. It was invented forty years ago as the central component in a Monte Carlo technique for estimating various characteristics of quantum mechanical systems. Since then it has been applied in a huge number of fields, often as a central component in sequential Monte Carlo techniques (e.g. the particle filter). DMC computes averages of some underlying stochastic dynamics weighted by a functional of the path of the process. The weight functional could represent the potential term in a Feynman-Kac representation of a partial differential equation (as in quantum Monte Carlo) or it could represent the likelihood of a sequence of noisy observations of the underlying system (as in particle filtering). DMC alternates between an evolution step in which a collection of samples of the underlying system are evolved for some short time interval, and a branching step in which, according to the weight functional, some samples are copied and some samples are eliminated. Unfortunately, for certain choices of the weight functional, DMC fails to have a meaningful limit as one decreases the evolution time interval between branching steps. We propose a modification of the standard DMC algorithm. The new algorithm has a lower variance per workload, regardless of the regime considered. In particular, it makes it feasible to use DMC in situations where the "naive" generalization of the standard algorithm would be impractical, due to an exponential explosion of its variance. We numerically demonstrate the effectiveness of the new algorithm on a standard rare event simulation problem (probability of an unlikely transition in a Lennard-Jones cluster), as well as a high-frequency data assimilation problem. We then provide a detailed heuristic explanation of why, in the case of rare event simulation, the new algorithm is expected to converge to a limiting process as the underlying stepsize goes to 0. This is shown
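
    The evolve/branch alternation described above can be sketched in its plainest textbook form, for a 1-D harmonic oscillator with V(x) = x^2/2 (exact ground-state energy 1/2); this is naive DMC, not the authors' improved algorithm, and the time step, target population, and feedback gain are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    dt, n_target = 0.01, 2000
    walkers = rng.standard_normal(n_target)
    e_ref = 0.5                               # reference energy (updated below)

    for step in range(2000):
        # Evolution step: free diffusion over the interval dt.
        walkers = walkers + np.sqrt(dt) * rng.standard_normal(walkers.size)
        # Branching step: Feynman-Kac weight from the potential term.
        w = np.exp(-dt * (0.5 * walkers ** 2 - e_ref))
        copies = (w + rng.random(walkers.size)).astype(int)   # int(w + u) copies
        walkers = np.repeat(walkers, copies)
        # Nudge the reference energy to hold the population near the target.
        e_ref -= 0.01 * np.log(walkers.size / n_target)

    print(e_ref)                              # fluctuates around 0.5
    ```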

  10. Stochastic roots of growth phenomena

    NASA Astrophysics Data System (ADS)

    De Lauro, E.; De Martino, S.; De Siena, S.; Giorno, V.

    2014-05-01

    We show that the Gompertz equation describes the evolution in time of the median of a geometric stochastic process. Therefore, we infer that the process itself generates the growth. This result further allows us to exploit a stochastic variational principle to take account of self-regulation of growth through feedback of relative density variations. The conceptually well-defined framework so introduced shows its usefulness by suggesting a form of control of growth through external actions.

  11. Stochastic superparameterization in quasigeostrophic turbulence

    SciTech Connect

    Grooms, Ian; Majda, Andrew J.

    2014-08-15

    In this article we expand and develop the authors' recent proposed methodology for efficient stochastic superparameterization algorithms for geophysical turbulence. Geophysical turbulence is characterized by significant intermittent cascades of energy from the unresolved to the resolved scales resulting in complex patterns of waves, jets, and vortices. Conventional superparameterization simulates large scale dynamics on a coarse grid in a physical domain, and couples these dynamics to high-resolution simulations on periodic domains embedded in the coarse grid. Stochastic superparameterization replaces the nonlinear, deterministic eddy equations on periodic embedded domains by quasilinear stochastic approximations on formally infinite embedded domains. The result is a seamless algorithm which never uses a small scale grid and is far cheaper than conventional SP, but with significant success in difficult test problems. Various design choices in the algorithm are investigated in detail here, including decoupling the timescale of evolution on the embedded domains from the length of the time step used on the coarse grid, and sensitivity to certain assumed properties of the eddies (e.g. the shape of the assumed eddy energy spectrum). We present four closures based on stochastic superparameterization which elucidate the properties of the underlying framework: a ‘null hypothesis’ stochastic closure that uncouples the eddies from the mean, a stochastic closure with nonlinearly coupled eddies and mean, a nonlinear deterministic closure, and a stochastic closure based on energy conservation. The different algorithms are compared and contrasted on a stringent test suite for quasigeostrophic turbulence involving two-layer dynamics on a β-plane forced by an imposed background shear. The success of the algorithms developed here suggests that they may be fruitfully applied to more realistic situations. They are expected to be particularly useful in providing accurate and

  12. Stochastic Evolution of Halo Spin

    NASA Astrophysics Data System (ADS)

    Kim, Juhan

    2015-08-01

    We will introduce an excursion set model for the evolution of halo spin from cosmological N-body simulations. A stochastic differential equation is derived from the definition of halo spin, and the distribution of angular momentum changes is measured from simulations. The log-normal distribution of halo spin is found to be a natural consequence of the stochastic differential equation, and the resulting spin distribution is found to be a function of local environment, halo mass, and redshift.

  13. Stochastic superparameterization in quasigeostrophic turbulence

    NASA Astrophysics Data System (ADS)

    Grooms, Ian; Majda, Andrew J.

    2014-08-01

    In this article we expand and develop the authors' recent proposed methodology for efficient stochastic superparameterization algorithms for geophysical turbulence. Geophysical turbulence is characterized by significant intermittent cascades of energy from the unresolved to the resolved scales resulting in complex patterns of waves, jets, and vortices. Conventional superparameterization simulates large scale dynamics on a coarse grid in a physical domain, and couples these dynamics to high-resolution simulations on periodic domains embedded in the coarse grid. Stochastic superparameterization replaces the nonlinear, deterministic eddy equations on periodic embedded domains by quasilinear stochastic approximations on formally infinite embedded domains. The result is a seamless algorithm which never uses a small scale grid and is far cheaper than conventional SP, but with significant success in difficult test problems. Various design choices in the algorithm are investigated in detail here, including decoupling the timescale of evolution on the embedded domains from the length of the time step used on the coarse grid, and sensitivity to certain assumed properties of the eddies (e.g. the shape of the assumed eddy energy spectrum). We present four closures based on stochastic superparameterization which elucidate the properties of the underlying framework: a ‘null hypothesis' stochastic closure that uncouples the eddies from the mean, a stochastic closure with nonlinearly coupled eddies and mean, a nonlinear deterministic closure, and a stochastic closure based on energy conservation. The different algorithms are compared and contrasted on a stringent test suite for quasigeostrophic turbulence involving two-layer dynamics on a β-plane forced by an imposed background shear. The success of the algorithms developed here suggests that they may be fruitfully applied to more realistic situations. They are expected to be particularly useful in providing accurate and

  14. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While

  15. Numerical solution of the Stratonovich- and Ito–Euler equations: Application to the stochastic piston problem

    SciTech Connect

    Zhang, Zhongqiang; Yang, Xiu; Lin, Guang; Karniadakis, George Em

    2013-03-01

    We consider a piston with a velocity perturbed by Brownian motion moving into a straight tube filled with a perfect gas at rest. The shock generated ahead of the piston can be located by solving the one-dimensional Euler equations driven by white noise using the Stratonovich or Ito formulations. We approximate the Brownian motion with its spectral truncation and subsequently apply stochastic collocation using either sparse grid or the quasi-Monte Carlo (QMC) method. In particular, we first transform the Euler equations with an unsteady stochastic boundary into stochastic Euler equations over a fixed domain with a time-dependent stochastic source term. We then solve the transformed equations by splitting them up into two parts, i.e., a ‘deterministic part’ and a ‘stochastic part’. Numerical results verify the Stratonovich–Euler and Ito–Euler models against stochastic perturbation results, and demonstrate the efficiency of sparse grid and QMC for small and large random piston motions, respectively. The variance of shock location of the piston grows cubically in the case of white noise in contrast to colored noise reported in [1], where the variance of shock location grows quadratically with time for short times and linearly for longer times.

  16. Delayed stochastic control

    NASA Astrophysics Data System (ADS)

    Hosaka, Tadaaki; Ohira, Toru; Lucian, Christian; Milton, John

    2005-03-01

    Time-delayed feedback control becomes problematic in situations in which the time constant of the system is fast compared to the feedback reaction time. In particular, when perturbations are unpredictable, traditional feedback or feed-forward control schemes can be insufficient. Nonetheless, a human can balance a stick at the fingertip in the presence of fluctuations that occur on time scales shorter than neural reaction times. Here we study a simple model of a repulsive delayed random walk and demonstrate that the interplay between noise and delay can transiently stabilize an unstable fixed point. This observation leads to the concept of "delayed stochastic control," i.e., stabilization of tasks, such as stick balancing at the fingertip, by optimally tuning the noise level with respect to the feedback delay time. References: (1) J.L. Cabrera and J.G. Milton, PRL 89, 158702 (2002); (2) T. Ohira and J.G. Milton, PRE 52, 3277 (1995); (3) T. Hosaka, T. Ohira, C. Lucian, J.L. Cabrera, and J.G. Milton, Prog. Theor. Phys. (to appear).
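
    A minimal sketch of a repulsive delayed random walk in this spirit (our reading of the model class, not the authors' exact formulation): each step is biased away from the origin, but the bias acts on the position a fixed delay in the past. The bias strength and delays below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def delayed_walk(n_steps=10000, delay=20, bias=0.7):
        """Unit-step walk biased away from 0, driven by the delayed position."""
        x = np.zeros(n_steps)
        for t in range(1, n_steps):
            x_delayed = x[max(t - 1 - delay, 0)]
            p_up = bias if x_delayed > 0 else 1 - bias   # push away from origin
            x[t] = x[t - 1] + (1 if rng.random() < p_up else -1)
        return x

    for delay in (0, 10, 40):
        x = delayed_walk(delay=delay)
        print(delay, np.abs(x).mean())        # typical distance from the origin
    ```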

  17. Turbulence and Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Celani, Antonio; Mazzino, Andrea; Pumir, Alain

    In 1931 the monograph Analytical Methods in Probability Theory appeared, in which A.N. Kolmogorov laid the foundations for the modern theory of Markov processes [1]. According to Gnedenko: "In the history of probability theory it is difficult to find other works that changed the established points of view and basic trends in research work in such a decisive way". Ten years later, his article on fully developed turbulence provided the framework within which most, if not all, of the subsequent theoretical investigations have been conducted [2] (see e.g. the review by Biferale et al. in this volume [3]). Remarkably, the greatest advances made in the last few years towards a thorough understanding of turbulence developed from the successful marriage between the theory of stochastic processes and the phenomenology of turbulent transport of scalar fields. In this article we summarize these recent developments, which expose the direct link between the intermittency of transported fields and the statistical properties of particle trajectories advected by the turbulent flow (see also [4] and, for a more thorough review, [5]). We also discuss the perspectives of the Lagrangian approach beyond passive scalars, especially for the modeling of hydrodynamic turbulence.

  18. A stochastic model for solute transport in macroporous soils

    SciTech Connect

    Bruggeman, A.C.; Mostaghimi, S.; Brannan, K.M.

    1999-12-01

    A stochastic, physically based, finite element model for simulating flow and solute transport in soils with macropores (MICMAC) was developed. The MICMAC model simulates preferential movement of water and solutes using a cylindrical macropore located in the center of a soil column. MICMAC uses Monte Carlo simulation to represent the stochastic processes inherent to the soil-water system. The model simulates a field as a collection of non-interacting soil columns. The random soil properties are assumed to be stationary in the horizontal direction and ergodic over the field. A routine for the generation of correlated, non-normal random variates was developed for MICMAC's stochastic component. The model was applied to fields located in the Nomini Creek Watershed, Virginia, where extensive field data were collected in fields under either conventional tillage or no-tillage for the evaluation of the MICMAC model. The field application suggested that the model underestimated the fast leaching of water and solutes from the root zone. However, the computed results were substantially better than those obtained when no preferential flow component was included in the model.

  19. Fluorescence Correlation Spectroscopy and Nonlinear Stochastic Reaction-Diffusion

    SciTech Connect

    Del Razo, Mauricio; Pan, Wenxiao; Qian, Hong; Lin, Guang

    2014-05-30

    The currently existing theory of fluorescence correlation spectroscopy (FCS) is based on the linear fluctuation theory originally developed by Einstein, Onsager, Lax, and others as a phenomenological approach to equilibrium fluctuations in bulk solutions. For mesoscopic reaction-diffusion systems with nonlinear chemical reactions among a small number of molecules, a situation often encountered in single-cell biochemistry, it is expected that FCS time correlation functions of a reaction-diffusion system can deviate from the classic results of Elson and Magde [Biopolymers (1974) 13:1-27]. We first discuss this nonlinear effect for reaction systems without diffusion. For nonlinear stochastic reaction-diffusion systems there are no closed solutions; therefore, stochastic Monte Carlo simulations are carried out. We show that the deviation is small for a simple bimolecular reaction; the most significant deviations occur when the number of molecules is small and of the same order. Extending Delbrück-Gillespie's theory for stochastic nonlinear reactions with rapid stirring to reaction-diffusion systems provides a mesoscopic model for chemical and biochemical reactions at the nanometric and mesoscopic levels, such as in a single biological cell.

  20. A stochastic transcriptional switch model for single cell imaging data

    PubMed Central

    Hey, Kirsty L.; Momiji, Hiroshi; Featherstone, Karen; Davis, Julian R.E.; White, Michael R.H.; Rand, David A.; Finkenstädt, Bärbel

    2015-01-01

    Gene expression is made up of inherently stochastic processes within single cells and can be modeled through stochastic reaction networks (SRNs). In particular, SRNs capture the features of intrinsic variability arising from intracellular biochemical processes. We extend current models for gene expression to allow the transcriptional process within an SRN to follow a random step or switch function which may be estimated using reversible jump Markov chain Monte Carlo (MCMC). This stochastic switch model provides a generic framework to capture many different dynamic features observed in single cell gene expression. Inference for such SRNs is challenging due to the intractability of the transition densities. We derive a model-specific birth–death approximation and study its use for inference in comparison with the linear noise approximation where both approximations are considered within the unifying framework of state-space models. The methodology is applied to synthetic as well as experimental single cell imaging data measuring expression of the human prolactin gene in pituitary cells. PMID:25819987

  1. Global parameter estimation methods for stochastic biochemical systems

    PubMed Central

    2010-01-01

    Background The importance of stochasticity in cellular processes having a low number of molecules has resulted in the development of stochastic models such as the chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for the end-applications like analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited, and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible down to single-molecule levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while the maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions The parameter estimation methodologies

  2. Quantum Gibbs ensemble Monte Carlo

    SciTech Connect

    Fantoni, Riccardo; Moroni, Saverio

    2014-09-21

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

  3. Stochastic BER estimation for coherent QPSK transmission systems with digital carrier phase recovery.

    PubMed

    Zhang, Fan; Gao, Yan; Luo, Yazhi; Chen, Zhangyuan; Xu, Anshi

    2010-04-26

    We propose a stochastic bit error ratio estimation approach based on a statistical analysis of the retrieved signal phase for coherent optical QPSK systems with digital carrier phase recovery. A family of generalized exponential functions is applied to fit the probability density function of the signal samples. The method provides reasonable performance estimation in the presence of both linear and nonlinear transmission impairments while greatly reducing the computational burden compared to Monte Carlo simulation.

  4. Stochastic uncertainty analysis for unconfined flow systems

    USGS Publications Warehouse

    Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming

    2006-01-01

    A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (ln KS) is first expanded into a series in terms of orthogonal Gaussian standard random variables, with the coefficients obtained from the eigenvalues and eigenfunctions of the covariance function of ln KS. Next, head h is decomposed as a perturbation expansion series Σm h(m), where h(m) represents the mth-order head term with respect to the standard deviation of ln KS. Then h(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients h(m)i1,i2,...,im are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on h(m)i1,i2,...,im. A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort than the traditional Monte Carlo simulation technique. Copyright 2006 by the American Geophysical Union.

  5. Parallel stochastic systems biology in the cloud.

    PubMed

    Aldinucci, Marco; Torquati, Massimo; Spampinato, Concetto; Drocco, Maurizio; Misale, Claudia; Calcagno, Cristina; Coppo, Mario

    2014-09-01

    The stochastic modelling of biological systems, coupled with Monte Carlo simulation of models, is an increasingly popular technique in bioinformatics. The simulation-analysis workflow can be computationally expensive, reducing the interactivity required in model tuning. In this work, we advocate high-level software design as a vehicle for building efficient and portable parallel simulators for the cloud. In particular, the Calculus of Wrapped Components (CWC) simulator for systems biology, which is designed according to the FastFlow pattern-based approach, is presented and discussed. Thanks to the FastFlow framework, the CWC simulator is designed as a high-level workflow that can simulate CWC models, merge simulation results and statistically analyse them in a single parallel workflow in the cloud. To improve interactivity, successive phases are pipelined in such a way that the workflow begins to output a stream of analysis results immediately after simulation is started. Performance and effectiveness of the CWC simulator are validated on the Amazon Elastic Compute Cloud.

  6. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  7. A method for stochastic constrained optimization using derivative-free surrogate pattern search and collocation

    SciTech Connect

    Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.

    2010-06-20

    Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued with uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. The stochastic collocation scheme is employed for stochastic variables whereas Kriging-based surrogate functions are employed for the cost function. This approach is tested on four numerical optimization problems and is shown to yield a significant improvement in efficiency over traditional Monte Carlo schemes. Problems with multiple probabilistic constraints are also discussed.

  8. Stochastic response and bifurcation of periodically driven nonlinear oscillators by the generalized cell mapping method

    NASA Astrophysics Data System (ADS)

    Han, Qun; Xu, Wei; Sun, Jian-Qiao

    2016-09-01

    The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The transition probability density functions over a small fraction of the period are obtained with the STGA scheme and then used to assemble the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs as the smoothness parameter decreases, corresponding to the deterministic pitchfork bifurcation.

  9. A Stochastic-Variational Model for Soft Mumford-Shah Segmentation

    PubMed Central

    2006-01-01

    In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059

  10. Investigation of stochastic radiation transport methods in random heterogeneous mixtures

    NASA Astrophysics Data System (ADS)

    Reinert, Dustin Ray

    Among the most formidable challenges facing our world is the need for safe, clean, affordable energy sources. Growing concerns over global warming induced climate change and the rising costs of fossil fuels threaten conventional means of electricity production and are driving the current nuclear renaissance. One concept at the forefront of international development efforts is the High Temperature Gas-Cooled Reactor (HTGR). With numerous passive safety features and a meltdown-proof design capable of attaining high thermodynamic efficiencies for electricity generation as well as high temperatures useful for the burgeoning hydrogen economy, the HTGR is an extremely promising technology. Unfortunately, the fundamental understanding of neutron behavior within HTGR fuels lags far behind that of more conventional water-cooled reactors. HTGRs utilize a unique heterogeneous fuel element design consisting of thousands of tiny fissile fuel kernels randomly mixed with a non-fissile graphite matrix. Monte Carlo neutron transport simulations of the HTGR fuel element geometry in its full complexity are infeasible and this has motivated the development of more approximate computational techniques. A series of MATLAB codes was written to perform Monte Carlo simulations within HTGR fuel pebbles to establish a comprehensive understanding of the parameters under which the accuracy of the approximate techniques diminishes. This research identified the accuracy of the chord length sampling method to be a function of the matrix scattering optical thickness, the kernel optical thickness, and the kernel packing density. Two new Monte Carlo methods designed to focus the computational effort upon the parameter conditions shown to contribute most strongly to the overall computational error were implemented and evaluated. An extended memory chord length sampling routine that recalls a neutron's prior material traversals was demonstrated to be effective in fixed source calculations containing

  11. Isotropic Monte Carlo Grain Growth

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
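
    A minimal sketch of the Potts-style Monte Carlo grain-growth scheme that such a code implements, written on a square rather than a hexagonal grid for brevity; the lattice size, number of orientations, and zero-temperature acceptance rule are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      L, Q, attempts = 64, 32, 200_000          # lattice size, orientations, MC attempts
      spins = rng.integers(Q, size=(L, L))      # random initial grain structure

      def unlike_neighbors(s, i, j):
          """Isotropic boundary energy: count of unlike neighbors, periodic BCs."""
          v = s[i, j]
          return sum(n != v for n in (s[(i - 1) % L, j], s[(i + 1) % L, j],
                                      s[i, (j - 1) % L], s[i, (j + 1) % L]))

      for _ in range(attempts):
          i, j = rng.integers(L), rng.integers(L)
          old, new = spins[i, j], rng.integers(Q)
          e_old = unlike_neighbors(spins, i, j)
          spins[i, j] = new
          if unlike_neighbors(spins, i, j) > e_old:  # zero-temperature Metropolis
              spins[i, j] = old                      # reject energy-raising flips

      print("distinct grain orientations remaining:", len(np.unique(spins)))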

  12. Stochastic generation of hourly rainstorm events in Johor

    SciTech Connect

    Nojumuddin, Nur Syereena; Yusof, Fadhilah; Yusop, Zulkifli

    2015-02-03

    Engineers and researchers in water-related studies are often faced with the problem of insufficiently long rainfall records. Practical and effective methods must be developed to generate unavailable data from the limited available data. Therefore, this paper presents a Monte Carlo-based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best fit of storm characteristics. Using maximum likelihood estimation (MLE) and the Anderson-Darling goodness-of-fit test, the lognormal distribution was found to give the best fit, so the Monte Carlo simulation was based on the lognormal distribution. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of the observed rainstorm events under 10 years and simulated rainstorm events under 30 years of rainfall records with those under the entire 40 years of observed rainfall data, based on the hourly rainfall data at station J1 in Johor over the period 1972-2011. The absolute percentage errors of the duration-depth, duration-inter-event time and depth-inter-event time relationships were used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity-duration-frequency relationships in Johor.
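
    A toy version of the fitting-and-generation step described above, assuming lognormally distributed storm depths; the synthetic "observed" record stands in for the station J1 data, and all parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(42)
      # Hypothetical observed storm depths (mm); stands in for the station J1 record.
      observed = rng.lognormal(mean=2.0, sigma=0.8, size=120)

      # MLE for a lognormal: sample mean and std of the log-transformed data.
      mu_hat = np.log(observed).mean()
      sigma_hat = np.log(observed).std(ddof=1)

      # Monte Carlo generation of synthetic storm depths from the fitted law.
      synthetic = rng.lognormal(mu_hat, sigma_hat, size=10_000)

      for k in range(1, 5):  # compare the first four product-moments
          print(f"moment {k}: observed={np.mean(observed**k):10.3g}  "
                f"simulated={np.mean(synthetic**k):10.3g}")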

  13. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    SciTech Connect

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; however, the logarithmic schedule is so slow that the required CPU time is prohibitive in practice. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic schedule, e.g., a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
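
    To make the schedule comparison concrete, here is a bare-bones random-walk annealer on a toy multimodal landscape, contrasting square-root and logarithmic cooling. This sketch omits the stochastic approximation machinery of the actual algorithm, and the test function and tuning constants are assumptions.

      import numpy as np

      def anneal(schedule, steps=20_000, seed=0):
          """Random-walk Metropolis annealing on a 1-D multimodal test function."""
          rng = np.random.default_rng(seed)
          f = lambda x: x**2 + 10.0 * np.sin(3.0 * x)   # toy energy landscape
          x = best = 5.0
          for k in range(1, steps + 1):
              temp = schedule(k)
              y = x + rng.normal(scale=0.5)             # propose a local move
              if rng.random() < np.exp(min(0.0, -(f(y) - f(x)) / temp)):
                  x = y                                 # Metropolis acceptance
              if f(x) < f(best):
                  best = x
          return best

      sqrt_cool = lambda k: 1.0 / np.sqrt(k)            # square-root schedule
      log_cool = lambda k: 1.0 / np.log(k + 1.0)        # classical logarithmic schedule
      print("sqrt cooling:", anneal(sqrt_cool))
      print("log cooling :", anneal(log_cool))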

  14. A fully coupled Monte Carlo/discrete ordinates solution to the neutron transport equation. Final report

    SciTech Connect

    Filippone, W.L.; Baker, R.S.

    1990-12-31

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_N region. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, diffusion synthetic acceleration remains effective in accelerating the S_N calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.

  15. A Stochastic Collocation Algorithm for Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. Whereas the standard Galerkin approach to polynomial chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method collapses those summations to a one-dimensional summation only. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides, as a numerical example, the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
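
    The collapse to a one-dimensional summation can be illustrated in a toy setting with a single Gaussian input parameter: output statistics are recovered from a handful of deterministic model runs at Gauss-Hermite collocation nodes. The model function and node count are illustrative assumptions.

      import numpy as np

      # Toy "simulation" output depending on one Gaussian parameter xi ~ N(0, 1).
      u = lambda xi: np.exp(0.3 * xi)

      # Stochastic collocation: run the model only at Gauss-Hermite nodes and
      # collapse the statistics to a one-dimensional quadrature sum.
      nodes, weights = np.polynomial.hermite_e.hermegauss(8)  # probabilists' rule
      w = weights / weights.sum()                             # normalize the weights
      mean_sc = np.sum(w * u(nodes))
      var_sc = np.sum(w * (u(nodes) - mean_sc) ** 2)

      # Exact lognormal moments for comparison.
      print(f"mean: {mean_sc:.6f} vs exact {np.exp(0.045):.6f}")
      print(f"var : {var_sc:.6f} vs exact {(np.exp(0.09) - 1.0) * np.exp(0.09):.6f}")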

  16. Enhanced algorithms for stochastic programming

    SciTech Connect

    Krishna, A.S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient, as it reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function; this achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before a stochastic solution, both to provide a starting point and to speed up the algorithm by making use of information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.

  17. Modeling bacterial population growth from stochastic single-cell dynamics.

    PubMed

    Alonso, Antonio A; Molina, Ignacio; Theodoropoulos, Constantinos

    2014-09-01

    A few bacterial cells may be sufficient to produce a food-borne illness outbreak, provided that they are capable of adapting and proliferating on a food matrix. This is why any quantitative health risk assessment policy must incorporate methods to accurately predict the growth of bacterial populations from a small number of pathogens. To this end, mathematical models have become a powerful tool. Unfortunately, at low cell concentrations, standard deterministic models fail to predict the fate of the population, essentially because the heterogeneity between individuals becomes relevant. In this work, a stochastic differential equation (SDE) model is proposed to describe variability within single-cell growth and division and to simulate population growth from a given initial number of individuals. We provide evidence of the model's ability to explain the observed distributions of times to division, including the lag time produced by the adaptation to the environment, by comparing model predictions with experiments from the literature for Escherichia coli, Listeria innocua, and Salmonella enterica. The model is shown to accurately predict experimental growth population dynamics for both small and large microbial populations. The use of stochastic models for the estimation of parameters to successfully fit experimental data is a particularly challenging problem. For instance, if Monte Carlo methods are employed to model the required distributions of times to division, the parameter estimation problem can become numerically intractable. We overcame this limitation by converting the stochastic description to a partial differential equation (backward Kolmogorov) instead, which relates to the distribution of division times. Contrary to previous stochastic formulations based on random parameters, the present model is capable of explaining the variability observed in populations that result from the growth of a small number of initial cells as well as the lack of it compared to
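
    A minimal sketch of simulating times to division with an SDE, assuming geometric Brownian single-cell growth and division at a fixed size threshold; the threshold rule and all parameter values are illustrative and not the authors' fitted model.

      import numpy as np

      rng = np.random.default_rng(7)
      mu, sigma, dt = 1.0, 0.3, 1e-3       # growth rate, noise intensity, time step

      def time_to_division(v0=1.0, v_div=2.0):
          """Euler-Maruyama integration of dV = mu*V dt + sigma*V dW until V doubles."""
          v, t = v0, 0.0
          while v < v_div:
              v += mu * v * dt + sigma * v * np.sqrt(dt) * rng.normal()
              t += dt
          return t

      taus = np.array([time_to_division() for _ in range(500)])
      print(f"division time: mean = {taus.mean():.3f}, CV = {taus.std() / taus.mean():.2f}")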

  18. Criticality of spent reactor fuel

    SciTech Connect

    Harris, D.R.

    1987-01-01

    The storage capacity of spent reactor fuel pools can be greatly increased by consolidation. In this process, the fuel rods are removed from reactor fuel assemblies and are stored in close-packed arrays in a canister or skeleton. An earlier study examined criticality considerations for consolidation of Westinghouse fuel, assumed to be fresh, in canisters at the Millstone-2 spent-fuel pool and in the General Electric IF-300 shipping cask. The conclusions were that the fuel rods in the canister are so deficient in water that they are adequately subcritical, both in normal and in off-normal conditions. One potential accident, the water spill event, remained unresolved in the earlier study. A methodology is developed here for spent-fuel criticality and is applied to the water spill event. The methodology utilizes LEOPARD to compute few-group cross sections for the diffusion code PDQ7, which is then used to compute reactivity. These codes give results for fresh fuel that are in good agreement with KENO IV-NITAWL Monte Carlo results, which themselves are in good agreement with continuous energy Monte Carlo calculations. These methodologies are in reasonable agreement with critical measurements for undepleted fuel.

  19. Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods

    SciTech Connect

    Carter, L L; Lan, J S; Schwarz, R A

    1991-01-01

    This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice-within-lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a "brute force" description of the hexagonal assemblies are also given.

  20. Detector-selection technique for Monte Carlo transport in azimuthally symmetric geometries

    SciTech Connect

    Hoffman, T.J.; Tang, J.S.; Parks, C.V.

    1982-01-01

    Many radiation transport problems contain geometric symmetries which are not exploited in obtaining their Monte Carlo solutions. An important class of problems is that in which the geometry is symmetric about an axis. These problems arise in the analyses of a reactor core or shield, spent fuel shipping casks, tanks containing radioactive solutions, radiation transport in the atmosphere (air-over-ground problems), etc. Although amenable to deterministic solution, such problems can often be solved more efficiently and accurately with the Monte Carlo method. For this class of problems, a technique is described in this paper which significantly reduces the variance of the Monte Carlo-calculated effect of interest at point detectors.

  1. Discrete Diffusion Monte Carlo for grey Implicit Monte Carlo simulations.

    SciTech Connect

    Densmore, J. D.; Urbatsch, T. J.; Evans, T. M.; Buksas, M. W.

    2005-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a hybrid transport-diffusion method for Monte Carlo simulations in diffusive media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Thus, DDMC produces accurate solutions while increasing the efficiency of the Monte Carlo calculation. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for grey Implicit Monte Carlo calculations. First, we employ a diffusion equation that is discretized in space but continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. In addition, we treat particles incident on an optically thick region using the asymptotic diffusion-limit boundary condition. This interface technique can produce accurate solutions even if the incident particles are distributed anisotropically in angle. Finally, we develop a method for estimating radiation momentum deposition during the DDMC simulation. With a set of numerical examples, we demonstrate the accuracy and efficiency of our improved DDMC method.

  2. Crossing the mesoscale no-mans land via parallel kinetic Monte Carlo.

    SciTech Connect

    Garcia Cardona, Cristina; Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander; Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan

    2009-10-01

    The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3-year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.

  3. Nonlinear optimization for stochastic simulations.

    SciTech Connect

    Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.

    2003-12-01

    This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.

  4. Stochastic determination of matrix determinants.

    PubMed

    Dorn, Sebastian; Ensslin, Torsten A

    2015-07-01

    Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations (matrices) acting on the data are often not accessible directly but are only represented indirectly in the form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there has been no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
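
    The flavor of such probing can be shown on a small dense example, using the identity log det A = tr(log A) and random +/-1 probe vectors. Forming log(A) explicitly here is only for clarity; a genuinely matrix-free scheme, such as the paper's integral representation, would avoid it. The test matrix and probe count are assumptions.

      import numpy as np
      from scipy.linalg import logm

      rng = np.random.default_rng(3)
      n = 200
      B = rng.normal(size=(n, n))
      A = B @ B.T / n + np.eye(n)        # symmetric positive definite test matrix

      # Probing: log det A = tr(log A) ~ mean of z^T log(A) z over random
      # +/-1 probe vectors z (in principle only matrix-vector products are needed).
      L = logm(A)
      probes = rng.choice([-1.0, 1.0], size=(64, n))
      estimate = np.mean([z @ L @ z for z in probes])

      sign, exact = np.linalg.slogdet(A)
      print(f"probed log-det = {estimate:.2f}, exact = {exact:.2f}")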

  5. Mechanical autonomous stochastic heat engines

    NASA Astrophysics Data System (ADS)

    Serra-Garcia, Marc; Foehr, Andre; Moleron, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara

    Stochastic heat engines extract work from the Brownian motion of a set of particles out of equilibrium. So far, experimental demonstrations of stochastic heat engines have required extreme operating conditions or nonautonomous external control systems. In this talk, we will present a simple, purely classical, autonomous stochastic heat engine that uses the well-known tension-induced nonlinearity in a string. Our engine operates between two heat baths out of equilibrium, and transfers energy from the hot bath to a work reservoir. This energy transfer occurs even if the work reservoir is at a higher temperature than the hot reservoir. The talk will cover a theoretical investigation and experimental results on a macroscopic setup subject to external noise excitations. This system presents an opportunity for the study of nonequilibrium thermodynamics and is an interesting candidate for innovative energy conversion devices.

  6. Principal axes for stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Vasconcelos, V. V.; Raischel, F.; Haase, M.; Peinke, J.; Wächter, M.; Lind, P. G.; Kleinhans, D.

    2011-09-01

    We introduce a general procedure for directly ascertaining how many independent stochastic sources exist in a complex system modeled through a set of coupled Langevin equations of arbitrary dimension. The procedure is based on the computation of the eigenvalues and the corresponding eigenvectors of local diffusion matrices. We demonstrate our algorithm by applying it to two examples of systems showing Hopf bifurcation. We argue that computing the eigenvectors associated to the eigenvalues of the diffusion matrix at local mesh points in the phase space enables one to define vector fields of stochastic eigendirections. In particular, the eigenvector associated to the lowest eigenvalue defines the path of minimum stochastic forcing in phase space, and a transform to a new coordinate system aligned with the eigenvectors can increase the predictability of the system.

  7. Equivalence of on-Lattice Stochastic Chemical Kinetics with the Well-Mixed Chemical Master Equation in the Limit of Fast Diffusion.

    PubMed

    Stamatakis, Michail; Vlachos, Dionisios G

    2011-12-14

    Well-mixed and lattice-based descriptions of stochastic chemical kinetics have been extensively used in the literature. Realizations of the corresponding stochastic processes are obtained by the Gillespie stochastic simulation algorithm and lattice kinetic Monte Carlo algorithms, respectively. However, the two frameworks have remained disconnected. We show the equivalence of these frameworks whereby the stochastic lattice kinetics reduces to effective well-mixed kinetics in the limit of fast diffusion. In the latter, the lattice structure appears implicitly, as the lumped rate of bimolecular reactions depends on the number of neighbors of a site on the lattice. Moreover, we propose a mapping between the stochastic propensities and the deterministic rates of the well-mixed vessel and lattice dynamics that illustrates the hierarchy of models and the key parameters that enable model reduction.

  8. Stochastic Simulation of Turing Patterns

    NASA Astrophysics Data System (ADS)

    Fu, Zheng-Ping; Xu, Xin-Hang; Wang, Hong-Li; Ouyang, Qi

    2008-04-01

    We investigate the effects of intrinsic noise on Turing pattern formation near the onset of bifurcation from the homogeneous state to Turing pattern in the reaction-diffusion Brusselator. By performing stochastic simulations of the master equation and using Gillespie's algorithm, we check the spatiotemporal behaviour influenced by internal noises. We demonstrate that the patterns of occurrence frequency for the reaction and diffusion processes are also spatially ordered and temporally stable. Turing patterns are found to be robust against intrinsic fluctuations. Stochastic simulations also reveal that under the influence of intrinsic noises, the onset of Turing instability is advanced in comparison to that predicted deterministically.

  9. Partial ASL extensions for stochastic programming.

    SciTech Connect

    Gay, David

    2010-03-31

    Partially completed extensions to the AMPL/solver interface library (ASL) for stochastic programming, intended for modeling and experimenting with stochastic recourse problems. This software is not primarily for military applications.

  10. Theory, technology, and technique of stochastic cooling

    SciTech Connect

    Marriner, J.

    1993-10-01

    The theory and technological implementation of stochastic cooling is described. Theoretical and technological limitations are discussed. Data from existing stochastic cooling systems are shown to illustrate some useful techniques.

  11. The Hamiltonian Mechanics of Stochastic Acceleration

    SciTech Connect

    Burby, J. W.

    2013-07-17

    We show how to find the physical Langevin equation describing the trajectories of particles undergoing collisionless stochastic acceleration. These stochastic differential equations retain not only one-, but two-particle statistics, and inherit the Hamiltonian nature of the underlying microscopic equations. This opens the door to using stochastic variational integrators to perform simulations of stochastic interactions such as Fermi acceleration. We illustrate the theory by applying it to two example problems.

  12. Transport in a stochastic magnetic field

    SciTech Connect

    White, R.B.; Wu, Yanlin; Rax, J.M.

    1992-01-01

    Collisional heat transport in a stochastic magnetic field configuration is investigated. Well above stochastic threshold, a numerical solution of a Chirikov-Taylor model shows a short-time nonlocal regime, but at large time the Rechester-Rosenbluth effective diffusion is confirmed. Near stochastic threshold, subdiffusive behavior is observed for short mean free paths. The nature of this subdiffusive behavior is understood in terms of the spectrum of islands in the stochastic sea.

  14. On the forward-backward-in-time approach for Monte Carlo solution of Parker's transport equation: One-dimensional case

    NASA Astrophysics Data System (ADS)

    Bobik, P.; Boschini, M. J.; Della Torre, S.; Gervasi, M.; Grandi, D.; La Vacca, G.; Pensotti, S.; Putis, M.; Rancoita, P. G.; Rozza, D.; Tacconi, M.; Zannoni, M.

    2016-05-01

    The propagation of cosmic rays inside the heliosphere is well described by a transport equation introduced by Parker in 1965. Several approaches have been followed in the past to solve this equation. Recently, a Monte Carlo approach has become widely used owing to its advantages with respect to other numerical methods. In this approach the transport equation is associated with a fully equivalent set of stochastic differential equations (SDEs). This set is used to describe the stochastic path of a quasi-particle from a source, e.g., the interstellar space, to a specific target, e.g., a detector at Earth. We present a comparison of forward-in-time and backward-in-time methods to solve the cosmic-ray transport equation in the heliosphere. The Parker equation and the related set of SDEs in their several formulations are treated in this paper. For the sake of clarity, this work is focused on the one-dimensional solutions. Results were compared with an alternative numerical solution, namely the Crank-Nicolson method, specifically developed for the case under study. The methods presented are fully consistent with each other for energies greater than 400 MeV. The comparison between stochastic integrations and Crank-Nicolson allows us to estimate the systematic uncertainties of Monte Carlo methods. The forward-in-time stochastic integration method showed a systematic uncertainty <5%, while the backward-in-time stochastic integration method showed a systematic uncertainty <1% in the studied energy range.

  15. Stochastic architecture for Hopfield neural nets

    NASA Technical Reports Server (NTRS)

    Pavel, Sandy

    1992-01-01

    An expandable stochastic digital architecture for recurrent (Hopfield-like) neural networks is proposed. The main features and basic principles of stochastic processing are presented. The stochastic digital architecture is based on a chip with n fully interconnected neurons and a pipelined, bit-level processing structure. For large applications, a flexible way to interconnect many such chips is provided.

  16. Estimating stepwise debromination pathways of polybrominated diphenyl ethers with an analogue Markov Chain Monte Carlo algorithm.

    PubMed

    Zou, Yonghong; Christensen, Erik R; Zheng, Wei; Wei, Hua; Li, An

    2014-11-01

    A stochastic process was developed to simulate the stepwise debromination pathways for polybrominated diphenyl ethers (PBDEs). The stochastic process uses an analogue Markov Chain Monte Carlo (AMCMC) algorithm to generate PBDE debromination profiles. The acceptance or rejection of the randomly drawn stepwise debromination reactions was determined by a maximum likelihood function. The experimental observations at certain time points were used as target profiles; therefore, the stochastic processes are capable of representing the effects of reaction conditions on the selection of debromination pathways. The application of the model is illustrated by adopting the experimental results of decabromodiphenyl ether (BDE209) in hexane exposed to sunlight. Inferences that were not obvious from experimental data were suggested by model simulations. For example, BDE206 shows much higher accumulation during the first 30 min of sunlight exposure; by contrast, model simulation suggests that BDE206 and BDE207 had comparable yields from BDE209. The reason for the higher BDE206 level is that BDE207 has the highest depletion in producing octa products. Compared to a previous version of the stochastic model based on stochastic reaction sequences (SRS), the AMCMC approach was determined to be more efficient and robust. Because it requires only experimental observations as input, the AMCMC model is expected to be applicable to a wide range of PBDE debromination processes, e.g. microbial, photolytic, or joint effects in natural environments. PMID:25113201

  18. Implementation of Chord Length Sampling for Transport Through a Binary Stochastic Mixture

    SciTech Connect

    T.J. Donovan; T.M. Sutton; Y. Danon

    2002-11-18

    Neutron transport through a special case stochastic mixture is examined, in which spheres of constant radius are uniformly mixed in a matrix material. A Monte Carlo algorithm previously proposed and examined in 2-D has been implemented in a test version of MCNP. The Limited Chord Length Sampling (LCLS) technique provides a means for modeling a binary stochastic mixture as a cell in MCNP. When inside a matrix cell, LCLS uses chord-length sampling to sample the distance to the next stochastic sphere. After a surface crossing into a stochastic sphere, transport is treated explicitly until the particle exits or is killed. Results were computed for a simple model with two different fixed neutron source distributions and three sets of material number densities. Stochastic spheres were modeled as black absorbers and varying degrees of scattering were introduced in the matrix material. Tallies were computed using the LCLS capability and by averaging results obtained from multiple realizations of the random geometry. Results were compared for accuracy and figures of merit were compared to indicate the efficiency gain of the LCLS method over the benchmark method. Results show that LCLS provides very good accuracy if the scattering optical thickness of the matrix is small (≤ 1). Comparisons of figures of merit show an advantage to LCLS varying between factors of 141 and 5. LCLS efficiency and accuracy relative to the benchmark both decrease as scattering is increased in the matrix.
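
    A minimal sketch of chord-length sampling in this setting: black absorber spheres in a scattering slab, with the distance to the next sphere drawn from an exponential distribution whose mean is the matrix chord length. The one-dimensional slab treatment, cross sections, and packing parameters are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(11)
      r, f = 0.1, 0.05          # sphere radius and volume fraction (assumed)
      sigma_s = 0.5             # matrix scattering cross section (1/cm)
      lam = 4.0 * r * (1.0 - f) / (3.0 * f)   # mean matrix chord length to a sphere
      slab = 5.0                # slab thickness (cm)

      def transmission(n=20_000):
          leaked = 0
          for _ in range(n):
              x, mu = 0.0, 1.0                         # position, direction cosine
              while True:
                  d_scatter = rng.exponential(1.0 / sigma_s)
                  d_sphere = rng.exponential(lam)      # chord-length sample
                  d_boundary = (slab - x) / mu if mu > 0 else x / -mu
                  if d_boundary <= min(d_scatter, d_sphere):
                      leaked += mu > 0                 # escaped through the far face
                      break
                  if d_sphere < d_scatter:
                      break                            # entered a black sphere: absorbed
                  x += mu * d_scatter                  # stream to the collision site
                  mu = rng.uniform(-1.0, 1.0)          # isotropic scatter
          return leaked / n

      print("transmission probability:", transmission())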

  19. A Survey of Stochastic Simulation and Optimization Methods in Signal Processing

    NASA Astrophysics Data System (ADS)

    Pereyra, Marcelo; Schniter, Philip; Chouzenoux, Emilie; Pesquet, Jean-Christophe; Tourneret, Jean-Yves; Hero, Alfred O.; McLaughlin, Steve

    2016-03-01

    Modern signal processing (SP) methods rely very heavily on probability and statistics to solve challenging SP problems. SP methods are now expected to deal with ever more complex models, requiring ever more sophisticated computational inference techniques. This has driven the development of statistical SP methods based on stochastic simulation and optimization. Stochastic simulation and optimization algorithms are computationally intensive tools for performing statistical inference in models that are analytically intractable and beyond the scope of deterministic inference methods. They have been recently successfully applied to many difficult problems involving complex statistical models and sophisticated (often Bayesian) statistical inference techniques. This survey paper offers an introduction to stochastic simulation and optimization methods in signal and image processing. The paper addresses a variety of high-dimensional Markov chain Monte Carlo (MCMC) methods as well as deterministic surrogate methods, such as variational Bayes, the Bethe approach, belief and expectation propagation and approximate message passing algorithms. It also discusses a range of optimization methods that have been adopted to solve stochastic problems, as well as stochastic methods for deterministic optimization. Subsequently, areas of overlap between simulation and optimization, in particular optimization-within-MCMC and MCMC-driven optimization are discussed.

  20. Stochastic Energy Deployment System

    2011-11-30

    SEDS is an economy-wide energy model of the U.S. The model captures dynamics between supply, demand, and pricing of the major energy types consumed and produced within the U.S. These dynamics are captured by including: the effects of macroeconomics; the resources and costs of primary energy types such as oil, natural gas, coal, and biomass; the conversion of primary fuels into energy products like petroleum products, electricity, biofuels, and hydrogen; and lastly the end-use consumption attributable to residential and commercial buildings, light and heavy transportation, and industry. Projections from SEDS extend to the year 2050 by one-year time steps and are generally projected at the national level. SEDS differs from other economy-wide energy models in that it explicitly accounts for uncertainty in technology, markets, and policy. SEDS has been specifically developed to avoid the computational burden, and sometimes fruitless labor, that comes from modeling significantly low-level details. Instead, SEDS focuses on the major drivers within the energy economy and evaluates the impact of uncertainty around those drivers.

  1. Stochastic resonance on a circle

    SciTech Connect

    Wiesenfeld, K.; Pierson, D.; Pantazelou, E.; Dames, C.; Moss, F.

    1994-04-04

    We describe a new realization of stochastic resonance, applicable to a broad class of systems, based on an underlying excitable dynamics with deterministic reinjection. A simple but general theory of such "single-trigger" systems is compared with analog simulations of the Fitzhugh-Nagumo model, as well as experimental data obtained from stimulated sensory neurons in the crayfish.

  2. Universality in Stochastic Exponential Growth

    NASA Astrophysics Data System (ADS)

    Iyer-Biswas, Srividya; Crooks, Gavin E.; Scherer, Norbert F.; Dinner, Aaron R.

    2014-07-01

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.

  3. Path integral approach to closed-form option pricing formulas with applications to stochastic volatility and interest rate models

    NASA Astrophysics Data System (ADS)

    Lemmens, D.; Wouters, M.; Tempere, J.; Foulon, S.

    2008-07-01

    We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.
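
    For orientation, here is the standard benchmark such closed-form results are checked against: a plain Monte Carlo price for a European call under Black-Scholes dynamics versus the closed-form formula. The parameters are illustrative, and this is not the paper's stochastic volatility model.

      import numpy as np
      from math import exp, log, sqrt
      from statistics import NormalDist

      # European call under Black-Scholes dynamics: closed form vs. Monte Carlo.
      s0, k, r, sigma, t = 100.0, 105.0, 0.05, 0.2, 1.0

      d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
      d2 = d1 - sigma * sqrt(t)
      N = NormalDist().cdf
      closed_form = s0 * N(d1) - k * exp(-r * t) * N(d2)

      rng = np.random.default_rng(5)
      z = rng.standard_normal(1_000_000)                  # terminal Gaussian draws
      s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * sqrt(t) * z)
      mc = exp(-r * t) * np.maximum(s_t - k, 0.0).mean()  # discounted payoff

      print(f"closed form: {closed_form:.4f}   Monte Carlo: {mc:.4f}")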

  4. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  5. Stochastic Parallel PARticle Kinetic Simulator

    2008-07-01

    SPPARKS is a kinetic Monte Carlo simulator which implements kinetic and Metropolis Monte Carlo solvers in a general way so that they can be hooked to applications of various kinds. Specific applications are implemented in SPPARKS as physical models which generate events (e.g. a diffusive hop or chemical reaction) and execute them one-by-one. Applications can run in parallel so long as the simulation domain can be partitioned spatially so that multiple events can be invoked simultaneously. SPPARKS is used to model various kinds of mesoscale materials science scenarios such as grain growth, surface deposition and growth, and reaction kinetics. It can also be used to develop new Monte Carlo models that hook into the existing solver and parallel infrastructure provided by the code.

  6. Monte-Carlo simulations of chemical reactions in molecular crystals

    NASA Astrophysics Data System (ADS)

    Even, J.; Bertault, M.

    1999-01-01

    Chemical reactions in molecular crystals, yielding new entities (dimers, trimers,…, polymers) in the original structure, are simulated for the first time by stochastic Monte Carlo methods. The results are compared with those obtained by deterministic methods and show that numerical simulation is a tool for understanding the evolution of these mixed systems, which are under kinetic rather than thermodynamic control. Reactive site distributions, x-ray diffuse scattering, and chain length distributions can be simulated. Comparisons are made with deterministic models and experimental results obtained for the solid-state dimerization of cinnamic acid in the beta phase and for the solid-state polymerization of diacetylenes.

  7. Reactive Monte Carlo sampling with an ab initio potential

    NASA Astrophysics Data System (ADS)

    Leiding, Jeff; Coe, Joshua D.

    2016-05-01

    We present the first application of reactive Monte Carlo in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.

  8. Accelerating particle-in-cell simulations using multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Ricketson, Lee

    2015-11-01

    Particle-in-cell (PIC) simulations have been an important tool in understanding plasmas since the dawn of the digital computer. Much more recently, the multilevel Monte Carlo (MLMC) method has accelerated particle-based simulations of a variety of systems described by stochastic differential equations (SDEs), from financial portfolios to porous media flow. The fundamental idea of MLMC is to perform correlated particle simulations using a hierarchy of different time steps, and to use these correlations for variance reduction on the fine-step result. This framework is directly applicable to the Langevin formulation of Coulomb collisions, as demonstrated in previous work, but in order to apply to PIC simulations of realistic scenarios, MLMC must be generalized to incorporate self-consistent evolution of the electromagnetic fields. We present such a generalization, with rigorous results concerning its accuracy and efficiency. We present examples of the method in the collisionless, electrostatic context, and discuss applications and extensions for the future.
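
    The core MLMC idea, coupled coarse/fine paths sharing the same noise combined in a telescoping sum over levels, can be shown on a scalar SDE; the geometric Brownian motion model and sample allocations here are illustrative and far simpler than the field-coupled PIC setting described in the talk.

      import numpy as np

      rng = np.random.default_rng(9)
      mu, sigma, s0, T = 0.05, 0.2, 1.0, 1.0   # GBM drift, volatility, start, horizon

      def level_estimator(level, n_paths):
          """E[P_l - P_{l-1}] from coupled fine/coarse Euler paths (payoff P = S_T)."""
          n_fine = 2 ** level
          dt = T / n_fine
          dw = rng.normal(scale=np.sqrt(dt), size=(n_fine, n_paths))
          s_f = np.full(n_paths, s0)
          for k in range(n_fine):
              s_f += mu * s_f * dt + sigma * s_f * dw[k]
          if level == 0:
              return s_f.mean()
          s_c = np.full(n_paths, s0)
          for k in range(0, n_fine, 2):         # coarse path reuses the same noise
              s_c += mu * s_c * (2 * dt) + sigma * s_c * (dw[k] + dw[k + 1])
          return (s_f - s_c).mean()

      # Telescoping sum: many cheap coarse samples, few expensive fine ones.
      samples = [100_000, 20_000, 5_000]
      estimate = sum(level_estimator(l, n) for l, n in enumerate(samples))
      print(f"MLMC estimate of E[S_T]: {estimate:.4f}  (exact {s0 * np.exp(mu * T):.4f})")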

  9. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    NASA Technical Reports Server (NTRS)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a powerful methodology for estimating the full residual uncertainty in network weights, and therefore network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.

  10. Neutron monitor generated data distributions in quantum variational Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kussainov, A. S.; Pya, N.

    2016-08-01

    We have assessed the potential applications of neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, one could always settle for an efficient seed generator to feed into a faster algorithmic random number generator or create a buffer.
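
    A small sketch of the detrend-and-transform pipeline described here, applied to synthetic counts standing in for a monitor channel; the trend model, smoothing factor, and use of the normal CDF as the transform to uniforms are assumptions.

      import numpy as np
      from scipy.interpolate import UnivariateSpline
      from scipy.stats import norm

      rng = np.random.default_rng(13)
      # Synthetic stand-in for one acquisition channel: slow trend plus noise.
      t = np.arange(1440.0)                                  # one day of 1-min counts
      raw = 6000 + 40 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 25, t.size)

      trend = UnivariateSpline(t, raw, s=t.size * 25**2)(t)  # smooth spline fit
      resid = raw - trend                                    # stochastic component
      z = (resid - resid.mean()) / resid.std()               # ~ standard normal draws

      u = norm.cdf(z)              # probability integral transform to U(0, 1)
      print(f"uniform sample: mean = {u.mean():.3f}, var = {u.var():.4f} "
            f"(ideal 0.5 and {1/12:.4f})")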

  11. A higher-order numerical framework for stochastic simulation of chemical reaction systems

    PubMed Central

    2012-01-01

    Background: In this paper, we present a framework for improving the accuracy of fixed-step methods for Monte Carlo simulation of discrete stochastic chemical kinetics. Stochasticity is ubiquitous in many areas of cell biology, for example in gene regulation, biochemical cascades and cell-cell interaction. However, most discrete stochastic simulation techniques are slow. We apply Richardson extrapolation to the moments of three fixed-step methods, the Euler, midpoint and θ-trapezoidal τ-leap methods, to demonstrate the power of stochastic extrapolation. The extrapolation framework can increase the order of convergence of any fixed-step discrete stochastic solver and is very easy to implement; the only condition for its use is knowledge of the appropriate terms of the global error expansion of the solver in terms of its stepsize. In practical terms, a higher-order method with a larger stepsize can achieve the same level of accuracy as a lower-order method with a smaller one, potentially reducing the computational time of the system. Results: By obtaining a global error expansion for a general weak first-order method, we prove that extrapolation can increase the weak order of convergence for the moments of the Euler and the midpoint τ-leap methods from one to two. This is supported by numerical simulations of several chemical systems of biological importance using the Euler, midpoint and θ-trapezoidal τ-leap methods. In almost all cases, extrapolation results in an improvement of accuracy. As in the case of ordinary and stochastic differential equations, extrapolation can be repeated to obtain even higher-order approximations. Conclusions: Extrapolation is a general framework for increasing the order of accuracy of any fixed-step stochastic solver. This enables the simulation of complicated systems in less time, allowing for more realistic biochemical problems to be solved. PMID:23256696
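
    A minimal sketch of the idea for the Euler τ-leap method, on the decay reaction A → ∅ (whose exact mean is x0·e^(-ct)): assuming a global weak error expansion m(τ) = m + Kτ + O(τ²), the combination 2·m(τ/2) − m(τ) cancels the leading error term. Parameters are illustrative.

        import numpy as np

        # Euler tau-leap for A -> 0 with rate constant c, plus Richardson
        # extrapolation of the mean to weak order two.

        def tau_leap_mean(x0, c, T, tau, n_paths, rng):
            x = np.full(n_paths, x0, dtype=np.int64)
            for _ in range(int(round(T / tau))):
                fired = rng.poisson(c * x * tau)     # Poisson number of events
                x = np.maximum(x - fired, 0)         # decay, clipped at zero
            return x.mean()

        rng = np.random.default_rng(3)
        x0, c, T = 1000, 1.0, 1.0
        m_tau  = tau_leap_mean(x0, c, T, 0.1,  200000, rng)
        m_half = tau_leap_mean(x0, c, T, 0.05, 200000, rng)

        extrapolated = 2.0 * m_half - m_tau          # leading tau error cancels
        print(m_tau, m_half, extrapolated, x0 * np.exp(-c * T))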

  12. Mineralogy of Libya Montes, Mars

    NASA Astrophysics Data System (ADS)

    Perry, K. A.; Bishop, J. L.; McKeown, N. K.

    2009-12-01

    Observations by CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) have revealed a range of minerals in Libya Montes including olivine, pyroxene, and phyllosilicate [1]. Here we extend our spectral analyses of CRISM images in Libya Montes to identify carbonates. We have also performed detailed characterization of the spectral signature of the phyllosilicate- and carbonate-bearing outcrops in order to constrain the types of phyllosilicates and carbonates present. Phyllosilicate-bearing rocks in Libya Montes have spectral bands at 1.42, 2.30 and 2.39 µm, consistent with Fe- and Mg- bearing smectites. The mixture of Fe and Mg in Libya Montes may be within the clay mineral structure or within the CRISM pixel. Because the pixels have 18 meter/pixel spatial resolution, it is possible that the bands observed are due to the mixing of nontronite and saponite rather than a smectite with both Fe and Mg. Carbonates found in Libya Montes are similar to those found in Nili Fossae [2]. The carbonates have bands centered at 2.30 and 2.52 µm. Libya Montes carbonates most closely resemble the Mg-carbonate, magnesite. Olivine spectra are seen throughout Libya Montes, characterized by a positive slope from 1.2-1.8 µm. Large outcrops of olivine are relatively rare on Mars [3]. This implies that fresh bedrock has been recently exposed because olivine weathers readily compared to pyroxene and feldspar. Pyroxene in Libya Montes resembles an Fe-bearing orthopyroxene with a broad band centered at 1.82 µm. The lowermost unit identified in Libya Montes is a clay-bearing unit. Overlying this is a carbonate-bearing unit with a clear unit division visible in at least one CRISM image. An olivine-bearing unit unconformably overlies these two units and may represent a drape related to the Isidis impact, as suggested for Nili Fossae [2]. However, it appears that the carbonate in Libya Montes is an integral portion of the rock underlying the olivine-bearing unit rather than an

  13. MontePython: Implementing Quantum Monte Carlo using Python

    NASA Astrophysics Data System (ADS)

    Nilsen, Jon Kristian

    2007-11-01

    We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which QMC is applied, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible. Program summary: Program title: MontePython Catalogue identifier: ADZP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 49 519 No. of bytes in distributed program, including test data, etc.: 114 484 Distribution format: tar.gz Programming language: C++, Python Computer: PC, IBM RS6000/320, HP, ALPHA Operating system: LINUX Has the code been vectorised or parallelized?: Yes, parallelized with MPI Number of processors used: 1-96 RAM: Depends on physical system to be simulated Classification: 7.6; 16.1 Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb Solution method: Quantum Monte Carlo Running time: 225 min with 20 particles (with 4800 walkers moved in 1750 time steps) on 1 AMD Opteron TM Processor 2218 processor; Production run for, e.g., 200 particles takes around 24 hours on 32 such processors.

  14. A stochastic model for the analysis of maximum daily temperature

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.

    2016-08-01

    In this paper, a stochastic model for the analysis of daily maximum temperature is proposed. First, a deseasonalization procedure based on a truncated Fourier expansion is adopted. Then, Johnson transformation functions are applied for data normalization. Finally, a fractional autoregressive integrated moving average (FARIMA) model is used to reproduce both the short- and long-memory behavior of the temperature series. The model was applied to the data of the Cosenza gauge (Calabria region) and verified on four other gauges of southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 10^5 years of daily maximum temperature have been generated. Among the possible applications of the model, the occurrence probabilities of the annual maximum values have been evaluated. Moreover, the procedure was applied to estimate the return periods of long sequences of days with maximum temperature above prefixed thresholds.
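
    The deseasonalization step can be sketched as a least-squares fit of a truncated Fourier expansion; synthetic data stand in for the gauge series, and the later Johnson-transformation and FARIMA steps are not shown.

        import numpy as np

        # Fit a truncated Fourier expansion (annual cycle plus a few harmonics)
        # to daily maximum temperatures by least squares and subtract it.

        rng = np.random.default_rng(5)
        days = np.arange(3650)                           # ten years of daily data
        temp = (20 + 8 * np.cos(2 * np.pi * (days - 200) / 365.25)
                + rng.normal(0, 2, days.size))

        K = 3                                            # harmonics kept
        omega = 2 * np.pi / 365.25
        cols = [np.ones(days.size)]
        for k in range(1, K + 1):
            cols += [np.cos(k * omega * days), np.sin(k * omega * days)]
        A = np.column_stack(cols)

        coef, *_ = np.linalg.lstsq(A, temp, rcond=None)  # least-squares fit
        season = A @ coef
        residual = temp - season                         # deseasonalized series
        print(residual.mean(), residual.std())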

  15. Stochastic Particle Real Time Analyzer (SPARTA) Validation and Verification Suite

    SciTech Connect

    Gallis, Michael A.; Koehler, Timothy P.; Plimpton, Steven J.

    2014-10-01

    This report presents the test cases used to verify, validate and demonstrate the features and capabilities of the first release of the 3D Direct Simulation Monte Carlo (DSMC) code SPARTA (Stochastic Real Time Particle Analyzer). The test cases included in this report exercise the most critical capabilities of the code like the accurate representation of physical phenomena (molecular advection and collisions, energy conservation, etc.) and implementation of numerical methods (grid adaptation, load balancing, etc.). Several test cases of simple flow examples are shown to demonstrate that the code can reproduce phenomena predicted by analytical solutions and theory. A number of additional test cases are presented to illustrate the ability of SPARTA to model flow around complicated shapes. In these cases, the results are compared to other well-established codes or theoretical predictions. This compilation of test cases is not exhaustive, and it is anticipated that more cases will be added in the future.

  16. Stochastic simulation algorithm for the quantum linear Boltzmann equation.

    PubMed

    Busse, Marc; Pietrulewicz, Piotr; Breuer, Heinz-Peter; Hornberger, Klaus

    2010-08-01

    We develop a Monte Carlo wave function algorithm for the quantum linear Boltzmann equation, a Markovian master equation describing the quantum motion of a test particle interacting with the particles of an environmental background gas. The algorithm leads to a numerically efficient stochastic simulation procedure for the most general form of this integrodifferential equation, which involves a five-dimensional integral over microscopically defined scattering amplitudes that account for the gas interactions in a nonperturbative fashion. The simulation technique is used to assess various limiting forms of the quantum linear Boltzmann equation, such as the limits of pure collisional decoherence and quantum Brownian motion, the Born approximation, and the classical limit. Moreover, we extend the method to allow for the simulation of the dissipative and decohering dynamics of superpositions of spatially localized wave packets, which enables the study of many physically relevant quantum phenomena, occurring e.g., in the interferometry of massive particles.
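
    The unravelling strategy behind such Monte Carlo wave function algorithms can be illustrated on the simplest possible case, a driven two-level system with decay; the quantum linear Boltzmann equation involves far richer, momentum-dependent jump operators, so this is only the skeleton of the stochastic procedure.

        import numpy as np

        # Quantum jump (Monte Carlo wave function) sketch for a driven
        # two-level system with decay, Lindblad operator sqrt(gamma)*sigma_minus.
        # Basis: e = (1,0) excited, g = (0,1) ground. Parameters illustrative.

        rng = np.random.default_rng(11)
        gamma, omega, dt, T = 1.0, 2.0, 0.002, 4.0
        e = np.array([1, 0], dtype=complex)
        g = np.array([0, 1], dtype=complex)
        sm = np.outer(g, e)                              # sigma_minus = |g><e|
        H = 0.5 * omega * (np.outer(e, g) + np.outer(g, e))   # Rabi drive
        H_eff = H - 0.5j * gamma * np.outer(e, e)        # non-Hermitian drift

        steps, n_traj = int(T / dt), 500
        pop = np.zeros(steps)                            # excited population
        for _ in range(n_traj):
            psi = g.copy()                               # start in the ground state
            for k in range(steps):
                pe = abs(psi[0])**2
                pop[k] += pe / n_traj
                if rng.random() < gamma * pe * dt:       # quantum jump
                    psi = sm @ psi
                else:                                    # no-jump evolution
                    psi = psi - 1j * dt * (H_eff @ psi)
                psi /= np.linalg.norm(psi)

        print(pop[-1])   # approximates the master-equation excited population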

  17. Stochastic Simulations of Pattern Formation in Excitable Media

    PubMed Central

    Vigelius, Matthias; Meyer, Bernd

    2012-01-01

    We present a method for mesoscopic, dynamic Monte Carlo simulations of pattern formation in excitable reaction–diffusion systems. Using a two-level parallelization approach, our simulations cover the whole range of the parameter space, from the noise-dominated low-particle number regime to the quasi-deterministic high-particle number limit. Three qualitatively different case studies are performed that stand exemplary for the wide variety of excitable systems. We present mesoscopic stochastic simulations of the Gray-Scott model, of a simplified model for intracellular Ca oscillations and, for the first time, of the Oregonator model. We achieve simulations with up to particles. The software and the model files are freely available and researchers can use the models to reproduce our results or adapt and refine them for further exploration. PMID:22900025

  18. Stochastic many-body perturbation theory for anharmonic molecular vibrations

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew R.; Hirata, So

    2014-08-01

    A new quantum Monte Carlo (QMC) method for anharmonic vibrational zero-point energies and transition frequencies is developed, which combines the diagrammatic vibrational many-body perturbation theory based on the Dyson equation with Monte Carlo integration. The infinite sums of the diagrammatic and thus size-consistent first- and second-order anharmonic corrections to the energy and self-energy are expressed as sums of a few m- or 2m-dimensional integrals of wave functions and a potential energy surface (PES) (m is the vibrational degrees of freedom). Each of these integrals is computed as the integrand (including the value of the PES) divided by the value of a judiciously chosen weight function evaluated on demand at geometries distributed randomly but according to the weight function via the Metropolis algorithm. In this way, the method completely avoids cumbersome evaluation and storage of high-order force constants necessary in the original formulation of the vibrational perturbation theory; it furthermore allows even higher-order force constants essentially up to an infinite order to be taken into account in a scalable, memory-efficient algorithm. The diagrammatic contributions to the frequency-dependent self-energies that are stochastically evaluated at discrete frequencies can be reliably interpolated, allowing the self-consistent solutions to the Dyson equation to be obtained. This method, therefore, can compute directly and stochastically the transition frequencies of fundamentals and overtones as well as their relative intensities as pole strengths, without fixed-node errors that plague some QMC. It is shown that, for an identical PES, the new method reproduces the correct deterministic values of the energies and frequencies within a few cm-1 and pole strengths within a few thousandths. With the values of a PES evaluated on the fly at random geometries, the new method captures a noticeably greater proportion of anharmonic effects.
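
    The core estimator, an integral rewritten as an expectation of integrand over weight under Metropolis sampling of the weight, can be sketched in one dimension; f and w below are stand-ins for the wave-function- and PES-dependent integrands of the method, with w chosen normalized so that f/w can be averaged directly.

        import numpy as np

        # Estimate I = integral of f(x) dx as E_w[f(X)/w(X)], with X drawn
        # from the normalized weight w via the Metropolis algorithm so that w
        # only needs to be evaluated pointwise "on demand". Here f(x) =
        # exp(-x^2), whose exact integral is sqrt(pi), and w is standard normal.

        def f(x):
            return np.exp(-x**2)

        def w(x):                                        # normalized weight
            return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

        rng = np.random.default_rng(2)
        x, scores = 0.0, []
        for step in range(200000):
            prop = x + rng.normal(0, 1.0)                # symmetric proposal
            if rng.random() < w(prop) / w(x):            # Metropolis acceptance
                x = prop
            scores.append(f(x) / w(x))                   # integrand / weight

        print(np.mean(scores[10000:]), np.sqrt(np.pi))   # estimate vs exact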

  19. Stochastic many-body perturbation theory for anharmonic molecular vibrations

    SciTech Connect

    Hermes, Matthew R.; Hirata, So

    2014-08-28

    A new quantum Monte Carlo (QMC) method for anharmonic vibrational zero-point energies and transition frequencies is developed, which combines the diagrammatic vibrational many-body perturbation theory based on the Dyson equation with Monte Carlo integration. The infinite sums of the diagrammatic and thus size-consistent first- and second-order anharmonic corrections to the energy and self-energy are expressed as sums of a few m- or 2m-dimensional integrals of wave functions and a potential energy surface (PES) (m is the vibrational degrees of freedom). Each of these integrals is computed as the integrand (including the value of the PES) divided by the value of a judiciously chosen weight function evaluated on demand at geometries distributed randomly but according to the weight function via the Metropolis algorithm. In this way, the method completely avoids cumbersome evaluation and storage of high-order force constants necessary in the original formulation of the vibrational perturbation theory; it furthermore allows even higher-order force constants essentially up to an infinite order to be taken into account in a scalable, memory-efficient algorithm. The diagrammatic contributions to the frequency-dependent self-energies that are stochastically evaluated at discrete frequencies can be reliably interpolated, allowing the self-consistent solutions to the Dyson equation to be obtained. This method, therefore, can compute directly and stochastically the transition frequencies of fundamentals and overtones as well as their relative intensities as pole strengths, without fixed-node errors that plague some QMC. It is shown that, for an identical PES, the new method reproduces the correct deterministic values of the energies and frequencies within a few cm−1 and pole strengths within a few thousandths. With the values of a PES evaluated on the fly at random geometries, the new method captures a noticeably greater proportion of anharmonic effects.

  20. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level hL. In addition, the expectation cannot be computed analytically, and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h0 > h1 > ... > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that, under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
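
    The telescoping identity invoked above can be written out explicitly. With η_l denoting the distribution at discretisation level h_l and g a quantity of interest (notation introduced here for illustration, not taken from the paper):

        \mathbb{E}_{\eta_L}[g] \;=\; \mathbb{E}_{\eta_0}[g]
            \;+\; \sum_{l=1}^{L} \big( \mathbb{E}_{\eta_l}[g] - \mathbb{E}_{\eta_{l-1}}[g] \big)

    Each bracketed difference is estimated with samples coupled across the two adjacent levels, so its variance shrinks as h_l decreases; the SMC version replaces the unavailable i.i.d. sampling at each level with a particle approximation of the same sequence of distributions.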

  1. Monte Carlo calculations of nuclei

    SciTech Connect

    Pieper, S.C.

    1997-10-01

    Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

  2. Monte Carlo Methods in ICF (LIRPP Vol. 13)

    NASA Astrophysics Data System (ADS)

    Zimmerman, George B.

    2016-10-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but its efficiency can be improved by angular biasing of the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  3. Stochastic weighted particle methods for population balance equations

    SciTech Connect

    Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus

    2011-08-10

    Highlights: • Weight transfer functions for Monte Carlo simulation of coagulation. • Efficient support for single-particle growth processes. • Comparisons to analytic solutions and soot formation problems. • Better numerical accuracy for less common particles. Abstract: A class of coagulation weight transfer functions is constructed, each member of which leads to a stochastic particle algorithm for the numerical treatment of population balance equations. These algorithms are based on systems of weighted computational particles and the weight transfer functions are constructed such that the number of computational particles does not change during coagulation events. The algorithms also facilitate the simulation of physical processes that change single particles, such as growth, or other surface reactions. Four members of the algorithm family have been numerically validated by comparison to analytic solutions to simple problems. Numerical experiments have been performed for complex laminar premixed flame systems in which members of the class of stochastic weighted particle methods were compared to each other and to a direct simulation algorithm. Two of the weighted algorithms have been shown to offer performance advantages over the direct simulation algorithm in situations where interest is focused on the larger particles in a system. The extent of this advantage depends on the particular system and on the quantities of interest.
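
    For orientation, a direct-simulation sketch of stochastic coagulation with a constant kernel is given below; the weighted algorithms of the paper replace the "remove one particle per event" step with a weight transfer so the computational particle count stays fixed, a refinement omitted here.

        import numpy as np

        # Direct simulation of coagulation with constant kernel K(x, y) = K0,
        # for which the mean particle number obeys N(t) = N0 / (1 + K0*N0*t/2).

        rng = np.random.default_rng(4)
        K0, N0, T = 1.0, 1000, 2.0 / 1000        # horizon chosen to halve N

        def run(n0, t_end):
            masses = np.ones(n0)
            t, n = 0.0, n0
            while n > 1:
                rate = K0 * n * (n - 1) / 2.0    # total coagulation rate
                t += rng.exponential(1.0 / rate)
                if t > t_end:
                    break
                i, j = rng.choice(n, size=2, replace=False)
                masses[i] += masses[j]           # merge j into i
                masses[j] = masses[n - 1]        # delete j (swap-remove)
                n -= 1
            return n

        runs = [run(N0, T) for _ in range(200)]
        print(np.mean(runs), N0 / (1 + K0 * N0 * T / 2))  # simulated vs analytic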

  4. Stochastic spatio-temporal modelling with PCRaster Python

    NASA Astrophysics Data System (ADS)

    Karssenberg, D.; Schmitz, O.; de Jong, K.

    2012-04-01

    PCRaster Python is a software framework for building spatio-temporal models of land surface processes (Karssenberg, Schmitz, Salamon, De Jong, & Bierkens, 2010; PCRaster, 2012). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations, developed in C++, are available to model builders as Python functions. Users create models by combining these functions in a Python script. As construction of large iterative models is often difficult and time consuming for non-specialists in programming, the software comes with a set of Python framework classes that provide control flow for static modelling, temporal modelling, stochastic modelling using Monte Carlo simulation, and data assimilation techniques including the Ensemble Kalman filter and the Particle Filter. A framework for integrating model components with different time steps and spatial discretization is currently available as a prototype (Schmitz, de Jong, & Karssenberg, in review). The software includes routines for prompt, interactive visualisation of stochastic spatio-temporal data, covering both model inputs and outputs. Visualisation techniques include animated maps, time series, probability distributions, and animated maps with exceedance probabilities. The PCRaster Python software is used by researchers from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows and Linux operating systems, and OS X (under development).

  5. Evaluation of Electric Power Procurement Strategies by Stochastic Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Saisho, Yuichi; Hayashi, Taketo; Fujii, Yasumasa; Yamaji, Kenji

    In deregulated electricity markets, the role of a distribution company is to purchase electricity from the wholesale electricity market at randomly fluctuating prices and to provide it to its customers at a given fixed price. The company therefore has to take on the risk stemming from the uncertainties of electricity prices and/or demand fluctuations instead of the customers. The way to avoid this risk is to make a bilateral contract with generating companies or to install its own power generation facility. This makes it necessary to develop a method for constructing an optimal strategy for electric power procurement. In this circumstance, this research proposes a mathematical method based on stochastic dynamic programming, additionally considering the characteristics of the start-up cost of electric power generation facilities, to evaluate strategies combining bilateral contracts and auto-generation with the company's own facility for procuring electric power in a deregulated electricity market. We first propose two approaches to solving the stochastic dynamic programming problem: a Monte Carlo simulation method, and a finite difference method for solving a partial differential equation for the total procurement cost of electric power. Finally, we discuss the influence of price uncertainty on optimal procurement strategies.

  6. Stochastic models of population extinction.

    PubMed

    Ovaskainen, Otso; Meerson, Baruch

    2010-11-01

    Theoretical ecologists have long sought to understand how the persistence of populations depends on biotic and abiotic factors. Classical work showed that demographic stochasticity causes the mean time to extinction to increase exponentially with population size, whereas variation in environmental conditions can lead to a power-law scaling. Recent work has focused especially on the influence of the autocorrelation structure ('color') of environmental noise. In theoretical physics, there is a burst of research activity in analyzing large fluctuations in stochastic population dynamics. This research provides powerful tools for determining extinction times and characterizing the pathway to extinction. It yields, therefore, sharp insights into extinction processes and has great potential for further applications in theoretical biology.

  7. Stochastic Aspects of Cardiac Arrhythmias

    NASA Astrophysics Data System (ADS)

    Lerma, Claudia; Krogh-Madsen, Trine; Guevara, Michael; Glass, Leon

    2007-07-01

    Abnormal cardiac rhythms (cardiac arrhythmias) often display complex changes over time that can have a random or haphazard appearance. Mathematically, these changes can on occasion be identified with bifurcations in difference or differential equation models of the arrhythmias. One source for the variability of these rhythms is the fluctuating environment. However, in the neighborhood of bifurcation points, the fluctuations induced by the stochastic opening and closing of individual ion channels in the cell membrane, which results in membrane noise, may lead to randomness in the observed dynamics. To illustrate this, we consider the effects of stochastic properties of ion channels on the resetting of pacemaker oscillations and on the generation of early afterdepolarizations. The comparison of the statistical properties of long records showing arrhythmias with the predictions from theoretical models should help in the identification of different mechanisms underlying cardiac arrhythmias.

  8. Wavelet entropy of stochastic processes

    NASA Astrophysics Data System (ADS)

    Zunino, L.; Pérez, D. G.; Garavaglia, M.; Rosso, O. A.

    2007-06-01

    We compare two different definitions for the wavelet entropy associated to stochastic processes. The first one, the normalized total wavelet entropy (NTWS) family [S. Blanco, A. Figliola, R.Q. Quiroga, O.A. Rosso, E. Serrano, Time-frequency analysis of electroencephalogram series, III. Wavelet packets and information cost function, Phys. Rev. E 57 (1998) 932-940; O.A. Rosso, S. Blanco, J. Yordanova, V. Kolev, A. Figliola, M. Schürmann, E. Başar, Wavelet entropy: a new tool for analysis of short duration brain electrical signals, J. Neurosci. Method 105 (2001) 65-75] and a second introduced by Tavares and Lucena [Physica A 357(1) (2005) 71-78]. In order to understand their advantages and disadvantages, exact results obtained for fractional Gaussian noise (-1 < α < 1) and fractional Brownian motion (1 < α < 3) are assessed. We find that the NTWS family performs better as a characterization method for these stochastic processes.

  9. Stochastic scanning multiphoton multifocal microscopy.

    PubMed

    Jureller, Justin E; Kim, Hee Y; Scherer, Norbert F

    2006-04-17

    Multiparticle tracking with scanning confocal and multiphoton fluorescence imaging is increasingly important for elucidating biological function, as in the transport of intracellular cargo-carrying vesicles. We demonstrate a simple rapid-sampling stochastic scanning multifocal multiphoton microscopy (SS-MMM) fluorescence imaging technique that enables multiparticle tracking without specialized hardware at rates 1,000 times greater than conventional single point raster scanning. Stochastic scanning of a diffractive optic generated 10x10 hexagonal array of foci with a white noise driven galvanometer yields a scan pattern that is random yet space-filling. SS-MMM creates a more uniformly sampled image with fewer spatio-temporal artifacts than obtained by conventional or multibeam raster scanning. SS-MMM is verified by simulation and experimentally demonstrated by tracking microsphere diffusion in solution. PMID:19516485

  10. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media

    SciTech Connect

    Müller, Florian; Jenny, Patrick; Meyer, Daniel W.

    2013-10-01

    Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.

  11. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    SciTech Connect

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against the other models, along with the truth model, for the example of the Brake-Reuss beam. The models are: a Taylor series using finite difference, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on the time required to evaluate each model, where the Meta-Model requires the least amount of time for computation by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use is dependent on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.

  12. Stochastic methods for uncertainty quantification in radiation transport

    SciTech Connect

    Fichtl, Erin D; Prinja, Anil K; Warsa, James S

    2009-01-01

    The use of generalized polynomial chaos (gPC) expansions is investigated for uncertainty quantification in radiation transport. The gPC represents second-order random processes in terms of an expansion of orthogonal polynomials of random variables and is used to represent the uncertain input(s) and unknown(s). We assume a single uncertain input, the total macroscopic cross section, although this does not represent a limitation of the approaches considered here. Two solution methods are examined: the Stochastic Finite Element Method (SFEM) and the Stochastic Collocation Method (SCM). The SFEM entails taking Galerkin projections onto the orthogonal basis, which, for fixed source problems, yields a linear system of fully coupled equations for the PC coefficients of the unknown. For k-eigenvalue calculations, the SFEM system is non-linear and a Newton-Krylov method is employed to solve it. The SCM utilizes a suitable quadrature rule to compute the moments or PC coefficients of the unknown(s); thus the SCM solution involves a series of independent deterministic transport solutions. The accuracy and efficiency of the two methods are compared and contrasted. The PC coefficients are used to compute the moments and probability density functions of the unknown(s), which are shown to be accurate by comparing with Monte Carlo results. Our work demonstrates that stochastic spectral expansions are a viable alternative to sampling-based uncertainty quantification techniques since both provide a complete characterization of the distribution of the flux and the k-eigenvalue. Furthermore, it is demonstrated that, unlike perturbation methods, SFEM and SCM can handle large parameter uncertainty.
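
    The SCM branch is easy to sketch for a single Gaussian input: each collocation point is an independent deterministic solve, and moments follow from Gauss-Hermite quadrature. The toy "solver" below is a simple attenuation law, not a transport code; all parameter values are illustrative.

        import numpy as np

        # Stochastic collocation for u(xi) = exp(-(Sigma0 + sigma*xi) * L)
        # with a Gaussian random total cross section; moments from quadrature
        # are checked against brute-force Monte Carlo.

        Sigma0, sigma, L = 1.0, 0.2, 1.0

        def solve(xi):                                   # "deterministic solver"
            return np.exp(-(Sigma0 + sigma * xi) * L)

        # probabilists' Gauss-Hermite rule (weight exp(-x^2/2)); its weights
        # sum to sqrt(2*pi), so normalize to get expectations
        pts, wts = np.polynomial.hermite_e.hermegauss(8)
        wts = wts / np.sqrt(2 * np.pi)

        u = solve(pts)
        mean = np.sum(wts * u)
        var = np.sum(wts * u**2) - mean**2

        rng = np.random.default_rng(9)
        samples = solve(rng.normal(0, 1, 200000))
        print(mean, samples.mean(), var, samples.var())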

  13. Stochastic background of atmospheric cascades

    SciTech Connect

    Wilk, G.; Wlodarczyk, Z.

    1993-06-15

    Fluctuations in the atmospheric cascades that develop during the propagation of very high energy cosmic rays through the atmosphere are investigated using a stochastic branching model of a pure birth process with immigration. In particular, we show that the multiplicity distributions of secondaries emerging from gamma families are much narrower than those resulting from hadronic families. We argue that the strong intermittent-like behaviour found recently in atmospheric families results from fluctuations in the cascades themselves and is insensitive to the details of elementary interactions.

  14. Monte Carlo Experiments: Design and Implementation.

    ERIC Educational Resources Information Center

    Paxton, Pamela; Curran, Patrick J.; Bollen, Kenneth A.; Kirby, Jim; Chen, Feinian

    2001-01-01

    Illustrates the design and planning of Monte Carlo simulations, presenting nine steps in planning and performing a Monte Carlo analysis from developing a theoretically derived question of interest through summarizing the results. Uses a Monte Carlo simulation to illustrate many of the relevant points. (SLD)

  15. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  16. Stochastic response surface methods (SRSMs) for uncertainty propagation: Application to environmental and biological systems

    SciTech Connect

    Isukapalli, S.S.; Roy, A.; Georgopoulos, P.G. |

    1998-06-01

    Comprehensive uncertainty analyses of complex models of environmental and biological systems are essential but often not feasible due to the computational resources they require. Traditional methods, such as standard Monte Carlo and Latin Hypercube Sampling, for propagating uncertainty and developing probability densities of model outputs, may in fact require performing a prohibitive number of model simulations. An alternative is offered, for a wide range of problems, by the computationally efficient Stochastic Response Surface Methods (SRSMs) for uncertainty propagation. These methods extend the classical response surface methodology to systems with stochastic inputs and outputs. This is accomplished by approximating both inputs and outputs of the uncertain system through stochastic series of well behaved standard random variables; the series expansions of the outputs contain unknown coefficients which are calculated by a method that uses the results of a limited number of model simulations. Two case studies are presented here involving (a) a physiologically-based pharmacokinetic (PBPK) model for perchloroethylene (PERC) for humans, and (b) an atmospheric photochemical model, the Reactive Plume Model (RPM-IV). The results obtained agree closely with those of traditional Monte Carlo and Latin Hypercube Sampling methods, while significantly reducing the required number of model simulations.

  17. A stochastic analysis of steady and transient heat conduction in random media using a homogenization approach

    SciTech Connect

    Zhijie Xu

    2014-07-01

    We present a new stochastic analysis for steady and transient one-dimensional heat conduction problems based on a homogenization approach. The thermal conductivity is assumed to be a random field K consisting of a total number N of random variables. Both steady and transient solutions T are expressed in terms of the homogenized solution and its spatial derivatives, where the homogenized solution is obtained by solving the homogenized equation with an effective thermal conductivity. Both the mean and variance of the stochastic solutions can be obtained analytically for a K field consisting of independent identically distributed (i.i.d.) random variables. The mean and variance of T are shown to depend only on the mean and variance of these i.i.d. variables, not on the particular form of their probability distribution function. The variance of the temperature field T can be separated into two contributions: the ensemble contribution, through the homogenized temperature, and the configurational contribution, through a configuration-dependent random variable. The configurational contribution is shown to be proportional to the local gradient of the homogenized solution; correspondingly, large uncertainty of the T field was found at locations with a large gradient of the homogenized temperature, due to the significant configurational contributions at these locations. Numerical simulations were implemented based on a direct Monte Carlo method, and good agreement is obtained between the numerical Monte Carlo results and the proposed stochastic analysis.
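
    A direct Monte Carlo sketch of the underlying problem: for steady 1-D conduction with piecewise-constant random conductivity and no source, the flux is constant, so nodal temperatures follow exactly from the cells' thermal resistances. The lognormal i.i.d. field and cell count below are illustrative.

        import numpy as np

        # Steady conduction -d/dx( K(x) dT/dx ) = 0 on (0,1), T(0)=0, T(1)=1,
        # with piecewise-constant random K; each realization is solved exactly
        # via the cell resistances h/K_i, and moments are taken over samples.

        rng = np.random.default_rng(6)
        N, n_real = 50, 5000                             # cells, realizations
        T_nodes = np.zeros((n_real, N + 1))
        for r in range(n_real):
            K = rng.lognormal(mean=0.0, sigma=0.5, size=N)
            R = (1.0 / N) / K                            # resistance of each cell
            cumR = np.concatenate(([0.0], np.cumsum(R)))
            T_nodes[r] = cumR / cumR[-1]                 # exact solve per sample

        mean_T = T_nodes.mean(axis=0)
        var_T = T_nodes.var(axis=0)
        print(mean_T[N // 2], var_T.max())               # midpoint mean, peak var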

  18. The analysis of a sparse grid stochastic collocation method for partial differential equations with high-dimensional random input data.

    SciTech Connect

    Webster, Clayton; Tempone, Raul; Nobile, Fabio

    2007-12-01

    This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.

  19. Monte Carlo source convergence and the Whitesides problem

    SciTech Connect

    Blomquist, R. N.

    2000-02-25

    The issue of fission source convergence in Monte Carlo eigenvalue calculations is of interest because of the potential consequences of erroneous criticality safety calculations. In this work, the authors compare two different techniques to improve the source convergence behavior of standard Monte Carlo calculations applied to challenging source convergence problems. The first method, super-history powering, attempts to avoid discarding important fission sites between generations by delaying stochastic sampling of the fission site bank until after several generations of multiplication. The second method, stratified sampling of the fission site bank, explicitly keeps the important sites even if conventional sampling would have eliminated them. The test problems are variants of Whitesides' Criticality of the World problem in which the fission site phase space was intentionally undersampled in order to induce marginally intolerable variability in local fission site populations. Three variants of the problem were studied, each with a different degree of coupling between fissionable pieces. Both the super-history powering method and the stratified sampling method were shown to improve convergence behavior, although stratified sampling is more robust for the extreme case of no coupling. Neither algorithm completely eliminates the loss of the most important fissionable piece, and if coupling is absent, the lost piece cannot be recovered unless its sites from earlier generations have been retained. Finally, criteria for measuring source convergence reliability are proposed and applied to the test problems.

  20. Monte Carlo parameter studies and uncertainty analyses with MCNP5

    SciTech Connect

    Brown, F. B.; Sweezy, J. E.; Hayes, R. B.

    2004-01-01

    A software tool called mcnp-pstudy has been developed to automate the setup, execution, and collection of results from a series of MCNP5 Monte Carlo calculations. This tool provides a convenient means of performing parameter studies, total uncertainty analyses, parallel job execution on clusters, stochastic geometry modeling, and other types of calculations where a series of MCNP5 jobs must be performed with varying problem input specifications. Monte Carlo codes are being used for a wide variety of applications today due to their accurate physical modeling and the speed of today's computers. In most applications for design work, experiment analysis, and benchmark calculations, it is common to run many calculations, not just one, to examine the effects of design tolerances, experimental uncertainties, or variations in modeling features. We have developed a software tool for use with MCNP5 to automate this process. The tool, mcnp-pstudy, is used to automate the operations of preparing a series of MCNP5 input files, running the calculations, and collecting the results. Using this tool, parameter studies, total uncertainty analyses, or repeated (possibly parallel) calculations with MCNP5 can be performed easily. Essentially no extra user setup time is required beyond that of preparing a single MCNP5 input file.
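
    The generic workflow such a tool automates can be sketched in a few lines: expand a template over a parameter grid, write one case per combination, run each case, and collect results. The template fields, file names, and directory layout below are hypothetical placeholders, not the MCNP5 or mcnp-pstudy interface.

        import itertools
        import pathlib

        # Expand a template input over a parameter grid, one directory per case.
        # In a real study, each case would then be launched (e.g., with
        # subprocess.run) and its output files parsed afterwards.

        template = "density {rho}\nenrichment {enr}\n"   # hypothetical fields
        grid = itertools.product([9.6, 10.0, 10.4], [3.0, 4.0, 5.0])

        for i, (rho, enr) in enumerate(grid):
            case = pathlib.Path(f"case_{i:03d}")
            case.mkdir(exist_ok=True)
            (case / "input.txt").write_text(template.format(rho=rho, enr=enr))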

  1. Mechanical Autonomous Stochastic Heat Engine.

    PubMed

    Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara

    2016-07-01

    Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir.

  2. Mechanical Autonomous Stochastic Heat Engine

    NASA Astrophysics Data System (ADS)

    Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara

    2016-07-01

    Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir.

  3. Stochastic resonance in binocular rivalry.

    PubMed

    Kim, Yee-Joon; Grabowecky, Marcia; Suzuki, Satoru

    2006-02-01

    When a different image is presented to each eye, visual awareness spontaneously alternates between the two images--a phenomenon called binocular rivalry. Because binocular rivalry is characterized by two marginally stable perceptual states and spontaneous, apparently stochastic, switching between them, it has been speculated that switches in perceptual awareness reflect a double-well-potential type computational architecture coupled with noise. To characterize this noise-mediated mechanism, we investigated whether stimulus input, neural adaptation, and inhibitory modulations (thought to underlie perceptual switches) interacted with noise in such a way that the system produced stochastic resonance. By subjecting binocular rivalry to weak periodic contrast modulations spanning a range of frequencies, we demonstrated quantitative evidence of stochastic resonance in binocular rivalry. Our behavioral results combined with computational simulations provided insights into the nature of the internal noise (its magnitude, locus, and calibration) that is relevant to perceptual switching, as well as provided novel dynamic constraints on computational models designed to capture the neural mechanisms underlying perceptual switching.
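
    The double-well-plus-noise picture invoked above can be sketched directly: an overdamped particle in a quartic double well with a weak periodic drive shows a response at the drive frequency that peaks at intermediate noise, the signature of stochastic resonance. Parameters below are illustrative, not fitted to rivalry data.

        import numpy as np

        # Overdamped double-well dynamics with weak periodic forcing:
        # dx = (x - x^3 + A cos(w t)) dt + sigma dW. The Fourier amplitude
        # of the trajectory at the drive frequency serves as the response.

        def response(sigma, A=0.1, w=2 * np.pi * 0.01, dt=0.05, n=200000, seed=0):
            rng = np.random.default_rng(seed)
            t = np.arange(n) * dt
            x, xs = -1.0, np.empty(n)
            for k in range(n):
                x += ((x - x**3 + A * np.cos(w * t[k])) * dt
                      + sigma * np.sqrt(dt) * rng.normal())
                xs[k] = x
            return abs(np.sum(xs * np.exp(-1j * w * t))) * 2 / n

        for sigma in (0.1, 0.3, 0.5, 0.8):               # sweep the noise level
            print(sigma, response(sigma))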

  4. Mechanical Autonomous Stochastic Heat Engine.

    PubMed

    Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara

    2016-07-01

    Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir. PMID:27419553

  5. Fossil fuels -- future fuels

    SciTech Connect

    1998-03-01

    Fossil fuels -- coal, oil, and natural gas -- built America's historic economic strength. Today, coal supplies more than 55% of the electricity, oil more than 97% of the transportation needs, and natural gas 24% of the primary energy used in the US. Even taking into account increased use of renewable fuels and vastly improved powerplant efficiencies, 90% of national energy needs will still be met by fossil fuels in 2020. If advanced technologies that boost efficiency and environmental performance can be successfully developed and deployed, the US can continue to depend upon its rich resources of fossil fuels.

  6. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    SciTech Connect

    Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith

    2011-07-01

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
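
    The functional expansion tally idea, scoring orthogonal polynomials at event locations to build a continuous shape that can later be evaluated on any finite element mesh, can be sketched with Legendre polynomials; the "events" below are samples from a hypothetical power shape, not an actual OpenMC tally.

        import numpy as np
        from numpy.polynomial.legendre import legval

        # Estimate Legendre coefficients of a density on [-1, 1] from samples:
        # c_n = E[P_n(X)], and with the orthogonality weight the reconstruction
        # is fhat(x) = sum_n (2n+1)/2 * c_n * P_n(x).

        rng = np.random.default_rng(8)

        def sample_events(n):                    # density f(x) ∝ 1 + 0.5 x^2
            out = []
            while len(out) < n:                  # simple rejection sampling
                x = rng.uniform(-1, 1)
                if rng.uniform(0, 1.5) < 1 + 0.5 * x**2:
                    out.append(x)
            return np.array(out)

        x = sample_events(100000)
        N = 6
        coeffs = np.zeros(N + 1)                 # store (2n+1)/2 * E[P_n(X)]
        for n in range(N + 1):
            basis = np.zeros(n + 1)
            basis[n] = 1.0
            coeffs[n] = (2 * n + 1) / 2 * legval(x, basis).mean()

        grid = np.linspace(-1, 1, 5)
        exact = (1 + 0.5 * grid**2) / (7.0 / 3.0)    # normalized density
        print(legval(grid, coeffs))                  # FET reconstruction
        print(exact)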

  7. Calibration-constrained Monte Carlo analysis of highly parameterized models using subspace techniques

    NASA Astrophysics Data System (ADS)

    Tonkin, Matthew; Doherty, John

    2009-12-01

    We describe a subspace Monte Carlo (SSMC) technique that reduces the burden of calibration-constrained Monte Carlo when undertaken with highly parameterized models. When Monte Carlo methods are used to evaluate the uncertainty in model outputs, ensuring that parameter realizations reproduce the calibration data requires many model runs to condition each realization. In the new SSMC approach, the model is first calibrated using a subspace regularization method, ideally the hybrid Tikhonov-TSVD "superparameter" approach described by Tonkin and Doherty (2005). Sensitivities calculated with the calibrated model are used to define the calibration null-space, which is spanned by parameter combinations that have no effect on simulated equivalents to available observations. Next, a stochastic parameter generator is used to produce parameter realizations, and for each a difference is formed between the stochastic parameters and the calibrated parameters. This difference is projected onto the calibration null-space and added to the calibrated parameters. If the model is no longer calibrated, parameter combinations that span the calibration solution space are reestimated while retaining the null-space projected parameter differences as additive values. The recalibration can often be undertaken using existing sensitivities, so that conditioning requires only a small number of model runs. Using synthetic and real-world model applications we demonstrate that the SSMC approach is general (it is not limited to any particular model or any particular parameterization scheme) and that it can rapidly produce a large number of conditioned parameter sets.
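
    The null-space projection step can be sketched with linear algebra alone: perturbations lying in the null space of the sensitivity (Jacobian) matrix leave simulated observations unchanged to first order. The random Jacobian below stands in for real model sensitivities.

        import numpy as np

        # Project a stochastic parameter difference onto the calibration
        # null space (spanned by right singular vectors with negligible
        # singular values) so the conditioned realization stays calibrated
        # to first order.

        rng = np.random.default_rng(12)
        n_obs, n_par = 20, 100                           # heavily underdetermined
        J = rng.normal(size=(n_obs, n_par))              # stand-in sensitivities
        p_cal = rng.normal(size=n_par)                   # "calibrated" parameters

        U, s, Vt = np.linalg.svd(J, full_matrices=True)
        rank = np.sum(s > 1e-8 * s[0])
        V_null = Vt[rank:].T                             # null-space basis

        p_stoch = p_cal + rng.normal(size=n_par)         # stochastic realization
        delta = p_stoch - p_cal
        p_cond = p_cal + V_null @ (V_null.T @ delta)     # projected difference

        # first-order change in simulated observations is (numerically) zero
        print(np.linalg.norm(J @ (p_cond - p_cal)))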

  8. Sequence specific resonance assignment via Multicanonical Monte Carlo search using an ABACUS approach.

    PubMed

    Lemak, Alexander; Steren, Carlos A; Arrowsmith, Cheryl H; Llinás, Miguel

    2008-05-01

    ABACUS [Grishaev et al. (2005) Proteins 61:36-43] is a novel protocol for automated protein structure determination via NMR. ABACUS starts from molecular fragments defined by unassigned J-coupled spin-systems and involves a Monte Carlo stochastic search in assignment space, probabilistic sequence selection, and assembly of fragments into structures that are used to guide the stochastic search. Here, we report further development of the two main algorithms, increasing the flexibility and robustness of the method. Performance of the BACUS [Grishaev and Llinás (2004) J Biomol NMR 28:1-101] algorithm was significantly improved through use of sequential connectivities available from through-bond correlated 3D-NMR experiments, and a new set of likelihood probabilities derived from a database of 56 ultra high resolution X-ray structures. A Multicanonical Monte Carlo procedure, Fragment Monte Carlo (FMC), was developed for sequence-specific assignment of spin-systems. It relies on enhanced assignment sampling and provides the uncertainty of assignments in a quantitative manner. The efficiency of the protocol was validated on data from four proteins of between 68-116 residues, yielding 100% accuracy in the sequence-specific assignment of backbone and side chain resonances.

  9. Stochastic Approximation of Dynamical Exponent at Quantum Critical Point

    NASA Astrophysics Data System (ADS)

    Suwa, Hidemaro; Yasuda, Shinya; Todo, Synge

    We have developed a unified finite-size scaling method for quantum phase transitions that requires no prior knowledge of the dynamical exponent z. During a quantum Monte Carlo simulation, the temperature is automatically tuned by the Robbins-Monro stochastic approximation method, being proportional to the lowest gap of the finite-size system. The dynamical exponent is estimated in a straightforward way from the system-size dependence of the temperature. As a demonstration of our novel method, the two-dimensional S = 1/2 quantum XY model, or equivalently the hard-core boson system, in uniform and staggered magnetic fields is investigated in combination with the world-line quantum Monte Carlo worm algorithm. In the absence of a uniform magnetic field, we obtain a result fully consistent with Lorentz invariance at the quantum critical point, z = 1. Under a finite uniform magnetic field, on the other hand, the dynamical exponent becomes two, and the mean-field universality with effective dimension (2+2) governs the quantum phase transition. We will also discuss the system with random magnetic fields, or the dirty boson system, bearing a non-trivial dynamical exponent. Reference: S. Yasuda, H. Suwa, and S. Todo, Phys. Rev. B 92, 104411 (2015); arXiv:1506.04837
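
    A generic Robbins-Monro sketch conveys the tuning mechanism: a control parameter is nudged so that a noisy observable matches a target, with gains a_n = c/n. The toy observable below, Y = tanh(theta) + noise with target 0.5 (root arctanh(0.5) ≈ 0.549), stands in for the gap-related quantity that the temperature controls in the method above.

        import numpy as np

        # Robbins-Monro stochastic approximation: theta_{n+1} = theta_n
        # - a_n * (Y_n - target), with a_n = c / n, converges to the root of
        # E[Y | theta] = target under standard conditions.

        rng = np.random.default_rng(13)
        theta, target, c = 0.0, 0.5, 1.0
        for n in range(1, 200001):
            y = np.tanh(theta) + rng.normal(0, 0.3)      # one noisy measurement
            theta -= (c / n) * (y - target)              # stochastic update
        print(theta, np.arctanh(0.5))                    # estimate vs exact root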

  10. Stochastic approximation of dynamical exponent at quantum critical point

    NASA Astrophysics Data System (ADS)

    Yasuda, Shinya; Suwa, Hidemaro; Todo, Synge

    2015-09-01

    We have developed a unified finite-size scaling method for quantum phase transitions that requires no prior knowledge of the dynamical exponent z. During a quantum Monte Carlo simulation, the temperature is automatically tuned by the Robbins-Monro stochastic approximation method, being proportional to the lowest gap of the finite-size system. The dynamical exponent is estimated in a straightforward way from the system-size dependence of the temperature. As a demonstration of our novel method, the two-dimensional S = 1/2 quantum XY model in uniform and staggered magnetic fields is investigated in combination with the world-line quantum Monte Carlo worm algorithm. In the absence of a uniform magnetic field, we obtain a result fully consistent with Lorentz invariance at the quantum critical point, z = 1, i.e., the three-dimensional classical XY universality class. Under a finite uniform magnetic field, on the other hand, the dynamical exponent becomes two, and the mean-field universality with effective dimension (2+2) governs the quantum phase transition.

  11. Performance of higher order Monte Carlo wave packet methods for surface science problems: A test for photoinduced desorption

    NASA Astrophysics Data System (ADS)

    Andrianov, I.; Saalfrank, P.

    2003-01-01

    Aiming to treat the multidimensional quantum dissipative dynamics of adsorbates at surfaces, we consider the application of several variants of the Monte Carlo wave packet method to an exemplary problem: the desorption induced by electronic transitions (DIET) of NO from a Pt(1 1 1) surface, using a two-state, two-dimensional model. We investigate the convergence of stochastic unravelling schemes of different order for 'rare' observables characteristic of this test system.

  12. Monte Carlo code criticality benchmark comparisons for waste packaging

    SciTech Connect

    Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.

    1992-07-01

    COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments both on a Cray mainframe and on a HP 9000 workstation. COG has been recently ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high performance reduced instruction set (RISC) UNIX workstations provide computational power that approach mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting K-effective of nuclear fuel storage in close-packed, neutron poisoned arrays. Low enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes particularly when the fuel pins extend out of the water. COG and KENO calculational results of these criticality benchmark experiments are presented.

  14. AESS: Accelerated Exact Stochastic Simulation

    NASA Astrophysics Data System (ADS)

    Jenkins, David D.; Peterson, Gregory D.

    2011-12-01

    The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results. Program summary: Program title: AESS. Catalogue identifier: AEJW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: University of Tennessee copyright agreement. No. of lines in distributed program, including test data, etc.: 10 861. No. of bytes in distributed program, including test data, etc.: 394 631. Distribution format: tar.gz. Programming language: C for processors, CUDA for NVIDIA GPUs. Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs; the system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators. Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS. Classification: 3, 16.12. Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME. Solution …
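
    For readers unfamiliar with the algorithm AESS accelerates, here is a minimal sketch of Gillespie's direct-method SSA for a toy birth-death process; the rate constants are invented for illustration and nothing below is taken from the AESS package itself.

    ```python
    import numpy as np

    # Sketch only: Gillespie's direct SSA for 0 -> X (rate k1) and X -> 0 (rate k2*X).
    rng = np.random.default_rng(1)
    k1, k2 = 10.0, 0.1            # hypothetical rate constants
    x, t, t_end = 0, 0.0, 100.0

    while t < t_end:
        a = np.array([k1, k2 * x])        # reaction propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
        if rng.random() * a0 < a[0]:      # choose which reaction fires
            x += 1
        else:
            x -= 1

    print(f"population near t = {t_end}: {x} (steady-state mean k1/k2 = {k1 / k2:.0f})")
    ```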

  15. Opportunity fuels

    SciTech Connect

    Lutwen, R.C.

    1994-12-31

    Opportunity fuels - fuels that can be converted to other forms of energy at lower cost than standard fossil fuels - are discussed in outline form. The type and source of fuels, types of fuels, combustibility, methods of combustion, refinery wastes, petroleum coke, garbage fuels, wood wastes, tires, and economics are discussed.

  16. Neutronic calculations for CANDU thorium systems using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Saldideh, M.; Shayesteh, M.; Eshghi, M.

    2014-08-01

    In this paper, we investigate the prospects of exploiting the world's rich thorium reserves using Canada Deuterium Uranium (CANDU) reactors. The analysis is performed using the Monte Carlo MCNP code in order to determine how long the reactor remains in a critical condition. Four different fuel compositions were selected for analysis. We obtained the infinite multiplication factor, k∞, under full-power operation of the reactor over 8 years. The neutron flux distribution in the full reactor core was also investigated.

  17. Current status of the PSG Monte Carlo neutron transport code

    SciTech Connect

    Leppaenen, J.

    2006-07-01

    PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)

  18. Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates

    SciTech Connect

    Perfetti, Christopher M.; Rearden, Bradley T.

    2015-01-01

    This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.

  19. The D0 Monte Carlo

    SciTech Connect

    Womersley, J. (Dept. of Physics)

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data-taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  20. Stochastic Turing patterns on a network.

    PubMed

    Asslani, Malbor; Di Patti, Francesca; Fanelli, Duccio

    2012-10-01

    The process of stochastic Turing instability on a scale-free network is discussed for a specific case study: the stochastic Brusselator model. The system is shown to spontaneously differentiate into activator-rich and activator-poor nodes outside the region of parameters classically assigned to the deterministic Turing instability. This phenomenon, as revealed by direct stochastic simulations, is explained analytically and eventually traced back to the finite-size corrections stemming from the inherent graininess of the scrutinized medium. PMID:23214650

  1. Stochastic Turing patterns on a network

    NASA Astrophysics Data System (ADS)

    Asslani, Malbor; Di Patti, Francesca; Fanelli, Duccio

    2012-10-01

    The process of stochastic Turing instability on a scale-free network is discussed for a specific case study: the stochastic Brusselator model. The system is shown to spontaneously differentiate into activator-rich and activator-poor nodes outside the region of parameters classically assigned to the deterministic Turing instability. This phenomenon, as revealed by direct stochastic simulations, is explained analytically and eventually traced back to the finite-size corrections stemming from the inherent graininess of the scrutinized medium.

  2. Ant colony optimization and stochastic gradient descent.

    PubMed

    Meuleau, Nicolas; Dorigo, Marco

    2002-01-01

    In this article, we study the relationship between the two techniques known as ant colony optimization (ACO) and stochastic gradient descent. More precisely, we show that some empirical ACO algorithms approximate stochastic gradient descent in the space of pheromones, and we propose an implementation of stochastic gradient descent that belongs to the family of ACO algorithms. We then use this insight to explore the mutual contributions of the two techniques. PMID:12171633
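
    A minimal sketch of the connection drawn above, under the simplifying assumption that one "ant" samples one option per iteration: pheromone-like softmax parameters are updated by a score-function stochastic gradient step on the expected cost. The three option costs are invented.

    ```python
    import numpy as np

    # Sketch only: stochastic gradient descent on E[C] using
    # grad E[C] = E[C * grad log p_theta] (the ACO/SGD correspondence in spirit).
    rng = np.random.default_rng(2)
    costs = np.array([3.0, 1.0, 2.0])   # hypothetical per-option costs
    theta = np.zeros(3)                 # pheromone-like parameters
    lr = 0.05

    for _ in range(2000):
        p = np.exp(theta) / np.exp(theta).sum()  # softmax choice distribution
        j = rng.choice(3, p=p)                   # the "ant" samples an option
        grad_logp = -p
        grad_logp[j] += 1.0                      # gradient of log p_j w.r.t. theta
        theta -= lr * costs[j] * grad_logp       # descend the expected cost

    print("final choice probabilities:", np.round(np.exp(theta) / np.exp(theta).sum(), 3))
    ```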

  3. Stochastic Vorticity and Associated Filtering Theory

    SciTech Connect

    Amirdjanova, A.; Kallianpur, G.

    2002-12-19

    The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.

  4. Using the Stochastic Collocation Method for the Uncertainty Quantification of Drug Concentration Due to Depot Shape Variability

    PubMed Central

    Preston, J. Samuel; Tasdizen, Tolga; Terry, Christi M.; Cheung, Alfred K.

    2010-01-01

    Numerical simulations entail modeling assumptions that impact outcomes. Therefore, characterizing, in a probabilistic sense, the relationship between the variability of model selection and the variability of outcomes is important. Under certain assumptions, the stochastic collocation method offers a computationally feasible alternative to traditional Monte Carlo approaches for assessing the impact of model and parameter variability. We propose a framework that combines component shape parameterization with the stochastic collocation method to study the effect of drug depot shape variability on the outcome of drug diffusion simulations in a porcine model. We use realistic geometries segmented from MR images and employ level-set techniques to create two alternative univariate shape parameterizations. We demonstrate that once the underlying stochastic process is characterized, quantification of the introduced variability is quite straightforward and provides an important step in the validation and verification process. PMID:19272865
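
    The following sketch illustrates the collocation idea in a single stochastic dimension, with a toy output function and a Gaussian "shape" parameter standing in for the depot-shape model: a five-node Gauss-Hermite rule recovers the output mean that brute-force Monte Carlo needs many samples to estimate.

    ```python
    import numpy as np

    # Sketch only: collocation with 5 quadrature nodes vs. plain Monte Carlo.
    model = lambda s: np.exp(-s) * s**2   # toy stand-in for the diffusion simulation
    mu, sigma = 1.0, 0.1                  # assumed Gaussian input, s ~ N(mu, sigma^2)

    # probabilists' Gauss-Hermite rule: weight exp(-x^2/2), weights sum to sqrt(2*pi)
    nodes, weights = np.polynomial.hermite_e.hermegauss(5)
    sc_mean = np.sum(weights * model(mu + sigma * nodes)) / np.sqrt(2 * np.pi)

    rng = np.random.default_rng(3)
    mc_mean = model(rng.normal(mu, sigma, 100_000)).mean()

    print(f"collocation mean (5 model runs): {sc_mean:.6f}")
    print(f"Monte Carlo mean (1e5 runs):     {mc_mean:.6f}")
    ```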

  5. Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Lermusiaux, Pierre F. J.

    2016-04-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle-speed and headings are considered to be stochastic and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.

  6. Attainability analysis in the stochastic sensitivity control

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina

    2015-02-01

    For a nonlinear stochastic dynamic control system, we construct a feedback regulator that stabilises an equilibrium and synthesises a required dispersion of random states around this equilibrium. Our approach is based on the stochastic sensitivity functions technique. We focus on the investigation of attainability sets for 2-D systems. A detailed parametric description of the attainability domains for various types of control inputs for the stochastic Brusselator is presented. It is shown that the new regulator provides a low level of stochastic sensitivity and can suppress oscillations of large amplitude.

  7. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    NASA Astrophysics Data System (ADS)

    Gelß, Patrick; Matera, Sebastian; Schütte, Christof

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first principles based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.

  8. GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA

    NASA Astrophysics Data System (ADS)

    Spiechowicz, J.; Kostur, M.; Machura, L.

    2015-06-01

    This work presents an updated and extended guide to properly accelerating the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well-known phenomenon of noise-induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: Gaussian white noise, white Poissonian noise, and the dichotomous process also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can reach an astonishing factor of about 3000 when compared to a typical CPU. This number significantly expands the range of problems solvable by use of stochastic simulations, allowing even interactive research in some cases.
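
    As a hedged CPU sketch of the kind of computation the paper offloads to GPUs, the code below integrates an ensemble of overdamped Brownian particles in a tilted ratchet potential by Euler-Maruyama under Gaussian white noise; all parameter values are illustrative. The single vectorized update of the whole ensemble per time step is the pattern that maps onto CUDA threads.

    ```python
    import numpy as np

    # Sketch only: dx = (-V'(x) + F) dt + sqrt(2 D) dW for an ensemble of N particles,
    # with ratchet potential V(x) = -cos(x) - 0.25*cos(2x).
    rng = np.random.default_rng(4)
    N, dt, steps = 4096, 5e-3, 20_000
    F, D = 0.2, 0.5                     # constant bias and noise intensity

    def force(x):                       # -V'(x)
        return -np.sin(x) - 0.5 * np.sin(2.0 * x)

    x = np.zeros(N)
    for _ in range(steps):              # one array operation advances every particle
        x += (force(x) + F) * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=N)

    print(f"mean drift velocity = {x.mean() / (steps * dt):.4f}")
    ```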

  9. Extension of the fully coupled Monte Carlo/S sub N response matrix method to problems including upscatter and fission

    SciTech Connect

    Baker, R.S.; Filippone, W.F. (Dept. of Nuclear and Energy Engineering); Alcouffe, R.E.

    1991-01-01

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new method of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S{sub N} calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.

  10. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452

  11. Incorporating Wind Power Forecast Uncertainties Into Stochastic Unit Commitment Using Neural Network-Based Prediction Intervals.

    PubMed

    Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas

    2015-09-01

    Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties in system operation, stability, and reliability in smart grids. In this paper, nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single-level PI, wind power forecast uncertainties are represented in a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. Then the wind power scenarios are incorporated into a stochastic security-constrained unit commitment (SCUC) model. The heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporating interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs, and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids. PMID:25532191
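
    A minimal sketch of the scenario-generation step described above, with invented quantile values: a piecewise-linear empirical CDF is interpolated through the PI-derived wind-power quantiles for one hour, and Monte Carlo scenarios are drawn from it by inverse-transform sampling.

    ```python
    import numpy as np

    # Sketch only: quantiles decomposed from nested prediction intervals (made up).
    probs = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
    quants = np.array([10.0, 35.0, 50.0, 62.0, 88.0])   # MW, hypothetical

    rng = np.random.default_rng(5)
    u = rng.random(1000)                       # uniform draws
    scenarios = np.interp(u, probs, quants)    # inverse of the piecewise-linear ECDF;
                                               # draws beyond the outer quantiles clamp

    print(f"scenario mean {scenarios.mean():.1f} MW, "
          f"5-95% range [{np.percentile(scenarios, 5):.1f}, "
          f"{np.percentile(scenarios, 95):.1f}] MW")
    ```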

  12. Analysis of stochastic effects in chemically amplified poly(4-hydroxystyrene-co-t-butyl methacrylate) resist

    NASA Astrophysics Data System (ADS)

    Kozawa, Takahiro; Santillan, Julius Joseph; Itani, Toshiro

    2016-07-01

    Understanding of stochastic phenomena is essential to the development of a highly sensitive resist for nanofabrication. In this study, we investigated the stochastic effects in a chemically amplified resist consisting of poly(4-hydroxystyrene-co-t-butyl methacrylate), triphenylsulfonium nonafluorobutanesulfonate (acid generator), and tri-n-octylamine (quencher). Scanning electron microscopy (SEM) images of resist patterns were analyzed by Monte Carlo simulation on the basis of the sensitization and reaction mechanisms of chemically amplified extreme ultraviolet resists. It was estimated that a ±0.82σ fluctuation of the number of protected units per polymer molecule led to line edge roughness formation. Here, σ is the standard deviation of the number of protected units per polymer molecule after postexposure baking (PEB). The threshold for the elimination of stochastic bridge generation was 4.38σ (the difference between the average number of protected units after PEB and the dissolution point). The threshold for the elimination of stochastic pinching was 2.16σ.

  13. SLUG-STOCHASTICALLY LIGHTING UP GALAXIES. I. METHODS AND VALIDATING TESTS

    SciTech Connect

    Da Silva, Robert L.; Fumagalli, Michele; Krumholz, Mark

    2012-02-01

    The effects of stochasticity on the luminosities of stellar populations are an often neglected but crucial element for understanding populations in the low-mass or the low star formation rate regime. To address this issue, we present SLUG, a new code to 'Stochastically Light Up Galaxies'. SLUG synthesizes stellar populations using a Monte Carlo technique that properly treats stochastic sampling including the effects of clustering, the stellar initial mass function, star formation history, stellar evolution, and cluster disruption. This code produces many useful outputs, including (1) catalogs of star clusters and their properties such as their stellar initial mass distributions and their photometric properties in a variety of filters, (2) two dimensional histograms of color-magnitude diagrams of every star in the simulation, and (3) the photometric properties of field stars and the integrated photometry of the entire simulated galaxy. After presenting the SLUG algorithm in detail, we validate the code through comparisons with STARBURST99 in the well-sampled regime, and with observed photometry of Milky Way clusters. Finally, we demonstrate SLUG's capabilities by presenting outputs in the stochastic regime. SLUG is publicly distributed through the Web site http://sites.google.com/site/runslug/.
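
    The stochastic-sampling effect SLUG is built around can be demonstrated in a few lines: stars are drawn from a power-law initial mass function until a target cluster mass is reached, and a crude L ∝ M^3.5 proxy shows how strongly small clusters fluctuate. The IMF slope, mass limits, and luminosity law below are textbook stand-ins, not SLUG's actual ingredients.

    ```python
    import numpy as np

    # Sketch only: inverse-transform sampling of a Salpeter-like IMF, p(m) ~ m^-alpha.
    rng = np.random.default_rng(6)
    alpha, m_lo, m_hi = 2.35, 0.5, 120.0   # slope and mass limits in solar masses

    def sample_star():
        a = 1.0 - alpha
        u = rng.random()
        return (m_lo**a + u * (m_hi**a - m_lo**a))**(1.0 / a)

    def build_cluster(m_target):
        stars, total = [], 0.0
        while total < m_target:            # draw stars until the mass budget is filled
            m = sample_star()
            stars.append(m)
            total += m
        return np.array(stars)

    for m_target in (50, 5000):
        lum = np.sum(build_cluster(m_target)**3.5)   # crude L ~ M^3.5 luminosity proxy
        print(f"cluster of ~{m_target} Msun: total L = {lum:.3g} Lsun")
    ```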

  15. On stochastic FEM based computational homogenization of magneto-active heterogeneous materials with random microstructure

    NASA Astrophysics Data System (ADS)

    Pivovarov, Dmytro; Steinmann, Paul

    2016-09-01

    In the current work we apply the stochastic version of the FEM to the homogenization of magneto-elastic heterogeneous materials with random microstructure. The main aim of this study is to capture accurately the discontinuities appearing at matrix-inclusion interfaces. We demonstrate and compare three different techniques proposed in the literature for the purely mechanical problem, i.e. global, local and enriched stochastic basis functions. Moreover, we demonstrate the implementation of the isoparametric concept in the enlarged physical-stochastic product space. The Gauss integration rule in this multidimensional space is discussed. In order to design a realistic stochastic Representative Volume Element we analyze actual scans obtained by electron microscopy and provide numerical studies of the micro particle distribution. The SFEM framework described in our previous work (Pivovarov and Steinmann in Comput Mech 57(1): 123-147, 2016) is extended to the case of the magneto-elastic materials. To this end, the magneto-elastic energy function is used, and the corresponding hyper-tensors of the magneto-elastic problem are introduced. In order to estimate the methods' accuracy we performed a set of simulations for elastic and magneto-elastic problems using three different SFEM modifications. All results are compared with "brute-force" Monte-Carlo simulations used as reference solution.

  16. Application of stochastic Galerkin FEM to the complete electrode model of electrical impedance tomography

    SciTech Connect

    Leinonen, Matti; Hakula, Harri; Hyvönen, Nuutti

    2014-07-15

    The aim of electrical impedance tomography is to determine the internal conductivity distribution of some physical body from boundary measurements of current and voltage. The most accurate forward model for impedance tomography is the complete electrode model, which consists of the conductivity equation coupled with boundary conditions that take into account the electrode shapes and the contact resistances at the corresponding interfaces. If the reconstruction task of impedance tomography is recast as a Bayesian inference problem, it is essential to be able to solve the complete electrode model forward problem with the conductivity and the contact resistances treated as a random field and random variables, respectively. In this work, we apply a stochastic Galerkin finite element method to the ensuing elliptic stochastic boundary value problem and compare the results with Monte Carlo simulations.

  17. Fuzzy stochastic analysis of serviceability and ultimate limit states of two-span pedestrian steel bridge

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk; Sandovič, Giedrė

    2012-09-01

    The paper deals with non-linear analysis of the ultimate and serviceability limit states of a two-span pedestrian steel bridge. The effects of random material and geometrical characteristics on the limit states are analyzed. The Monte Carlo method was applied for the stochastic analysis. For the serviceability limit state, the influence of fuzzy uncertainty in the limit deflection value on the random characteristics of the load capacity of the variable action was also studied. The results prove that, for the type of structure studied, the serviceability limit state is decisive from the point of view of design. The present paper opens a discussion on the use of stochastic analysis to verify the limit deflections given in the EUROCODE standards.

  18. Semi-analytical expression of stochastic closed curve attractors in nonlinear dynamical systems under weak noise

    NASA Astrophysics Data System (ADS)

    Guo, Kongming; Jiang, Jun; Xu, Yalan

    2016-09-01

    In this paper, a simple but accurate semi-analytical method to approximate the probability density function of stochastic closed-curve attractors is proposed. The expression of the distribution applies to systems with strong nonlinearities, while only a weak-noise condition is needed. Based on the observation that additive noise does not change the longitudinal distribution of the attractors, the high-dimensional probability density distribution is decomposed into two low-dimensional distributions: the longitudinal and the transverse probability density distributions. The longitudinal distribution can be calculated from the deterministic system, while the probability density in the transverse direction of the curve can be approximated by the stochastic sensitivity function method. The effectiveness of this approach is verified by comparing the expression of the distribution with the results of Monte Carlo numerical simulations in several planar systems.

  19. Reduced Complexity HMM Filtering With Stochastic Dominance Bounds: A Convex Optimization Approach

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Vikram; Rojas, Cristian R.

    2014-12-01

    This paper uses stochastic dominance principles to construct upper and lower sample path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low-rank stochastic matrices so that the optimal filters using these matrices provably lower and upper bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices are low rank (say R), the computational cost of evaluating the filtering bounds is O(XR) instead of O(X^2). A Monte Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.
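
    For orientation, the sketch below implements the standard HMM filtering recursion whose O(X^2) per-step cost (from the dense transition matrix) motivates the paper's low-rank bounding construction; the chain, observation model, and data are all invented.

    ```python
    import numpy as np

    # Sketch only: pi_{k+1} proportional to B_y * (P^T pi_k), the predict-correct filter.
    P = np.array([[0.90, 0.10, 0.00, 0.00],    # transition matrix (rows sum to 1)
                  [0.05, 0.85, 0.10, 0.00],
                  [0.00, 0.10, 0.85, 0.05],
                  [0.00, 0.00, 0.10, 0.90]])
    B = np.array([[0.7, 0.1, 0.1, 0.1],        # B[y, state]: observation likelihoods
                  [0.1, 0.7, 0.1, 0.1],
                  [0.1, 0.1, 0.7, 0.1],
                  [0.1, 0.1, 0.1, 0.7]])

    pi = np.full(4, 0.25)                      # initial filtered distribution
    for y in [0, 0, 1, 2, 3, 3]:               # a made-up observation sequence
        pi = B[y] * (P.T @ pi)                 # predict with P, correct with B
        pi /= pi.sum()

    print("filtered state distribution:", np.round(pi, 3))
    ```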

  20. Hamilton's principle in stochastic mechanics

    NASA Astrophysics Data System (ADS)

    Pavon, Michele

    1995-12-01

    In this paper we establish three variational principles that provide new foundations for Nelson's stochastic mechanics in the case of nonrelativistic particles without spin. The resulting variational picture is much richer and of a different nature than the one previously considered in the literature. We first develop two stochastic variational principles whose Hamilton-Jacobi-like equations are precisely the two coupled partial differential equations that are obtained from the Schrödinger equation (Madelung equations). The two problems are zero-sum, noncooperative, stochastic differential games that are familiar in the control theory literature. They are solved here by means of a new, absolutely elementary method based on Lagrange functionals. For both games the saddle-point equilibrium solution is given by Nelson's process, and the optimal controls for the two competing players are precisely Nelson's current velocity v and osmotic velocity u, respectively. The first variational principle includes as special cases both the Guerra-Morato variational principle [Phys. Rev. D 27, 1774 (1983)] and Schrödinger's original variational derivation of the time-independent equation. It also reduces to the classical least action principle when the intensity of the underlying noise tends to zero. It appears as a saddle-point action principle. In the second variational principle the action is simply the difference between the initial and final configurational entropy. It is therefore a saddle-point entropy production principle. From the variational principles it follows, in particular, that both v(x,t) and u(x,t) are gradients of appropriate principal functions. In the variational principles, the role of the background noise has the intuitive meaning of attempting to contrast the more classical mechanical features of the system by trying to maximize the action in the first principle and by trying to increase the entropy in the second. Combining the two variational …

  1. Stochastic thermodynamics of information processing

    NASA Astrophysics Data System (ADS)

    Cardoso Barato, Andre

    2015-03-01

    We consider two recent advancements on theoretical aspects of thermodynamics of information processing. First we show that the theory of stochastic thermodynamics can be generalized to include information reservoirs. These reservoirs can be seen as a sequence of bits which has its Shannon entropy changed due to the interaction with the system. Second we discuss bipartite systems, which provide a convenient description of Maxwell's demon. Analyzing a special class of bipartite systems we show that they can be used to study cellular information processing, allowing for the definition of an entropic rate that quantifies how much a cell learns about a fluctuating external environment and that is bounded by the thermodynamic entropy production.

  2. Constrained Stochastic Extended Redundancy Analysis.

    PubMed

    DeSarbo, Wayne S; Hwang, Heungsun; Stadler Blank, Ashley; Kappe, Eelco

    2015-06-01

    We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA). PMID:24327066

  4. Stochastic dynamics on slow manifolds

    NASA Astrophysics Data System (ADS)

    Constable, George W. A.; McKane, Alan J.; Rogers, Tim

    2013-07-01

    The theory of slow manifolds is an important tool in the study of deterministic dynamical systems, giving a practical method by which to reduce the number of relevant degrees of freedom in a model, thereby often resulting in a considerable simplification. In this paper we demonstrate how the same basic methodology may also be applied to stochastic dynamical systems, by examining the behaviour of trajectories conditioned on the event that they do not depart the slow manifold. We apply the method to two models: one from ecology and one from epidemiology, achieving a reduction in model dimension and illustrating the high quality of the analytical approximations.

  5. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the …

  6. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-01

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-"coupled"- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB

  7. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-"coupled"- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB

  8. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    SciTech Connect

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-“coupled”- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary
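
    The baseline the three records above improve on can be sketched directly: a finite-difference sensitivity estimate with and without the Common Random Numbers coupling, here on a toy Ornstein-Uhlenbeck simulation rather than a lattice KMC model.

    ```python
    import numpy as np

    # Sketch only: estimate d/dtheta E[X_T] for dX = (theta - X) dt + dW by finite
    # differences; reusing the same noise in both runs (CRN) correlates them and
    # shrinks the estimator variance.
    def simulate(theta, noise, dt=0.01):
        x = 0.0
        for z in noise:                        # Euler-Maruyama
            x += (theta - x) * dt + np.sqrt(dt) * z
        return x

    rng = np.random.default_rng(8)
    theta, h, runs, nsteps = 1.0, 0.1, 400, 500
    indep, crn = [], []
    for _ in range(runs):
        z1, z2 = rng.normal(size=nsteps), rng.normal(size=nsteps)
        base = simulate(theta, z1)
        indep.append((simulate(theta + h, z2) - base) / h)   # independent noise
        crn.append((simulate(theta + h, z1) - base) / h)     # common random numbers

    print(f"independent samples: mean {np.mean(indep):.3f}, variance {np.var(indep):.3f}")
    print(f"common random nums:  mean {np.mean(crn):.3f}, variance {np.var(crn):.6f}")
    ```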

  9. Polarity formation in crystals with long range molecular interactions: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Cannavacciuolo, Luigi; Hulliger, Jürg

    2016-09-01

    Stochastic formation of a bi-polar state in three-dimensional arrays of polar molecules with full Coulomb interactions is reproduced by Monte Carlo simulation. The size of the system is comparable to that of a real crystal seed. The spatial decay of the average order parameter is significantly slowed down by the long-range interactions, and an exact representation of correlation effects in terms of a single characteristic length becomes impossible. Finite-size effects and a possible scale-invariance symmetry of the order parameter are extensively discussed.

  10. Analysis and Monte Carlo simulation of near-terminal aircraft flight paths

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1982-01-01

    The flight paths of arriving and departing aircraft at an airport are stochastically represented. Radar data of the aircraft movements are used to decompose the flight paths into linear and curvilinear segments. Variables which describe the segments are derived, and the best fitting probability distributions of the variables, based on a sample of flight paths, are found. Conversely, given information on the probability distribution of the variables, generation of a random sample of flight paths in a Monte Carlo simulation is discussed. Actual flight paths at Dulles International Airport are analyzed and simulated.

  11. Residual Monte Carlo high-order solver for Moment-Based Accelerated Thermal Radiative Transfer equations

    SciTech Connect

    Willert, Jeffrey; Park, H.

    2014-11-01

    In this article we explore the possibility of replacing Standard Monte Carlo (SMC) transport sweeps within a Moment-Based Accelerated Thermal Radiative Transfer (TRT) algorithm with a Residual Monte Carlo (RMC) formulation. Previous Moment-Based Accelerated TRT implementations have encountered trouble when stochastic noise from SMC transport sweeps accumulates over several iterations and pollutes the low-order system. With RMC we hope to significantly lower the build-up of statistical error at a much lower cost. First, we display encouraging results for a zero-dimensional test problem. Then, we demonstrate that we can achieve a lower degree of error in two one-dimensional test problems by employing an RMC transport sweep with multiple orders of magnitude fewer particles per sweep. We find that by reformulating the high-order problem, we can compute more accurate solutions at a fraction of the cost.

  12. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    SciTech Connect

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil; Abhyankar, S.; Ghosh, Donetta L.; Smith, Barry; Huang, Zhenyu; Tartakovsky, Alexandre M.

    2015-09-22

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method for deriving a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically, and good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.

  13. Stochastic bursts in the kinetics of gene expression with regulation by long non-coding RNAs

    NASA Astrophysics Data System (ADS)

    Zhdanov, V. P.

    2010-09-01

    One of the main recent breakthroughs in cellular biology is the discovery of numerous non-coding RNAs (ncRNAs). We outline the capabilities of long ncRNAs and argue that the corresponding kinetics may frequently exhibit stochastic bursts. For example, we scrutinize one of the generic cases, in which gene transcription is regulated by competitive attachment of an ncRNA and a protein to a regulatory site. Our Monte Carlo simulations show that in this case one can observe very long transcriptional bursts consisting of short bursts.

  14. RHIC stochastic cooling motion control

    SciTech Connect

    Gassner, D.; DeSanto, L.; Olsen, R.H.; Fu, W.; Brennan, J.M.; Liaw, CJ; Bellavia, S.; Brodowski, J.

    2011-03-28

    Relativistic Heavy Ion Collider (RHIC) beams are subject to Intra-Beam Scattering (IBS) that causes an emittance growth in all three phase-space planes. The only way to increase integrated luminosity is to counteract IBS with cooling during RHIC stores. A stochastic cooling system for this purpose has been developed; it includes moveable pick-ups and kickers in the collider that require precise motion control mechanics, drives and controllers. Since these moving parts can limit the beam path aperture, accuracy and reliability are important. Servo, stepper, and DC motors are used to provide actuation solutions for position control. The choice of motion stage, drive motor type, and controls are based on needs defined by the variety of mechanical specifications, the unique performance requirements, and the special needs required for remote operations in an accelerator environment. In this report we will describe the remote motion control related beam line hardware, position transducers, rack electronics, and software developed for the RHIC stochastic cooling pick-ups and kickers.

  15. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

    The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed to reach an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.

  16. Numerical tests of stochastic tomography

    NASA Astrophysics Data System (ADS)

    Ru-Shan, Wu; Xiao-Bi, Xie

    1991-05-01

    The method of stochastic tomography proposed by Wu is tested numerically. This method reconstructs the heterospectra (power spectra of heterogeneities) at all depths of a non-uniform random medium using measured joint transverse-angular coherence functions (JTACF) of transmission fluctuations on an array. The inversion method is based on a constrained least-squares inversion implemented via the singular value decomposition. The inversion is also applicable to reconstructions using transverse coherence functions (TCF) or angular coherence functions (ACF); these are merely special cases of JTACF. Through the analysis of sampling functions and singular values, and through numerical examples of reconstruction using theoretically generated coherence functions, we compare the resolution and robustness of reconstructions using TCF, ACF and JTACF. The JTACF can `focus' the coherence analysis at different depths and therefore has a better depth resolution than TCF and ACF. In addition, the JTACF contains much more information than the sum of TCF and ACF, and has much better noise resistance properties than TCF and ACF. Inversion of JTACF can give a reliable reconstruction of heterospectra at different depths even for data with 20% noise contamination. This demonstrates the feasibility of stochastic tomography using JTACF.
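
    A minimal sketch of the inversion backbone described above, least squares regularized by a truncated singular value decomposition, is shown below on an invented ill-conditioned linear forward operator; it is not the JTACF kernel itself.

    ```python
    import numpy as np

    # Sketch only: reconstruct a smooth "heterospectrum" x from noisy data d = G x + n
    # by keeping only the singular components that rise above a relative cutoff.
    rng = np.random.default_rng(9)
    m, n = 40, 25
    G = rng.normal(size=(m, n)) @ np.diag(1.0 / np.arange(1, n + 1))  # ill-conditioned
    x_true = np.exp(-np.linspace(0, 3, n))
    d = G @ x_true + 0.01 * rng.normal(size=m)

    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    k = int(np.sum(s / s[0] > 1e-2))          # truncation level (cutoff arbitrary here)
    x_rec = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

    rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
    print(f"kept {k}/{n} singular values, relative error {rel_err:.3f}")
    ```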

  17. Stochastic Modeling of Laminar-Turbulent Transition

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Choudhari, Meelan

    2002-01-01

    Stochastic versions of stability equations are developed in order to develop integrated models of transition and turbulence and to understand the effects of uncertain initial conditions on disturbance growth. Stochastic forms of the resonant triad equations, a high Reynolds number asymptotic theory, and the parabolized stability equations are developed.

  18. Attainability analysis in stochastic controlled systems

    SciTech Connect

    Ryashko, Lev

    2015-03-10

    A control problem for stochastically forced nonlinear continuous-time systems is considered. We propose a method for constructing a regulator that provides a preassigned probabilistic distribution of random states in the stochastic equilibrium. Geometric criteria of controllability are obtained, and a constructive technique for the specification of attainability sets is suggested.

  19. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  20. Variational principles for stochastic fluid dynamics

    PubMed Central

    Holm, Darryl D.

    2015-01-01

    This paper derives stochastic partial differential equations (SPDEs) for fluid dynamics from a stochastic variational principle (SVP). The paper proceeds by taking variations in the SVP to derive stochastic Stratonovich fluid equations; writing their Itô representation; and then investigating the properties of these stochastic fluid models in comparison with each other, and with the corresponding deterministic fluid models. The circulation properties of the stochastic Stratonovich fluid equations are found to closely mimic those of the deterministic ideal fluid models. As with deterministic ideal flows, motion along the stochastic Stratonovich paths also preserves the helicity of the vortex field lines in incompressible stochastic flows. However, these Stratonovich properties are not apparent in the equivalent Itô representation, because they are disguised by the quadratic covariation drift term arising in the Stratonovich to Itô transformation. This term is a geometric generalization of the quadratic covariation drift term already found for scalar densities in Stratonovich's famous 1966 paper. The paper also derives motion equations for two examples of stochastic geophysical fluid dynamics; namely, the Euler–Boussinesq and quasi-geostrophic approximations. PMID:27547083

  1. Present status of vectorized Monte Carlo

    SciTech Connect

    Brown, F.B.

    1987-01-01

    Monte Carlo applications have traditionally been limited by the large amounts of computer time required to produce acceptably small statistical uncertainties, so the immediate benefit of vectorization is an increase in either the number of jobs completed or the number of particles processed per job, typically by one order of magnitude or more. This results directly in improved engineering design analyses, since Monte Carlo methods are used as standards for correcting more approximate methods. The relatively small number of vectorized programs is a consequence of the newness of vectorized Monte Carlo, the difficulties of nonportability, and the very large development effort required to rewrite or restructure Monte Carlo codes for vectorization. Based on the successful efforts to date, it may be concluded that Monte Carlo vectorization will spread to increasing numbers of codes and applications. The possibility of multitasking provides even further motivation for vectorizing Monte Carlo, since the step from vector to multitasked vector is relatively straightforward.

  2. Stochastic ion acceleration by beating electrostatic waves.

    PubMed

    Jorns, B; Choueiri, E Y

    2013-01-01

    A study is presented of the stochasticity in the orbit of a single, magnetized ion produced by the particle's interaction with two beating electrostatic waves whose frequencies differ by the ion cyclotron frequency. A second-order Lie transform perturbation theory is employed in conjunction with a numerical analysis of the maximum Lyapunov exponent to determine the velocity conditions under which stochasticity occurs in this dynamical system. Upper and lower bounds in ion velocity are found for stochastic orbits with the lower bound approximately equal to the phase velocity of the slower wave. A threshold condition for the onset of stochasticity that is linear with respect to the wave amplitudes is also derived. It is shown that the onset of stochasticity occurs for beating electrostatic waves at lower total wave energy densities than for the case of a single electrostatic wave or two nonbeating electrostatic waves. PMID:23410446

  3. On controllability of nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Kim, J.-H.; Mahmudov, N. I.

    2006-12-01

    In this paper, complete controllability of nonlinear stochastic systems is studied. First, the problem of complete controllability of nonlinear stochastic systems with standard Brownian motion is addressed; this result is then extended to establish a complete controllability criterion for stochastic systems with fractional Brownian motion. A fixed-point approach is employed to achieve the required result. The solutions are given by a variation-of-constants formula, which allows us to study complete controllability for nonlinear stochastic systems. We prove complete controllability of the nonlinear stochastic system under the natural assumption that the associated linear control system is completely controllable. Finally, an illustrative example is provided to show the usefulness of the proposed technique.

  4. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods.

    PubMed

    Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C

    2010-12-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276

  5. Coupling Deterministic and Monte Carlo Transport Methods for the Simulation of Gamma-Ray Spectroscopy Scenarios

    SciTech Connect

    Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

    2008-10-31

    Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

  6. Disentangling the importance of ecological niches from stochastic processes across scales.

    PubMed

    Chase, Jonathan M; Myers, Jonathan A

    2011-08-27

    Deterministic theories in community ecology suggest that local, niche-based processes, such as environmental filtering, biotic interactions and interspecific trade-offs largely determine patterns of species diversity and composition. In contrast, more stochastic theories emphasize the importance of chance colonization, random extinction and ecological drift. The schisms between deterministic and stochastic perspectives, which date back to the earliest days of ecology, continue to fuel contemporary debates (e.g. niches versus neutrality). As illustrated by the pioneering studies of Robert H. MacArthur and co-workers, resolution to these debates requires consideration of how the importance of local processes changes across scales. Here, we develop a framework for disentangling the relative importance of deterministic and stochastic processes in generating site-to-site variation in species composition (β-diversity) along ecological gradients (disturbance, productivity and biotic interactions) and among biogeographic regions that differ in the size of the regional species pool. We illustrate how to discern the importance of deterministic processes using null-model approaches that explicitly account for local and regional factors that inherently create stochastic turnover. By embracing processes across scales, we can build a more synthetic framework for understanding how niches structure patterns of biodiversity in the face of stochastic processes that emerge from local and biogeographic factors.

  7. Stochastic regularization operators on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jordi, Claudio; Doetsch, Joseph; Günther, Thomas; Schmelzbach, Cedric; Robertsson, Johan

    2016-04-01

    Most geophysical inverse problems require the solution of underdetermined systems of equations. In order to solve such inverse problems, appropriate regularization is required. Ideally, this regularization includes information on the expected model variability and spatial correlation. Based on geostatistical covariance functions, which can be adapted to the specific situation, stochastic regularization can be used to add auxiliary constraints to the given inverse problem. Stochastic regularization operators have been successfully applied to geophysical inverse problems formulated on regular grids. Here, we demonstrate the calculation of stochastic regularization operators for unstructured meshes. Unstructured meshes are advantageous for incorporating arbitrary topography, undulating geological interfaces and complex acquisition geometries into the inversion. However, compared to regular grids, unstructured meshes have variable cell sizes, complicating the calculation of stochastic operators. The stochastic operators proposed here are based on a 2D exponential correlation function, allowing spatial correlation lengths to be predefined. The regularization thus acts over an imposed correlation length rather than only taking into account neighbouring cells as in regular smoothing constraints. Correlation over a spatial length partly removes the effects of variable cell sizes of unstructured meshes on the regularization. Synthetic models having large-scale interfaces as well as small-scale stochastic variations are used to analyse the performance and behaviour of the stochastic regularization operators. The resulting inverted models obtained with stochastic regularization are compared against the results of standard regularization approaches (damping and smoothing). Besides using stochastic operators for regularization, we plan to incorporate the footprint of the stochastic operator in further applications such as the calculation of the cross-gradient functions.
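
    A minimal sketch of how such an operator might be assembled from the cell centroids of an unstructured mesh is given below, assuming an isotropic 2D exponential correlation function; the coordinates, correlation length and stabilization term are illustrative, and the paper's actual construction may differ in detail.

      import numpy as np

      def stochastic_regularization(centroids, corr_len):
          """Operator W with W^T W = C^{-1} for an exponential covariance C.

          centroids : (n_cells, 2) coordinates of mesh-cell centres
                      (cell sizes may vary, as on unstructured meshes).
          The penalty ||W m||^2 = m^T C^{-1} m imposes the correlation
          length corr_len on the model vector m.
          """
          diff = centroids[:, None, :] - centroids[None, :, :]
          dist = np.sqrt((diff ** 2).sum(axis=-1))
          C = np.exp(-dist / corr_len)             # exponential covariance
          C += 1e-8 * np.eye(len(centroids))       # numerical stabilization
          L = np.linalg.cholesky(C)                # C = L L^T
          return np.linalg.inv(L)                  # W = L^{-1}

      rng = np.random.default_rng(1)
      W = stochastic_regularization(rng.uniform(0.0, 100.0, size=(50, 2)), corr_len=20.0)
      print(W.shape)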

  8. Kalman filter parameter estimation for a nonlinear diffusion model of epithelial cell migration using stochastic collocation and the Karhunen-Loeve expansion.

    PubMed

    Barber, Jared; Tanase, Roxana; Yotov, Ivan

    2016-06-01

    Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise, parameterized by the Karhunen-Loeve (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data with and without noise, as well as data from a laboratory experiment. While it is observed that all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, it is illustrated that the algorithms that employ SC and KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. In the case of SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling. In the case of KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and KL expansion. PMID:27085426

  10. Uncertainty Propagation with Fast Monte Carlo Techniques

    NASA Astrophysics Data System (ADS)

    Rochman, D.; van der Marck, S. C.; Koning, A. J.; Sjöstrand, H.; Zwermann, W.

    2014-04-01

    Two new and faster Monte Carlo methods for the propagation of nuclear data uncertainties in Monte Carlo nuclear simulations are presented (the "Fast TMC" and "Fast GRS" methods). They address the main drawback of the original Total Monte Carlo (TMC) method, namely the large multiplication of computation time relative to a single calculation. With these new methods, Monte Carlo simulations can now be accompanied by uncertainty propagation (beyond purely statistical uncertainty) at a small additional computational cost. The new methods are presented and compared with the TMC method for criticality benchmarks.

  11. Monte Carlo surface flux tallies

    SciTech Connect

    Favorite, Jeffrey A

    2010-11-19

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
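
    The scoring rule under discussion is easy to state in code. The sketch below is a schematic surface-crossing estimator, not an excerpt from any production Monte Carlo code; the weight, cosine cutoff and substitute fraction are illustrative parameters.

      def surface_flux_score(weight, mu, cutoff=0.1, substitute_fraction=0.5):
          """Score one surface crossing for a surface flux tally.

          The raw estimator weight/|mu| diverges at grazing angles, so
          below the cosine cutoff a substitute denominator is used:
          half the cutoff is the standard practice discussed above,
          while two-thirds of the cutoff is the alternative the paper
          finds more appropriate in some situations.
          """
          abs_mu = abs(mu)
          if abs_mu >= cutoff:
              return weight / abs_mu
          return weight / (substitute_fraction * cutoff)

      # A grazing crossing (mu = 0.02) under the two substitution rules:
      print(surface_flux_score(1.0, 0.02))                           # 1/0.05 = 20
      print(surface_flux_score(1.0, 0.02, substitute_fraction=2/3))  # 1/0.0667 = 15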

  12. Automated-biasing approach to Monte Carlo shipping-cask calculations

    SciTech Connect

    Hoffman, T.J.; Tang, J.S.; Parks, C.V.; Childs, R.L.

    1982-01-01

    Computer Sciences at Oak Ridge National Laboratory, under a contract with the Nuclear Regulatory Commission, has developed the SCALE system for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems. During the early phase of shielding development in SCALE, it was established that Monte Carlo calculations of radiation levels exterior to a spent fuel shipping cask would be extremely expensive. This cost can be substantially reduced by proper biasing of the Monte Carlo histories. The purpose of this study is to develop and test an automated biasing procedure for the MORSE-SGC/S module of the SCALE system.

  13. Quantifying the Effect of Undersampling in Monte Carlo Simulations Using SCALE

    SciTech Connect

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This study explores the effect of undersampling in Monte Carlo calculations on tally estimates and tally variance estimates for burnup credit applications. Steady-state Monte Carlo simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity and the impact of undersampling on eigenvalue and flux estimates was examined. Using an inadequate number of particle histories in each generation was found to produce an approximately 100 pcm bias in the eigenvalue estimates, and biases that exceeded 10% in fuel pin flux estimates.

  14. Fuel pin

    DOEpatents

    Christiansen, D.W.; Karnesky, R.A.; Leggett, R.D.; Baker, R.B.

    1987-11-24

    A fuel pin for a liquid metal nuclear reactor is provided. The fuel pin includes a generally cylindrical cladding member with metallic fuel material disposed therein. At least a portion of the fuel material extends radially outwardly to the inner diameter of the cladding member to promote efficient transfer of heat to the reactor coolant system. The fuel material defines at least one void space therein to facilitate swelling of the fuel material during fission.

  15. Stochastic inflation and nonlinear gravity

    NASA Astrophysics Data System (ADS)

    Salopek, D. S.; Bond, J. R.

    1991-02-01

    We show how nonlinear effects of the metric and scalar fields may be included in stochastic inflation. Our formalism can be applied to non-Gaussian fluctuation models for galaxy formation. Fluctuations with wavelengths larger than the horizon length are governed by a network of Langevin equations for the physical fields. Stochastic noise terms arise from quantum fluctuations that are assumed to become classical at horizon crossing and that then contribute to the background. Using Hamilton-Jacobi methods, we solve the Arnowitt-Deser-Misner constraint equations which allows us to separate the growing modes from the decaying ones in the drift phase following each stochastic impulse. We argue that the most reasonable choice of time hypersurfaces for the Langevin system during inflation is T=ln(Ha), where H and a are the local values of the Hubble parameter and the scale factor, since T is the natural time for evolving the short-wavelength scalar field fluctuations in an inhomogeneous background. We derive a Fokker-Planck equation which describes how the probability distribution of scalar field values at a given spatial point evolves in T. Analytic Green's-function solutions obtained for a single scalar field self-interacting through an exponential potential are used to demonstrate (1) if the initial condition of the Hubble parameter is chosen to be consistent with microwave-background limits, H(φ_0)/m_P ≲ 10^{-4}, then the fluctuations obey Gaussian statistics to a high precision, independent of the time hypersurface choice and operator-ordering ambiguities in the Fokker-Planck equation, and (2) for scales much larger than our present observable patch of the Universe, the distribution is non-Gaussian, with a tail extending to large energy densities; although there are no observable manifestations, it does show eternal inflation. Lattice simulations of our Langevin network for the exponential potential demonstrate how spatial correlations are incorporated. An initially
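
    For orientation, the generic single-field, slow-roll form of such a Langevin equation and its Fokker-Planck counterpart, written in e-fold time T with potential V(φ), is sketched below; this is the standard textbook form from the stochastic-inflation literature, not the paper's exact Hamilton-Jacobi formulation.

      % Langevin equation for the coarse-grained field in e-fold time T
      \frac{\partial \phi}{\partial T} = -\frac{V'(\phi)}{3H^{2}}
          + \frac{H}{2\pi}\,\xi(T),
      \qquad \langle \xi(T)\,\xi(T') \rangle = \delta(T - T')

      % Corresponding Fokker-Planck equation for the distribution P(phi, T)
      \frac{\partial P}{\partial T}
          = \frac{\partial}{\partial \phi}\left[ \frac{V'(\phi)}{3H^{2}}\, P \right]
          + \frac{\partial^{2}}{\partial \phi^{2}}\left[ \frac{H^{2}}{8\pi^{2}}\, P \right]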

  16. A coupled stochastic inverse-management framework for dealing with nonpoint agriculture pollution under groundwater parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.

    2014-04-01

    In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm to solve the coupled flow and mass transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty; thus obtaining more reliable policies on groundwater nitrate pollution control from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, from those given in water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology allows providing the trade-off between higher economic returns and reliability in meeting the environmental standards. Therefore, this new technology can help stakeholders in the decision-making process under an uncertainty environment. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.

  17. Stochastic thermodynamics for active matter

    NASA Astrophysics Data System (ADS)

    Speck, Thomas

    2016-05-01

    The theoretical understanding of active matter, which is driven out of equilibrium by directed motion, is still fragmentary and model-oriented. Stochastic thermodynamics, on the other hand, is a comprehensive theoretical framework for driven systems that allows one to define fluctuating work and heat. We apply these definitions to active matter, assuming that dissipation can be modelled by effective non-conservative forces. We show that, through the work, conjugate extensive and intensive observables can be defined even in non-equilibrium steady states lacking a free energy. As an illustration, we derive the expressions for the pressure and interfacial tension of active Brownian particles. The latter becomes negative despite the observed stable phase separation. We discuss this apparent contradiction, highlighting the role of fluctuations, and we offer a tentative explanation.

  18. Thermodynamics of stochastic Turing machines.

    PubMed

    Strasberg, Philipp; Cerrillo, Javier; Schaller, Gernot; Brandes, Tobias

    2015-10-01

    In analogy to Brownian computers we explicitly show how to construct stochastic models which mimic the behavior of a general-purpose computer (a Turing machine). Our models are discrete state systems obeying a Markovian master equation, which are logically reversible and have a well-defined and consistent thermodynamic interpretation. The resulting master equation, which describes a simple one-step process on an enormously large state space, allows us to thoroughly investigate the thermodynamics of computation for this situation. Especially in the stationary regime we can well approximate the master equation by a simple Fokker-Planck equation in one dimension. We then show that the entropy production rate at steady state can be made arbitrarily small, but the total (integrated) entropy production is finite and grows logarithmically with the number of computational steps. PMID:26565165

  19. Stochastic dynamics of dengue epidemics

    NASA Astrophysics Data System (ADS)

    de Souza, David R.; Tomé, Tânia; Pinho, Suani T. R.; Barreto, Florisneide R.; de Oliveira, Mário J.

    2013-01-01

    We use a stochastic Markovian dynamics approach to describe the spreading of vector-transmitted diseases, such as dengue, and the threshold of the disease. The coexistence space is composed of two structures representing the human and mosquito populations. The human population follows a susceptible-infected-recovered (SIR) type dynamics and the mosquito population follows a susceptible-infected-susceptible (SIS) type dynamics. The human infection is caused by infected mosquitoes and vice versa, so that the SIS and SIR dynamics are interconnected. We develop a truncation scheme to solve the evolution equations, from which we obtain the disease threshold and the reproductive ratio. The disease threshold is also obtained by performing numerical simulations. We find that for certain values of the infection rates the disease cannot spread, regardless of the death rate of infected mosquitoes.
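
    The coupled SIR/SIS structure lends itself to a Gillespie-type stochastic simulation. The sketch below implements the four elementary events under assumed, illustrative rate parameters (beta_mh, beta_hm, gamma and mu are not values from the paper), with infected mosquitoes replaced by susceptibles on death so the vector population stays constant.

      import numpy as np

      def dengue_gillespie(Sh, Ih, Rh, Sm, Im, beta_mh=0.3, beta_hm=0.3,
                           gamma=0.1, mu=0.2, t_max=500.0, seed=0):
          """Gillespie simulation of coupled human-SIR / mosquito-SIS dynamics."""
          rng = np.random.default_rng(seed)
          Nh, Nm, t = Sh + Ih + Rh, Sm + Im, 0.0
          while t < t_max and (Ih > 0 or Im > 0):
              rates = np.array([
                  beta_mh * Sh * Im / Nm,   # mosquito -> human infection
                  beta_hm * Sm * Ih / Nh,   # human -> mosquito infection
                  gamma * Ih,               # human recovery
                  mu * Im,                  # infected mosquito dies, replaced
              ])
              total = rates.sum()
              if total == 0.0:
                  break
              t += rng.exponential(1.0 / total)
              event = rng.choice(4, p=rates / total)
              if event == 0:
                  Sh, Ih = Sh - 1, Ih + 1
              elif event == 1:
                  Sm, Im = Sm - 1, Im + 1
              elif event == 2:
                  Ih, Rh = Ih - 1, Rh + 1
              else:
                  Im, Sm = Im - 1, Sm + 1
          return t, Ih, Rh, Im

      print(dengue_gillespie(Sh=990, Ih=10, Rh=0, Sm=1000, Im=0))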

  20. Stochastic sensing through covalent interactions

    DOEpatents

    Bayley, Hagan; Shin, Seong-Ho; Luchian, Tudor; Cheley, Stephen

    2013-03-26

    A system and method for stochastic sensing in which the analyte covalently bonds to the sensor element or an adaptor element. If such bonding is irreversible, the bond may be broken by a chemical reagent. The sensor element may be a protein, such as the engineered P_SH type or αHL protein pore. The analyte may be any reactive analyte, including chemical weapons, environmental toxins and pharmaceuticals. The analyte covalently bonds to the sensor element to produce a detectable signal. Possible signals include change in electrical current, change in force, and change in fluorescence. Detection of the signal allows identification of the analyte and determination of its concentration in a sample solution. Multiple analytes present in the same solution may be detected.

  1. Stochastic low Reynolds number swimmers.

    PubMed

    Golestanian, Ramin; Ajdari, Armand

    2009-05-20

    As technological advances allow us to fabricate smaller autonomous self-propelled devices, it is clear that at some point directed propulsion cannot come from pre-specified deterministic periodic deformation of the swimmer's body, and we need to develop strategies for extracting a net directed motion from a series of random transitions in the conformation space of the swimmer. We present a theoretical formulation for describing the 'stochastic motor' that drives the motion of low Reynolds number swimmers based on this concept, and use it to study the propulsion of a simple low Reynolds number swimmer, namely, the three-sphere swimmer model. When detailed balance is broken and the motor is driven out of equilibrium, it can propel the swimmer in the required direction. The formulation can be used to study optimal design strategies for molecular scale low Reynolds number swimmers.

  2. Heuristic-biased stochastic sampling

    SciTech Connect

    Bresina, J.L.

    1996-12-31

    This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering to a search heuristic often does not yield the best solution and, therefore, exploration off the heuristic path can prove fruitful. Within the HBSS approach, the balance between heuristic adherence and exploration can be controlled according to the confidence one has in the heuristic. By varying this balance, encoded as a bias function, the HBSS approach encompasses a family of search algorithms of which greedy search and completely random search are extreme members. We present empirical results from an application of HBSS to the real-world problem of observation scheduling. These results show that, with a proper bias function, HBSS can readily outperform greedy search.
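
    One HBSS decision step can be sketched directly from this description: rank the candidates with the heuristic, then select stochastically with probability proportional to a bias function of rank. The bias function and the toy scheduling example below are illustrative assumptions, not the paper's exact formulation.

      import random

      def hbss_choose(candidates, heuristic, bias=lambda rank: 1.0 / rank):
          """One HBSS decision: heuristic ranking + rank-biased random choice.

          bias(rank) = 1/rank is one member of the family; bias = 1 for all
          ranks gives completely random search, while a sharply decaying
          bias approaches pure greedy search.
          """
          ranked = sorted(candidates, key=heuristic)   # best (lowest) first
          weights = [bias(r) for r in range(1, len(ranked) + 1)]
          return random.choices(ranked, weights=weights, k=1)[0]

      # Toy observation-scheduling step, biased toward the earliest deadline.
      tasks = [("obs-A", 12), ("obs-B", 5), ("obs-C", 9)]
      print(hbss_choose(tasks, heuristic=lambda task: task[1]))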

  3. Multiscale Stochastic Simulation and Modeling

    SciTech Connect

    James Glimm; Xiaolin Li

    2006-01-10

    Acceleration-driven instabilities of fluid mixing layers include the classical cases of Rayleigh-Taylor instability, driven by a steady acceleration, and Richtmyer-Meshkov instability, driven by an impulsive acceleration. Our program starts with high-resolution numerical simulation of two (or more) distinct fluids, continues with analytical study of these solutions, and concludes with the derivation of averaged equations. A striking achievement has been the systematic agreement we obtained between simulation and experiment by using a high-resolution numerical method and improved physical modeling, with surface tension. Our study is accompanied by analysis using stochastic modeling and averaged equations for the multiphase problem. We have quantified the error and uncertainty using statistical modeling methods.

  4. On the Value Function of Weakly Coercive Problems in Nonlinear Stochastic Control

    SciTech Connect

    Motta, Monica; Sartori, Caterina

    2011-08-15

    In this paper we investigate via a dynamic programming approach some nonlinear stochastic control problems where the control set is unbounded and a classical coercivity hypothesis is replaced by some weaker assumptions. We prove that these problems can be approximated by finite fuel problems; show the continuity of the relative value functions and characterize them as unique viscosity solutions of a quasi-variational inequality with suitable boundary conditions.

  5. Shipping Cask Studies with MOX Fuel

    SciTech Connect

    Pavlovichev, A.M.

    2001-05-17

    Nuclear safety assurance for the storage and transport of fresh mixed uranium-plutonium fuel for the VVER-1000 reactor is considered in view of the introduction of three MOX lead test assemblies (LTAs) into the core. The calculations use the high-fidelity MCU code, which implements the Monte Carlo method.

  6. Stochastic inversion by ray continuation

    SciTech Connect

    Haas, A.; Viallix

    1989-05-01

    The conventional tomographic inversion consists in minimizing residuals between measured and modelled traveltimes. The process tends to be unstable and some additional constraints are required to stabilize it. The stochastic formulation generalizes the technique and sets it on firmer theoretical bases. The Stochastic Inversion by Ray Continuation (SIRC) is a probabilistic approach, which takes a priori geological information into account and uses probability distributions to characterize data correlations and errors. It makes it possible to tie uncertainties to the results. The estimated parameters are interval velocities and B-spline coefficients used to represent smoothed interfaces. Ray tracing is done by a continuation technique between source and receiver. The ray coordinates are computed from one path to the next by solving a linear system derived from Fermat's principle. The main advantages are fast computations, accurate traveltimes and derivatives. The seismic traces are gathered in CMPs. For a particular CMP, several reflecting elements are characterized by their time gradient measured on the stacked section, and related to a mean emergence direction. The program capabilities are tested on a synthetic example as well as on a field example. The strategy consists in inverting the parameters for one layer, then for the next one down. An inversion step is divided in two parts. First the parameters for the layer concerned are inverted, while the parameters for the upper layers remain fixed. Then all the parameters are reinverted. The velocity-depth section computed by the program together with the corresponding errors can be used directly for the interpretation, as an initial model for depth migration or for the complete inversion program under development.

  7. Stochastic dynamics of cancer initiation

    NASA Astrophysics Data System (ADS)

    Foo, Jasmine; Leder, Kevin; Michor, Franziska

    2011-02-01

    Most human cancer types result from the accumulation of multiple genetic and epigenetic alterations in a single cell. Once the first change (or changes) have arisen, tumorigenesis is initiated and the subsequent emergence of additional alterations drives progression to more aggressive and ultimately invasive phenotypes. Elucidation of the dynamics of cancer initiation is of importance for an understanding of tumor evolution and cancer incidence data. In this paper, we develop a novel mathematical framework to study the processes of cancer initiation. Cells at risk of accumulating oncogenic mutations are organized into small compartments of cells and proliferate according to a stochastic process. During each cell division, an (epi)genetic alteration may arise which leads to a random fitness change, drawn from a probability distribution. Cancer is initiated when a cell gains a fitness sufficiently high to escape from the homeostatic mechanisms of the cell compartment. To investigate cancer initiation during a human lifetime, a 'race' between this fitness process and the aging process of the patient is considered; the latter is modeled as a second stochastic Markov process in an aging dimension. This model allows us to investigate the dynamics of cancer initiation and its dependence on the mutational fitness distribution. Our framework also provides a methodology to assess the effects of different life expectancy distributions on lifetime cancer incidence. We apply this methodology to colorectal tumorigenesis while considering life expectancy data of the US population to inform the dynamics of the aging process. We study how the probability of cancer initiation prior to death, the time until cancer initiation, and the mutational profile of the cancer-initiating cell depend on the shape of the mutational fitness distribution and the life expectancy of the population.

  8. Intrinsic noise analyzer: a software package for the exploration of stochastic biochemical kinetics using the system size expansion.

    PubMed

    Thomas, Philipp; Matuschek, Hannes; Grima, Ramon

    2012-01-01

    The accepted stochastic descriptions of biochemical dynamics under well-mixed conditions are given by the Chemical Master Equation and the Stochastic Simulation Algorithm, which are equivalent. The latter is a Monte-Carlo method, which, despite enjoying broad availability in a large number of existing software packages, is computationally expensive due to the huge amount of ensemble averaging required for obtaining accurate statistical information. The former is a set of coupled differential-difference equations for the probability of the system being in any one of the possible mesoscopic states; these equations are typically computationally intractable because of the inherently large state space. Here we introduce the software package intrinsic Noise Analyzer (iNA), which allows for systematic analysis of stochastic biochemical kinetics by means of van Kampen's system size expansion of the Chemical Master Equation. iNA is platform independent and supports the popular SBML format natively. The present implementation is the first to adopt a complementary approach that combines state-of-the-art analysis tools using the computer algebra system GiNaC with traditional methods of stochastic simulation. iNA integrates two approximation methods based on the system size expansion, the Linear Noise Approximation and effective mesoscopic rate equations, which to date have not been available to non-expert users, into an easy-to-use graphical user interface. In particular, the present methods allow for quick approximate analysis of time-dependent mean concentrations, variances, covariances and correlation coefficients, which typically outperforms stochastic simulations. These analytical tools are complemented by automated multi-core stochastic simulations with direct statistical evaluation and visualization. We showcase iNA's performance by using it to explore the stochastic properties of cooperative and non-cooperative enzyme kinetics and a gene network associated with

  9. Multiple Stochastic Point Processes in Gene Expression

    NASA Astrophysics Data System (ADS)

    Murugan, Rajamanickam

    2008-04-01

    We generalize the idea of multiple-stochasticity in chemical reaction systems to gene expression. Using a Chemical Langevin Equation approach we investigate how this multiple-stochasticity can influence the overall fluctuations in molecular numbers. We show that the main sources of this multiple-stochasticity in gene expression are the randomness in transcription and translation initiation times, which in turn originates from the underlying bio-macromolecular recognition processes such as site-specific DNA-protein interactions, and which therefore can be internally regulated by supra-molecular structural factors such as the condensation/super-coiling of DNA. Our theory predicts that (1) in the gene expression system, the variance (φ) introduced by the randomness in transcription and translation initiation times approximately scales with the degree of condensation (s) of DNA or mRNA as φ ∝ s^{-6}. From the theoretical analysis of the Fano factor as well as the coefficient of variation associated with the protein number fluctuations we predict that (2) unlike the singly stochastic case, where the Fano factor has been shown to be a monotonous function of the translation rate, in multiple-stochastic gene expression the Fano factor is a turnover function with a definite minimum. This in turn suggests that multiple-stochastic processes can also be tuned to behave like singly stochastic point processes by adjusting the rate parameters.

  10. Scenario tree reduction in stochastic programming with recourse for hydropower operations

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Zhong, Ping-An; Zambon, Renato C.; Zhao, Yunfa; Yeh, William W.-G.

    2015-08-01

    A stochastic programming with recourse model requires the consequences of recourse actions be modeled for all possible realizations of the stochastic variables. Continuous stochastic variables are approximated by scenario trees. This paper evaluates the impact of scenario tree reduction on model performance for hydropower operations and suggests procedures to determine the optimal level of scenario tree reduction. We first establish a stochastic programming model for the optimal operation of a cascaded system of reservoirs for hydropower production. We then use the neural gas method to generate scenario trees and employ a Monte Carlo method to systematically reduce the scenario trees. We conduct in-sample and out-of-sample tests to evaluate the impact of scenario tree reduction on the objective function of the hydropower optimization model. We then apply a statistical hypothesis test to determine the significance of the impact due to scenario tree reduction. We develop a stochastic programming with recourse model and apply it to real-time operation for hydropower production to determine the loss in solution accuracy due to scenario tree reduction. We apply the proposed methodology to the Qingjiang cascade system of reservoirs in China. The results show: (1) the neural gas method preserves the mean value of the original streamflow series but introduces bias to variance, cross variance, and lag-one covariance due to information loss when the original tree is systematically reduced; (2) reducing the scenario number by as much as 40% results in insignificant change in the objective function and solution quality, but significantly reduces computational demand.

  11. A one-time truncate and encode multiresolution stochastic framework

    NASA Astrophysics Data System (ADS)

    Abgrall, R.; Congedo, P. M.; Geraci, G.

    2014-01-01

    In this work a novel adaptive strategy for stochastic problems, inspired by the classical Harten framework, is presented. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme, and handles a large class of problems, from unsteady to discontinuous solutions. Its formulation makes it possible to recover the same results concerning the interpolation theory of the classical multiresolution approach, but with an extension to uncertainty quantification problems. The present strategy permits building numerical schemes with higher accuracy than other classical uncertainty quantification techniques, with a strong reduction in computational cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time-varying ones, without introducing further complications into the algorithm. The advantages of the present strategy are demonstrated on several numerical problems in which different forms of uncertainty distribution are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan-Orszag problem are reported in terms of accuracy and convergence. Finally, a two-degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model allows recovering some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. The injected uncertainty is chosen so as to obtain a complete parameterization of the mass matrix. All the numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.

  12. Analysis of system drought for Manitoba Hydro using stochastic methods

    NASA Astrophysics Data System (ADS)

    Akintug, Bertug

    Stochastic time series models are commonly used in the analysis of large-scale water resources systems. In the stochastic approach, synthetic flow scenarios are generated and used for the analysis of complex events such as multi-year droughts. Conclusions drawn from such analyses are only plausible to the extent that the underlying time series model realistically represents the natural variability of flows. Traditionally, hydrologists have favoured autoregressive moving average (ARMA) models to describe annual flows. In this research project, a class of model called Markov-Switching (MS) model (also referred to as a Hidden Markov model) is presented as an alternative to conventional ARMA models. The basic assumption underlying this model is that a limited number of flow regimes exists and that each flow year can be classified as belonging to one of these regimes. The persistence of and switching between regimes is described by a Markov chain. Within each regime, it is assumed that annual flows follow a normal distribution with mean and variance that depend on the regime. The simplicity of this model makes it possible to derive a number of model characteristics analytically such as moments, autocorrelation, and crosscorrelation. Model estimation is possible with the maximum likelihood method implemented using the Expectation Maximization (EM) algorithm. The uncertainty in the model parameters can be assessed through Bayesian inference using Markov Chain Monte Carlo (MCMC) methods. A Markov-Switching disaggregation (MSD) model is also proposed in this research project to disaggregate higher-level flows generated using the MS model into lower-level flows. The MSD model preserves the additivity property because for a given year both the higher-level and lower-level variables are generated from normal distributions. The 2-state MS and MSD models are applied to Manitoba Hydro's system along with more conventional first order autoregressive and disaggregation models and
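
    The generative core of a 2-state MS model is compact: a Markov chain selects the flow regime each year, and the annual flow is drawn from a regime-specific normal distribution. The transition matrix and regime statistics below are illustrative assumptions, not fitted Manitoba Hydro values.

      import numpy as np

      def ms_flows(n_years, P, means, sds, seed=0):
          """Annual flows from a 2-state Markov-switching (hidden Markov) model."""
          rng = np.random.default_rng(seed)
          flows, regime = np.empty(n_years), 0
          for year in range(n_years):
              regime = rng.choice(2, p=P[regime])          # regime persistence
              flows[year] = rng.normal(means[regime], sds[regime])
          return flows

      P = np.array([[0.9, 0.1],    # wet regime: persistent
                    [0.2, 0.8]])   # dry regime: slightly less persistent
      flows = ms_flows(1000, P, means=[100.0, 60.0], sds=[15.0, 10.0])
      print(flows.mean(), flows.std())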

  14. Detection methods for non-Gaussian gravitational wave stochastic backgrounds

    NASA Astrophysics Data System (ADS)

    Drasco, Steve; Flanagan, Éanna É.

    2003-04-01

    A gravitational wave stochastic background can be produced by a collection of independent gravitational wave events. There are two classes of such backgrounds, one for which the ratio of the average time between events to the average duration of an event is small (i.e., many events are on at once), and one for which the ratio is large. In the first case the signal is continuous, sounds something like a constant hiss, and has a Gaussian probability distribution. In the second case, the discontinuous or intermittent signal sounds something like popcorn popping, and is described by a non-Gaussian probability distribution. In this paper we address the issue of finding an optimal detection method for such a non-Gaussian background. As a first step, we examine the idealized situation in which the event durations are short compared to the detector sampling time, so that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. For this situation we derive an appropriate version of the maximum likelihood detection statistic. We compare the performance of this statistic to that of the standard cross-correlation statistic both analytically and with Monte Carlo simulations. In general the maximum likelihood statistic performs better than the cross-correlation statistic when the stochastic background is sufficiently non-Gaussian, resulting in a gain factor in the minimum gravitational-wave energy density necessary for detection. This gain factor ranges roughly between 1 and 3, depending on the duty cycle of the background, for realistic observing times and signal strengths for both ground and space based detectors. The computational cost of the statistic, although significantly greater than that of the cross-correlation statistic, is not unreasonable. Before the statistic can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned

  15. Second Cancers After Fractionated Radiotherapy: Stochastic Population Dynamics Effects

    NASA Technical Reports Server (NTRS)

    Sachs, Rainer K.; Shuryak, Igor; Brenner, David; Fakir, Hatim; Hahnfeldt, Philip

    2007-01-01

    When ionizing radiation is used in cancer therapy it can induce second cancers in nearby organs. Mainly due to longer patient survival times, these second cancers have become of increasing concern. Estimating the risk of solid second cancers involves modeling: because of long latency times, available data is usually for older, obsolescent treatment regimens. Moreover, modeling second cancers gives unique insights into human carcinogenesis, since the therapy involves administering well characterized doses of a well studied carcinogen, followed by long-term monitoring. In addition to putative radiation initiation that produces pre-malignant cells, inactivation (i.e. cell killing), and subsequent cell repopulation by proliferation can be important at the doses relevant to second cancer situations. A recent initiation/inactivation/proliferation (IIP) model characterized quantitatively the observed occurrence of second breast and lung cancers, using a deterministic cell population dynamics approach. To analyze whether radiation-initiated pre-malignant clones become extinct before full repopulation can occur, we here give a stochastic version of this IIP model. Combining Monte Carlo simulations with standard solutions for time-inhomogeneous birth-death equations, we show that repeated cycles of inactivation and repopulation, as occur during fractionated radiation therapy, can lead to distributions of pre-malignant cells per patient with variance >> mean, even when pre-malignant clones are Poisson-distributed. Thus fewer patients would be affected, but with a higher probability, than a deterministic model tracking average pre-malignant cell numbers would predict. Our results are applied to data on breast cancers after radiotherapy for Hodgkin disease. The stochastic IIP analysis, unlike the deterministic one, indicates: a) initiated, pre-malignant cells can have a growth advantage during repopulation, not just during the longer tumor latency period that follows; b) weekend

  16. Solving stochastic epidemiological models using computer algebra

    NASA Astrophysics Data System (ADS)

    Hincapie, Doracelly; Ospina, Juan

    2011-06-01

    Mathematical modeling in Epidemiology is an important tool to understand the ways in which diseases are transmitted and controlled. Mathematical modeling can be implemented via deterministic or stochastic models. Deterministic models are based on small systems of non-linear ordinary differential equations, while stochastic models are based on very large systems of linear differential equations. Deterministic models admit complete, rigorous and automatic analysis of both local and global stability, from which it is possible to derive algebraic expressions for the basic reproduction number and the corresponding epidemic thresholds using computer algebra software. Stochastic models are more difficult to treat, and the analysis of their properties requires complicated considerations in statistical mathematics. In this work we propose to use computer algebra software to solve epidemic stochastic models such as the SIR model and the carrier-borne model. Specifically, we use Maple to solve these stochastic models in the case of small groups, and we obtain results that do not appear in standard textbooks or in recent books on stochastic models in epidemiology. From our results we derive expressions which coincide with those obtained in the classical texts using advanced procedures in mathematical statistics. Our algorithms can be extended to other stochastic models in epidemiology, which shows the power of computer algebra software not only for the analysis of deterministic models but also for the analysis of stochastic ones. We also perform numerical simulations with our algebraic results and estimate basic parameters such as the basic reproduction number, and we illustrate the stochastic threshold theorem. We claim that our algorithms and results are important tools for controlling diseases in a globalized world.
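
    The flavour of the symbolic workflow can be reproduced with any computer algebra system. The sketch below uses SymPy as a stand-in for Maple on the deterministic SIR skeleton, recovering the threshold condition R_0 = beta/gamma = 1; it is a toy illustration of the approach, not the authors' code for the stochastic carrier-borne model.

      import sympy as sp

      # Infection equation linearized at the disease-free state S = N:
      # dI/dt = (beta - gamma) * I
      beta, gamma, t = sp.symbols('beta gamma t', positive=True)
      I = sp.Function('I')

      sol = sp.dsolve(sp.Eq(I(t).diff(t), (beta - gamma) * I(t)), I(t))
      print(sol)                              # I(t) = C1*exp(t*(beta - gamma))

      # The epidemic grows iff beta > gamma, i.e. R0 = beta/gamma > 1;
      # the threshold itself is where R0 = 1.
      print(sp.solve(sp.Eq(beta / gamma, 1), beta))   # [gamma]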

  17. Large scale stochastic spatio-temporal modelling with PCRaster

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model builders as Python functions. The software comes with Python framework classes providing control flow for spatio-temporal modelling, Monte Carlo simulation, and data assimilation (Ensemble Kalman Filter and Particle Filter). Models are built by combining the spatial operations in these framework classes. This approach enables modellers without specialist programming experience to construct large, rather complicated models, as many technical details of modelling (e.g., data storage, solving spatial operations, data assimilation algorithms) are taken care of by the PCRaster toolbox. Exploratory modelling is supported by routines for prompt, interactive visualisation of stochastic spatio-temporal data generated by the models. The high computational requirements for stochastic spatio-temporal modelling, and an increasing demand to run models over large areas at high resolution, e.g. in global hydrological modelling, require an optimal use of available, heterogeneous computing resources by the modelling framework. Current work in the context of the eWaterCycle project is on a parallel implementation of the modelling engine, capable of running on a high-performance computing infrastructure such as clusters and supercomputers. Model runs will be distributed over multiple compute nodes and multiple processors (GPUs and CPUs). Parallelization will be done by parallel execution of Monte Carlo realizations and of subregions of the modelling domain. In our approach we use multiple levels of parallelism, improving scalability considerably. On the node level we will use OpenCL, the industry standard for low-level high performance computing kernels. To combine multiple nodes we will use

  18. Monte Carlo Simulations for Radiobiology

    NASA Astrophysics Data System (ADS)

    Ackerman, Nicole; Bazalova, Magdalena; Chang, Kevin; Graves, Edward

    2012-02-01

    The relationship between tumor response and radiation is currently modeled as dose, quantified on the mm or cm scale through measurement or simulation. This does not take into account modern knowledge of cancer, including tissue heterogeneities and repair mechanisms. We perform Monte Carlo simulations utilizing Geant4 to model radiation treatment on a cellular scale. Biological measurements are correlated to simulated results, primarily the energy deposit in nuclear volumes. One application is modeling dose enhancement through the use of high-Z materials, such as gold nanoparticles. The model matches in vitro data and predicts dose enhancement ratios for a variety of in vivo scenarios. This model shows promise for both treatment design and furthering our understanding of radiobiology.

  19. Structural mapping of Maxwell Montes

    NASA Technical Reports Server (NTRS)

    Keep, Myra; Hansen, Vicki L.

    1993-01-01

    Four sets of structures were mapped in the western and southern portions of Maxwell Montes. An early north-trending set of penetrative lineaments is cut by dominant, spaced ridges and paired valleys that trend northwest. To the south, the ridges and valleys splay, and graben form in the valleys. The spaced ridges and graben are cut by northeast-trending graben. The northwest-trending graben formed synchronously with or slightly later than the spaced ridges. Formation of the northeast-trending graben may have overlapped with that of the northwest-trending graben, but occurred in a spatially distinct area (regions of 2 deg slope). Graben formation, with northwest-southeast extension, may be related to gravity sliding. Individually and collectively these structures are too small to support the immense topography of Maxwell Montes, and are interpreted as parasitic features above a larger mass that supports the mountain belt.

  20. Stochastic system identification in structural dynamics

    USGS Publications Warehouse

    Safak, Erdal

    1988-01-01

    Recently, new identification methods have been developed by using the concept of optimal-recursive filtering and stochastic approximation. These methods, known as stochastic identification, are based on the statistical properties of the signal and noise, and do not require the assumptions of current methods. The criterion for stochastic system identification is that the difference between the recorded output and the output from the identified system (i.e., the residual of the identification) should be equal to white noise. In this paper, first a brief review of the theory is given. Then, an application of the method is presented by using ambient vibration data from a nine-story building.
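    The whiteness criterion above (identification residuals should be indistinguishable from white noise) can be checked with a simple sample-autocorrelation test. A minimal sketch, with synthetic data standing in for real building records:

```python
import numpy as np

def looks_white(residual, max_lag=20):
    """Crude whiteness check: sample autocorrelations up to max_lag should
    stay inside the +/- 1.96/sqrt(N) band expected for white noise."""
    r = np.asarray(residual, dtype=float)
    r = r - r.mean()
    n = len(r)
    var = np.dot(r, r) / n
    acf = np.array([np.dot(r[:n - k], r[k:]) / (n * var)
                    for k in range(1, max_lag + 1)])
    return bool(np.all(np.abs(acf) < 1.96 / np.sqrt(n))), acf

rng = np.random.default_rng(0)
ok, _ = looks_white(rng.standard_normal(2000))
print("residual consistent with white noise:", ok)
```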

  1. Immigration-extinction dynamics of stochastic populations

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch; Ovaskainen, Otso

    2013-07-01

    How high should the rate of immigration into a stochastic population be in order to significantly reduce the probability of observing the population go extinct? Is there any relation between the population size distributions with and without immigration? Under what conditions can one justify the simple patch-occupancy models, which ignore the population distribution and its dynamics in a patch and treat a patch simply as either occupied or empty? We answer these questions by exactly solving a simple stochastic model obtained by adding steady immigration to a variant of the Verhulst model: a prototypical model of an isolated stochastic population.
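    The effect of immigration on the population size distribution can be illustrated numerically for a generic birth-death chain. A minimal sketch; the logistic-type rates below are illustrative, not the exact Verhulst variant solved in the paper.

```python
import numpy as np

def stationary_birth_death(K, b, d, imm):
    """Stationary distribution of a birth-death chain on {0,...,K} via
    detailed balance: pi[n+1]/pi[n] = birth_rate(n)/death_rate(n+1)."""
    birth = lambda n: imm + b * n * (1.0 - n / K)  # logistic births + immigration
    death = lambda n: d * n
    pi = np.zeros(K + 1)
    pi[0] = 1.0
    for n in range(K):
        pi[n + 1] = pi[n] * birth(n) / death(n + 1)
    return pi / pi.sum()

# Without immigration the chain is absorbed at n = 0; even weak immigration
# shifts essentially all stationary mass away from the extinct state.
for imm in (0.0, 0.5, 5.0):
    pi = stationary_birth_death(K=100, b=1.0, d=0.5, imm=imm)
    print(f"immigration rate {imm:4.1f}:  P(n = 0) = {pi[0]:.3e}")
```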

  2. Connecting deterministic and stochastic metapopulation models.

    PubMed

    Barbour, A D; McVinish, R; Pollett, P K

    2015-12-01

    In this paper, we study the relationship between certain stochastic and deterministic versions of Hanski's incidence function model and the spatially realistic Levins model. We show that the stochastic version can be well approximated in a certain sense by the deterministic version when the number of habitat patches is large, provided that the presence or absence of individuals in a given patch is influenced by a large number of other patches. Explicit bounds on the deviation between the stochastic and deterministic models are given. PMID:25735440

  3. Stochastic deformation of a thermodynamic symplectic structure.

    PubMed

    Kazinski, P O

    2009-01-01

    A stochastic deformation of a thermodynamic symplectic structure is studied. The stochastic deformation is analogous to the deformation of an algebra of observables such as deformation quantization, but for an imaginary deformation parameter (the Planck constant). Gauge symmetries of thermodynamics and corresponding stochastic mechanics, which describes fluctuations of a thermodynamic system, are revealed and gauge fields are introduced. A physical interpretation to the gauge transformations and gauge fields is given. An application of the formalism to a description of systems with distributed parameters in a local thermodynamic equilibrium is considered.

  4. Stochastic deformation of a thermodynamic symplectic structure

    NASA Astrophysics Data System (ADS)

    Kazinski, P. O.

    2009-01-01

    A stochastic deformation of a thermodynamic symplectic structure is studied. The stochastic deformation is analogous to the deformation of an algebra of observables such as deformation quantization, but for an imaginary deformation parameter (the Planck constant). Gauge symmetries of thermodynamics and corresponding stochastic mechanics, which describes fluctuations of a thermodynamic system, are revealed and gauge fields are introduced. A physical interpretation to the gauge transformations and gauge fields is given. An application of the formalism to a description of systems with distributed parameters in a local thermodynamic equilibrium is considered.

  5. Stochastic string models with continuous semimartingales

    NASA Astrophysics Data System (ADS)

    Bueno-Guerrero, Alberto; Moreno, Manuel; Navas, Javier F.

    2015-09-01

    This paper reformulates the stochastic string model of Santa-Clara and Sornette using stochastic calculus with continuous semimartingales. We present some new results, such as: (a) the dynamics of the short-term interest rate, (b) the PDE that must be satisfied by the bond price, and (c) an analytic expression for the price of a European bond call option. Additionally, we clarify some important features of the stochastic string model, show its relevance for pricing derivatives, and establish its equivalence with an infinite-dimensional HJM model for pricing European options.

  6. Quantum Monte Carlo Algorithms for Diagrammatic Vibrational Structure Calculations

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew; Hirata, So

    2015-06-01

    Convergent hierarchies of theories for calculating many-body vibrational ground and excited-state wave functions, such as Møller-Plesset perturbation theory or coupled cluster theory, tend to rely on matrix-algebraic manipulations of large, high-dimensional arrays of anharmonic force constants, tasks which require large amounts of computer storage space and which are very difficult to implement in a parallel-scalable fashion. On the other hand, existing quantum Monte Carlo (QMC) methods for vibrational wave functions tend to lack robust techniques for obtaining excited-state energies, especially for large systems. By exploiting analytical identities for matrix elements of position operators in a harmonic oscillator basis, we have developed stochastic implementations of the size-extensive vibrational self-consistent field (MC-XVSCF) and size-extensive vibrational Møller-Plesset second-order perturbation (MC-XVMP2) theories which do not require storing the potential energy surface (PES). The programmable equations of MC-XVSCF and MC-XVMP2 take the form of a small number of high-dimensional integrals evaluated using Metropolis Monte Carlo techniques. The associated integrands require independent evaluations of only the value, not the derivatives, of the PES at many points, a task which is trivial to parallelize. However, unlike existing vibrational QMC methods, MC-XVSCF and MC-XVMP2 can calculate anharmonic frequencies directly, rather than as a small difference between two noisy total energies, and do not require user-selected coordinates or nodal surfaces. MC-XVSCF and MC-XVMP2 can also directly sample the PES in a given approximation without analytical or grid-based approximations, enabling us to quantify the errors induced by such approximations.

  7. Multi-element stochastic spectral projection for high quantile estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jordan; Garnier, Josselin

    2013-06-01

    We investigate quantile estimation by a multi-element generalized Polynomial Chaos (gPC) metamodel, where the exact numerical model is approximated by complementary metamodels in overlapping domains that mimic the model's exact response. The gPC metamodel is constructed by the non-intrusive stochastic spectral projection approach, and function evaluation on the gPC metamodel can be considered essentially free. Thus, a large number of Monte Carlo samples from the metamodel can be used to estimate the α-quantile, for moderate values of α. As the gPC metamodel is an expansion about the means of the inputs, its accuracy may worsen away from these mean values, where the extreme events may occur. By increasing the approximation accuracy of the metamodel we may eventually improve the accuracy of quantile estimation, but doing so is very expensive. A multi-element approach is therefore proposed by combining a global metamodel in the standard normal space with supplementary local metamodels constructed in bounded domains about the design points corresponding to the extreme events. To improve the accuracy and to minimize the sampling cost, sparse-tensor and anisotropic-tensor quadratures are tested in addition to the full-tensor Gauss quadrature in the construction of local metamodels; different bounds of the gPC expansion are also examined. The global and local metamodels are combined in the multi-element gPC (MEgPC) approach, and it is shown that MEgPC can be more accurate than Monte Carlo or importance sampling methods for high-quantile estimation for input dimensions roughly below N=8, a limit that is very much case- and α-dependent.
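    The basic workflow, heavy sampling of a cheap metamodel to estimate a quantile, can be sketched on a toy one-dimensional problem, with a least-squares polynomial surrogate standing in for the gPC expansion; all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
model = lambda x: np.exp(0.8 * x) + 0.1 * x**3   # stand-in for the expensive model

# Fit a cheap polynomial surrogate from a few "expensive" evaluations
x_train = rng.standard_normal(50)
surrogate = np.poly1d(np.polyfit(x_train, model(x_train), deg=6))

# Estimate a high quantile by massive sampling of the (essentially free) surrogate
x_mc = rng.standard_normal(1_000_000)
q99_surr = np.quantile(surrogate(x_mc), 0.99)
q99_true = np.quantile(model(x_mc), 0.99)   # reference; normally unaffordable
print(f"0.99-quantile: surrogate {q99_surr:.3f} vs direct {q99_true:.3f}")
```

    As the abstract notes, a global surrogate degrades in the tails, which is exactly what the supplementary local metamodels are meant to repair.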

  8. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    SciTech Connect

    Gorji, M. Hossein Andric, Nemanja; Jenny, Patrick

    2015-08-15

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we develop a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows are considered. For these test cases, we demonstrate that variance reduction based on parallel processes is very robust and effective.
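    The core construction, an auxiliary process with a known solution driven by the same noise so that errors cancel, is a control-variate idea. A minimal sketch on an Ornstein-Uhlenbeck toy problem (not the Fokker–Planck gas-flow solver itself; all coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps, n_paths = 0.01, 200, 5000
th_main, th_aux = 1.0, 1.2          # the auxiliary rate gives a known mean

x = np.ones(n_paths)                # main process (mean to be estimated)
y = np.ones(n_paths)                # auxiliary process, E[y(t)] = exp(-th_aux * t)
for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal(n_paths)   # shared noise -> correlation
    x += -th_main * x * dt + 0.5 * dw
    y += -th_aux * y * dt + 0.5 * dw

t = n_steps * dt
plain = x.mean()
controlled = (x - y).mean() + np.exp(-th_aux * t)     # correlated estimator
print(f"exact mean {np.exp(-th_main * t):.4f}")
print(f"plain      {plain:.4f}  (std err {x.std() / np.sqrt(n_paths):.1e})")
print(f"controlled {controlled:.4f}  (std err {(x - y).std() / np.sqrt(n_paths):.1e})")
```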

  9. COMET-PE as an Alternative to Monte Carlo for Photon and Electron Transport

    NASA Astrophysics Data System (ADS)

    Hayward, Robert M.; Rahnema, Farzad

    2014-06-01

    Monte Carlo methods are a central component of radiotherapy treatment planning, shielding design, detector modeling, and other applications. Long calculation times, however, can limit the usefulness of these purely stochastic methods. The coarse mesh method for photon and electron transport (COMET-PE) provides an attractive alternative. By combining stochastic pre-computation with a deterministic solver, COMET-PE achieves accuracy comparable to Monte Carlo methods in only a fraction of the time. The method's implementation has been extended to 3D, and in this work, it is validated by comparison to DOSXYZnrc using a photon radiotherapy benchmark. The comparison demonstrates excellent agreement; of the voxels that received more than 10% of the maximum dose, over 97.3% pass a 2%/2 mm acceptance test and over 99.7% pass a 3%/3 mm test. Furthermore, the method is over an order of magnitude faster than DOSXYZnrc and is able to take advantage of both distributed-memory and shared-memory parallel architectures for increased performance.

  10. Variance reduction for Fokker-Planck based particle Monte Carlo schemes

    NASA Astrophysics Data System (ADS)

    Gorji, M. Hossein; Andric, Nemanja; Jenny, Patrick

    2015-08-01

    Recently, Fokker-Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1-3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker-Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker-Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we develop a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows are considered. For these test cases, we demonstrate that variance reduction based on parallel processes is very robust and effective.

  11. Stochastic pump effect and geometric phases in dissipative and stochastic systems

    SciTech Connect

    Sinitsyn, Nikolai

    2008-01-01

    The success of Berry phases in quantum mechanics stimulated the study of similar phenomena in other areas of physics, including the theory of living cell locomotion and motion of patterns in nonlinear media. More recently, geometric phases have been applied to systems operating in a strongly stochastic environment, such as molecular motors. We discuss such geometric effects in purely classical dissipative stochastic systems and their role in the theory of the stochastic pump effect (SPE).

  12. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    SciTech Connect

    Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.

    2015-09-15

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
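    The third ingredient, shifting Monte Carlo work onto cheap approximations through a telescoping sum, can be sketched generically. The toy model hierarchy below stands in for the reduced-basis/HDG levels; it illustrates the multilevel estimator, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(level, x):
    """Toy fidelity hierarchy: the bias term shrinks as the level grows."""
    return np.sin(x) + 2.0 ** -level * np.cos(3.0 * x)

# E[P_2] = E[P_0] + E[P_1 - P_0] + E[P_2 - P_1]: many cheap samples at level 0,
# progressively fewer at the expensive correction levels.
n_samples = [100_000, 10_000, 1_000]
estimate = 0.0
for level, n in enumerate(n_samples):
    x = rng.standard_normal(n)
    fine = model(level, x)
    coarse = model(level - 1, x) if level > 0 else 0.0
    estimate += np.mean(fine - coarse)
print(f"multilevel estimate of E[model(2, X)]: {estimate:.4f}")
```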

  13. A genetic-algorithm-aided stochastic optimization model for regional air quality management under uncertainty.

    PubMed

    Qin, Xiaosheng; Huang, Guohe; Liu, Lei

    2010-01-01

    A genetic-algorithm-aided stochastic optimization (GASO) model was developed in this study for supporting regional air quality management under uncertainty. The model incorporated genetic algorithm (GA) and Monte Carlo simulation techniques into a general stochastic chance-constrained programming (CCP) framework and allowed uncertainties in simulation and optimization model parameters to be considered explicitly in the design of least-cost strategies. GA was used to seek the optimal solution of the management model by progressively evaluating the performances of individual solutions. Monte Carlo simulation was used to check the feasibility of each solution. A management problem in terms of regional air pollution control was studied to demonstrate the applicability of the proposed method. Results of the case study indicated that the proposed model could effectively incorporate uncertainties into the optimization process and generate solutions containing a spectrum of potential air pollutant treatment options with risk and cost information. Decision alternatives could be obtained by analyzing tradeoffs between the overall pollutant treatment cost and the system-failure risk due to inherent uncertainties.
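    The coupling described, an evolutionary search whose fitness evaluation calls Monte Carlo simulation to check a chance constraint, can be sketched on a toy one-variable problem. The emission model, cost function, constraint level, and GA settings below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def feasible_prob(x, n_mc=2000):
    """Monte Carlo estimate of P(emission(x) <= limit) under a noisy model."""
    emissions = 10.0 / (1.0 + x) * rng.lognormal(0.0, 0.3, n_mc)
    return np.mean(emissions <= 5.0)

def fitness(x, alpha=0.95):
    cost = 2.0 * x                          # treatment cost grows with effort x
    ok = feasible_prob(x) >= alpha          # chance constraint at level alpha
    return cost if ok else cost + 1e6       # infeasible designs are penalized

# Minimal GA: elitist selection plus Gaussian mutation on scalar designs
pop = rng.uniform(0.0, 10.0, 40)
for _ in range(30):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)][:10]
    children = np.repeat(parents, 3) + rng.normal(0.0, 0.3, 30)
    pop = np.clip(np.concatenate([parents, children]), 0.0, 10.0)

best = min(pop, key=fitness)
print(f"least-cost effort {best:.2f},  P(feasible) ~ {feasible_prob(best):.3f}")
```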

  14. An efficient distribution method for nonlinear transport problems in stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, F.; Tchelepi, H.; Meyer, D. W.

    2015-12-01

    Because geophysical data are inevitably sparse and incomplete, stochastic treatments of simulated responses are convenient for exploring possible scenarios and assessing risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational cost (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, information that is rarely available with other approaches, yet crucial, such as the probability of rare events and saturation quantiles (e.g. P10, P50 and P90), can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.
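    The last step of such an approach, pushing the time-of-flight distribution through a monotone saturation mapping and reading statistics off the samples, can be sketched as follows. The lognormal time of flight and the mapping below are illustrative placeholders for the analytically derived ones.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-ins: time-of-flight samples (cheap to generate) and a monotone
# mapping tau -> water saturation at a fixed location and time
tof = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
saturation = lambda tau: 1.0 / (1.0 + tau)

s = saturation(tof)
p10, p50, p90 = np.percentile(s, [10, 50, 90])
print(f"saturation: mean {s.mean():.3f}, std {s.std():.3f}")
print(f"P10 = {p10:.3f}, P50 = {p50:.3f}, P90 = {p90:.3f}")
print(f"P(rare event: s > 0.9) = {np.mean(s > 0.9):.4f}")
```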

  15. Stochastic Optimal Control for Series Hybrid Electric Vehicles

    SciTech Connect

    Malikopoulos, Andreas

    2013-01-01

    Increasing demand for improving fuel economy and reducing emissions has stimulated significant research and investment in hybrid propulsion systems. In this paper, we address the problem of optimizing online the supervisory control in a series hybrid configuration by modeling its operation as a controlled Markov chain using the average cost criterion. We treat the stochastic optimal control problem as a dual constrained optimization problem. We show that the control policy that assigns higher probability to the states with low cost and lower probability to the states with high cost is an optimal control policy, defined as an equilibrium control policy. We demonstrate the effectiveness and efficiency of the proposed controller in a series hybrid configuration and compare it with a thermostat-type controller.

  16. Turbulent hydrocarbon combustion kinetics - Stochastic modeling and verification

    NASA Technical Reports Server (NTRS)

    Wang, T. S.; Farmer, R. C.; Tucker, Kevin

    1989-01-01

    Idealized reactors, which are designed to ensure perfect mixing and are used to generate combustion kinetics for complex hydrocarbon fuels, may depart from the ideal and influence kinetics model performance. A complex hydrocarbon kinetics model established by modeling a jet-stirred combustor (JSC) as a perfectly stirred reactor (PSR) is reevaluated with a simple stochastic process in order to introduce the unmixedness effect quantitatively into the reactor system. It is shown that comparisons between predictions and experimental data improve dramatically when the unmixedness effect is included in the rich combustion region. The complex hydrocarbon kinetics is thereby verified to be free of mixing effects and applicable to general reacting-flow calculations.

  17. Assessing predictability of a hydrological stochastic-dynamical system

    NASA Astrophysics Data System (ADS)

    Gelfan, Alexander

    2014-05-01

    to those of the corresponding series of the actual data measured at the station. Beginning from the initial conditions and forced by Monte Carlo-generated synthetic meteorological series, the model simulated diverging trajectories of soil moisture characteristics (water content of the soil column, moisture of different soil layers, etc.). The limit of predictability of a specific characteristic was determined as the time at which the variance of that characteristic across the trajectories stabilizes as they move away from the initial state. Numerical experiments were carried out with the stochastic-dynamical model to analyze the sensitivity of the soil moisture predictability assessments to uncertainty in the initial conditions and to determine the effects of the soil hydraulic properties and of soil freezing on the predictability. It was found, in particular, that soil water content predictability is sensitive to errors in the initial conditions and strongly depends on the hydraulic properties of the soil under both unfrozen and frozen conditions. Even if the initial conditions are well established, the assessed predictability of the water content of unfrozen soil does not exceed 30-40 days, while for frozen conditions it may be as long as 3-4 months. The latter creates an opportunity to use the autumn soil water content as a predictor of spring snowmelt runoff in the region under consideration.

  18. Bootstrap performance profiles in stochastic algorithms assessment

    SciTech Connect

    Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro

    2015-03-10

    Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
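    The bootstrap step itself, resampling a small set of runs to approximate the sampling distribution of any statistic, takes only a few lines. A minimal sketch with made-up objective values for two solvers (the numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
runs = {"A": np.array([1.2, 1.5, 1.1, 2.8, 1.3, 1.4, 1.6, 1.2]),
        "B": np.array([1.0, 2.5, 0.9, 3.1, 1.1, 2.9, 1.0, 2.7])}

def bootstrap(stat, data, n_boot=10_000):
    """Sampling distribution of `stat` by resampling runs with replacement."""
    idx = rng.integers(0, len(data), size=(n_boot, len(data)))
    return np.apply_along_axis(stat, 1, data[idx])

for name, x in runs.items():
    dist = bootstrap(np.median, x)
    lo, hi = np.percentile(dist, [2.5, 97.5])
    print(f"solver {name}: median {np.median(x):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```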

  19. Perspective: Stochastic algorithms for chemical kinetics

    NASA Astrophysics Data System (ADS)

    Gillespie, Daniel T.; Hellander, Andreas; Petzold, Linda R.

    2013-05-01

    We outline our perspective on stochastic chemical kinetics, paying particular attention to numerical simulation algorithms. We first focus on dilute, well-mixed systems, whose description using ordinary differential equations has served as the basis for traditional chemical kinetics for the past 150 years. For such systems, we review the physical and mathematical rationale for a discrete-stochastic approach, and for the approximations that need to be made in order to regain the traditional continuous-deterministic description. We next take note of some of the more promising strategies for dealing stochastically with stiff systems, rare events, and sensitivity analysis. Finally, we review some recent efforts to adapt and extend the discrete-stochastic approach to systems that are not well-mixed. In that currently developing area, we focus mainly on the strategy of subdividing the system into well-mixed subvolumes, and then simulating diffusional transfers of reactant molecules between adjacent subvolumes together with chemical reactions inside the subvolumes.
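    One of the approximate strategies referred to above, tau-leaping, replaces the exact event-by-event simulation with Poisson-distributed reaction counts over a fixed time step. A minimal sketch for the dimerization channel 2A -> B (rate constant and step size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def tau_leap(a0, c, tau, n_steps):
    """Tau-leaping for 2A -> B: fire Poisson(propensity * tau) reactions per leap."""
    a, traj = a0, [a0]
    for _ in range(n_steps):
        propensity = c * a * (a - 1) / 2.0   # number of distinct A pairs
        k = rng.poisson(propensity * tau)    # reaction firings in this leap
        k = min(k, a // 2)                   # guard against negative populations
        a -= 2 * k
        traj.append(a)
    return np.array(traj)

print("A molecules over time:", tau_leap(a0=1000, c=0.001, tau=0.05, n_steps=100)[::20])
```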

  20. Stochasticity in plant cellular growth and patterning

    PubMed Central

    Meyer, Heather M.; Roeder, Adrienne H. K.

    2014-01-01

    Plants, along with other multicellular organisms, have evolved specialized regulatory mechanisms to achieve proper tissue growth and morphogenesis. During development, growing tissues generate specialized cell types and complex patterns necessary for establishing the function of the organ. Tissue growth is a tightly regulated process that yields highly reproducible outcomes. Nevertheless, the underlying cellular and molecular behaviors are often stochastic. Thus, how does stochasticity, together with strict genetic regulation, give rise to reproducible tissue development? This review draws examples from plants as well as other systems to explore stochasticity in plant cell division, growth, and patterning. We conclude that stochasticity is often needed to create small differences between identical cells, which are amplified and stabilized by genetic and mechanical feedback loops to begin cell differentiation. These first few differentiating cells initiate traditional patterning mechanisms to ensure regular development. PMID:25250034

  1. Communication: Embedded fragment stochastic density functional theory

    SciTech Connect

    Neuhauser, Daniel; Baer, Roi; Rabani, Eran

    2014-07-28

    We develop a method in which the electronic densities of small fragments determined by Kohn-Sham density functional theory (DFT) are embedded using stochastic DFT to form the exact density of the full system. The new method preserves the scaling and the simplicity of the stochastic DFT but cures the slow convergence that occurs when weakly coupled subsystems are treated. It overcomes the spurious charge fluctuations that impair the applications of the original stochastic DFT approach. We demonstrate the new approach on a fullerene dimer and on clusters of water molecules and show that the density of states and the total energy can be accurately described with a relatively small number of stochastic orbitals.

  2. Communication: Embedded fragment stochastic density functional theory

    NASA Astrophysics Data System (ADS)

    Neuhauser, Daniel; Baer, Roi; Rabani, Eran

    2014-07-01

    We develop a method in which the electronic densities of small fragments determined by Kohn-Sham density functional theory (DFT) are embedded using stochastic DFT to form the exact density of the full system. The new method preserves the scaling and the simplicity of the stochastic DFT but cures the slow convergence that occurs when weakly coupled subsystems are treated. It overcomes the spurious charge fluctuations that impair the applications of the original stochastic DFT approach. We demonstrate the new approach on a fullerene dimer and on clusters of water molecules and show that the density of states and the total energy can be accurately described with a relatively small number of stochastic orbitals.

  3. Perspective: Stochastic algorithms for chemical kinetics.

    PubMed

    Gillespie, Daniel T; Hellander, Andreas; Petzold, Linda R

    2013-05-01

    We outline our perspective on stochastic chemical kinetics, paying particular attention to numerical simulation algorithms. We first focus on dilute, well-mixed systems, whose description using ordinary differential equations has served as the basis for traditional chemical kinetics for the past 150 years. For such systems, we review the physical and mathematical rationale for a discrete-stochastic approach, and for the approximations that need to be made in order to regain the traditional continuous-deterministic description. We next take note of some of the more promising strategies for dealing stochastically with stiff systems, rare events, and sensitivity analysis. Finally, we review some recent efforts to adapt and extend the discrete-stochastic approach to systems that are not well-mixed. In that currently developing area, we focus mainly on the strategy of subdividing the system into well-mixed subvolumes, and then simulating diffusional transfers of reactant molecules between adjacent subvolumes together with chemical reactions inside the subvolumes.

  4. Stochastic differential equation model to Prendiville processes

    SciTech Connect

    Granita; Bahar, Arifah

    2015-10-22

    The Prendiville process is a variation of the logistic model which assumes a linearly decreasing population growth rate. It is a continuous-time Markov chain (CTMC) taking integer values in a finite interval. The continuous-time Markov chain can be approximated by a stochastic differential equation (SDE). This paper discusses the stochastic differential equation of the Prendiville process. The work starts from the forward Kolmogorov equation of the continuous-time Markov chain for the Prendiville process, which is then formulated as a central-difference approximation. The approximation is then used in the Fokker-Planck equation to obtain the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process is obtained from the stochastic differential equation, from which the mean and variance functions of the Prendiville process follow easily.
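    An SDE of this type is easy to integrate with the Euler-Maruyama scheme. A minimal sketch with generic logistic drift and demographic-noise diffusion (illustrative coefficients, not the exact Prendiville ones derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(8)
b, d, K = 1.0, 0.5, 100.0            # birth rate, death rate, upper bound
dt, n_steps, n_paths = 0.01, 1000, 5000

x = np.full(n_paths, 10.0)
for _ in range(n_steps):
    drift = (b - d) * x * (1.0 - x / K)                 # linearly decreasing growth
    diff = np.sqrt(np.maximum((b + d) * x, 0.0))        # demographic noise scale
    x += drift * dt + diff * np.sqrt(dt) * rng.standard_normal(n_paths)
    x = np.clip(x, 0.0, K)                              # stay in the finite interval

print(f"t = {n_steps * dt:.0f}: sample mean {x.mean():.1f}, variance {x.var():.1f}")
```

    Comparing these sample moments against explicit mean and variance functions is a direct check of a CTMC-to-SDE approximation.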

  5. Stochastic description of quantum Brownian dynamics

    NASA Astrophysics Data System (ADS)

    Yan, Yun-An; Shao, Jiushu

    2016-08-01

    Classical Brownian motion has been thoroughly investigated since the pioneering work of Einstein, which inspired mathematicians to lay the theoretical foundation of stochastic processes. A stochastic formulation for the quantum dynamics of dissipative systems described by the system-plus-bath model has been developed and has found many applications in chemical dynamics, spectroscopy, quantum transport, and other fields. This article provides a tutorial review of the stochastic formulation for quantum dissipative dynamics. The key idea is to decouple the interaction between the system and the bath by virtue of the Hubbard-Stratonovich transformation or Itô calculus, so that the system and the bath are not directly entangled during evolution; rather, they are correlated through the complex white noises introduced. The influence of the bath on the system is thereby defined by an induced stochastic field, which leads to the stochastic Liouville equation for the system. The exact reduced density matrix can be calculated as the stochastic average in the presence of bath-induced fields. In general, the plain implementation of the stochastic formulation is only useful for short-time dynamics, and is not efficient for long-time dynamics because the statistical errors grow very quickly. For linear and other specific systems, the stochastic Liouville equation is a good starting point to derive the master equation. For general systems with decomposable bath-induced processes, the hierarchical approach in the form of a set of deterministic equations of motion is derived based on the stochastic formulation and provides an effective means for simulating the dissipative dynamics. A combination of the stochastic simulation and the hierarchical approach is suggested to solve the zero-temperature dynamics of the spin-boson model. This scheme correctly describes the coherent-incoherent transition (Toulouse limit) at moderate dissipation and predicts a rate dynamics in the overdamped regime. Challenging problems

  6. Stochastic resonance in the brusselator model

    PubMed

    Osipov; Ponizovskaya

    2000-04-01

    Using the Brusselator model, we show that in a simple dynamical system small noise can be converted into stochastic spikewise oscillations of huge amplitude (bursting noises) in the vicinity of a Hopf bifurcation. Small periodic signals with amplitude several times less than the noise intensity transform these stochastic oscillations into quasiperiodic large-amplitude spikewise oscillations or small-amplitude quasiharmonic oscillations, depending on the signal form. PMID:11088262

  7. Structural model uncertainty in stochastic simulation

    SciTech Connect

    McKay, M.D.; Morrison, J.D.

    1997-09-01

    Prediction uncertainty in stochastic simulation models can be described by a hierarchy of components: stochastic variability at the lowest level, input and parameter uncertainty at a higher level, and structural model uncertainty at the top. It is argued that a usual paradigm for analysis of input uncertainty is not suitable for application to structural model uncertainty. An approach more likely to produce an acceptable methodology for analyzing structural model uncertainty is one that uses characteristics specific to the particular family of models.

  8. Desynchronization of stochastically synchronized chemical oscillators

    SciTech Connect

    Snari, Razan; Tinsley, Mark R.; Faramarzi, Sadegh; Showalter, Kenneth; Wilson, Dan; Moehlis, Jeff; Netoff, Theoden Ivan

    2015-12-15

    Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.

  9. A discussion of bunched beam stochastic cooling

    SciTech Connect

    Neuffer, David (Fermilab)

    2005-08-01

    The analysis of Herr and Mohl [1] is used as a basis for a discussion of bunched beam cooling in the Fermilab Recycler and the Tevatron. Differences between the two cooling regimes are discussed. Criteria discussed in that paper imply the failure of stochastic cooling in the Tevatron while permitting the success of stochastic cooling in the Recycler. These "predictions" are in agreement with observations.

  10. Stochastic differential games with inside information

    NASA Astrophysics Data System (ADS)

    Draouil, Olfa; Øksendal, Bernt

    2016-08-01

    We study stochastic differential games of jump diffusions, where the players have access to inside information. Our approach is based on anticipative stochastic calculus, white noise, Hida-Malliavin calculus, forward integrals and the Donsker delta functional. We obtain a characterization of Nash equilibria of such games in terms of the corresponding Hamiltonians. This is used to study applications to insider games in finance, specifically optimal insider consumption and optimal insider portfolio under model uncertainty.

  11. Behavioral Stochastic Resonance within the Human Brain

    NASA Astrophysics Data System (ADS)

    Kitajo, Keiichi; Nozaki, Daichi; Ward, Lawrence M.; Yamamoto, Yoshiharu

    2003-05-01

    We provide the first evidence that stochastic resonance within the human brain can enhance behavioral responses to weak sensory inputs. We asked subjects to adjust handgrip force to a slowly changing, subthreshold gray level signal presented to their right eye. Behavioral responses were optimized by presenting randomly changing gray levels separately to the left eye. The results indicate that observed behavioral stochastic resonance was mediated by neural activity within the human brain where the information from both eyes converges.

  12. Complexity and synchronization in stochastic chaotic systems

    NASA Astrophysics Data System (ADS)

    Son Dang, Thai; Palit, Sanjay Kumar; Mukherjee, Sayan; Hoang, Thang Manh; Banerjee, Santo

    2016-02-01

    We investigate the complexity of a hyperchaotic dynamical system perturbed by noise and by various nonlinear speech and music signals. The complexity is measured by the weighted recurrence entropy of the hyperchaotic and stochastic systems. The synchronization phenomenon between two stochastic systems with complex coupling is also investigated. These criteria are tested on chaotic and perturbed systems using the mean conditional recurrence and the normalized synchronization error. Numerical results, including surface plots, normalized synchronization errors, and complexity variations, show the effectiveness of the proposed analysis.

  13. Stochastic learning via optimizing the variational inequalities.

    PubMed

    Tao, Qing; Gao, Qian-Kun; Chu, De-Jun; Wu, Gao-Wei

    2014-10-01

    A wide variety of learning problems can be posed in the framework of convex optimization, and many efficient algorithms have been developed based on solving the induced optimization problems. However, there exists a gap between the theoretically unbeatable convergence rate and the practically efficient learning speed. In this paper, we use the variational inequality (VI) convergence to describe the learning speed. To this end, we avoid the hard concept of regret in online learning and directly discuss the stochastic learning algorithms. We first cast the regularized learning problem as a VI. Then, we present a stochastic version of the alternating direction method of multipliers (ADMM) to solve the induced VI. We define a new VI-criterion to measure the convergence of stochastic algorithms. While the rate of convergence for any iterative algorithm solving nonsmooth convex optimization problems cannot be better than O(1/√t), the proposed stochastic ADMM (SADMM) is proved to have an O(1/t) VI-convergence rate for l1-regularized hinge loss problems without strong convexity and smoothness. The derived VI-convergence results also support the viewpoint that the standard online analysis is too loose to analyze the stochastic setting properly. The experiments demonstrate that SADMM has almost the same performance as the state-of-the-art stochastic learning algorithms, but its O(1/t) VI-convergence rate is capable of tightly characterizing the real learning speed.

  14. Stochastic resonance during a polymer translocation process

    NASA Astrophysics Data System (ADS)

    Mondal, Debasish; Muthukumar, M.

    2016-04-01

    We have studied the occurrence of stochastic resonance when a flexible polymer chain undergoes a single-file translocation through a nano-pore separating two spherical cavities, under a time-periodic external driving force. The translocation of the chain is controlled by a free energy barrier determined by chain length, pore length, pore-polymer interaction, and confinement inside the donor and receiver cavities. The external driving force is characterized by a frequency and amplitude. By combining the Fokker-Planck formalism for polymer translocation with a two-state model for stochastic resonance, we have derived analytical criteria for the emergence of stochastic resonance during polymer translocation. We show that no stochastic resonance is possible if the free energy barrier for polymer translocation is purely entropic in nature. The polymer chain exhibits stochastic resonance only in the presence of an energy threshold in terms of polymer-pore interactions. Once stochastic resonance is feasible, the chain entropy controls the optimal synchronization conditions significantly.

  15. Automated Flight Routing Using Stochastic Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Morando, Alex; Grabbe, Shon

    2010-01-01

    Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm that reroutes flights in the presence of winds, enroute convective weather, and congested airspace based on stochastic dynamic programming. A stochastic disturbance model incorporates into the reroute design process the capacity uncertainty. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected traveling time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have smaller deviation probability than the deterministic counterpart when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields with all severity levels while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both have about 1% of travel time crossing congested enroute sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
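    The backbone of such a rerouting algorithm, a backward dynamic program over uncertain airspace states, can be sketched on a toy corridor. The stage count, blockage probability, and costs below are all illustrative, not the paper's weather or demand models.

```python
import numpy as np

# At each stage choose "direct" (cheap, but blocked by weather with
# probability p_block) or "deviate" (longer, but always available).
n_stages, p_block = 10, 0.3
c_direct, c_blocked, c_deviate = 1.0, 5.0, 2.0

value = np.zeros(n_stages + 1)   # value[k] = expected cost-to-go from stage k
policy = []
for k in range(n_stages - 1, -1, -1):
    q_direct = (1 - p_block) * c_direct + p_block * c_blocked + value[k + 1]
    q_deviate = c_deviate + value[k + 1]
    value[k] = min(q_direct, q_deviate)
    policy.append("direct" if q_direct <= q_deviate else "deviate")
policy.reverse()

print("optimal expected cost:", value[0])
print("stagewise policy:", policy)
```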

  16. A deterministic alternative to the full configuration interaction quantum Monte Carlo method.

    PubMed

    Tubman, Norm M; Lee, Joonho; Takeshita, Tyler Y; Head-Gordon, Martin; Whaley, K Birgitta

    2016-07-28

    Development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, is a useful algorithm that allows exact diagonalization through stochastically sampling determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, along with a stochastic projected wave function, to find the important parts of Hilbert space. However, the stochastic representation of the wave function is not required to search Hilbert space efficiently, and here we describe a highly efficient deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr2 molecule. We demonstrate for systems like Cr2 that such calculations can be performed in just a few CPU hours, which makes this one of the most efficient and accurate methods that can attain chemical accuracy for strongly correlated systems. In addition, our method allows efficient calculation of excited-state energies, which we illustrate with benchmark results for the excited states of C2. PMID:27475353

  17. A deterministic alternative to the full configuration interaction quantum Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Tubman, Norm M.; Lee, Joonho; Takeshita, Tyler Y.; Head-Gordon, Martin; Whaley, K. Birgitta

    2016-07-01

    Development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, is a useful algorithm that allows exact diagonalization through stochastically sampling determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, along with a stochastic projected wave function, to find the important parts of Hilbert space. However, the stochastic representation of the wave function is not required to search Hilbert space efficiently, and here we describe a highly efficient deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr2 molecule. We demonstrate for systems like Cr2 that such calculations can be performed in just a few CPU hours, which makes this one of the most efficient and accurate methods that can attain chemical accuracy for strongly correlated systems. In addition, our method allows efficient calculation of excited-state energies, which we illustrate with benchmark results for the excited states of C2.

  18. Stochastic modeling of deterioration in nuclear power plant components

    NASA Astrophysics Data System (ADS)

    Yuan, Xianxun

    2007-12-01

    heterogeneity of individual units and additive measurement errors. Another common way to model deterioration in civil engineering is to treat the rate of deterioration as a random variable. In the context of condition-based maintenance, the thesis shows that the random-variable rate (RV) model is inadequate for incorporating temporal variability, because the deterioration along a specific sample path becomes deterministic. This distinction between the RV and GP models has profound implications for the optimization of maintenance strategies. The thesis presents detailed practical applications of the proposed models to feeder pipe systems and fuel channels in CANDU nuclear reactors. In summary, careful consideration of the nature of the uncertainties associated with deterioration is important for credible life-cycle management of engineering systems. If the deterioration process is affected by temporal uncertainty, it is important to model it as a stochastic process.

  19. Stochastic modelling of animal movement

    PubMed Central

    Smouse, Peter E.; Focardi, Stefano; Moorcroft, Paul R.; Kie, John G.; Forester, James D.; Morales, Juan M.

    2010-01-01

    Modern animal movement modelling derives from two traditions. Lagrangian models, based on random walk behaviour, are useful for multi-step trajectories of single animals. Continuous Eulerian models describe expected behaviour, averaged over stochastic realizations, and are usefully applied to ensembles of individuals. We illustrate three modern research arenas. (i) Models of home-range formation describe the process of an animal ‘settling down’, accomplished by including one or more focal points that attract the animal's movements. (ii) Memory-based models are used to predict how accumulated experience translates into biased movement choices, employing reinforced random walk behaviour, with previous visitation increasing or decreasing the probability of repetition. (iii) Lévy movement involves a step-length distribution that is over-dispersed, relative to standard probability distributions, and adaptive in exploring new environments or searching for rare targets. Each of these modelling arenas implies more detail in the movement pattern than general models of movement can accommodate, but realistic empiric evaluation of their predictions requires dense locational data, both in time and space, only available with modern GPS telemetry. PMID:20566497

  20. Lower hybrid wavepacket stochasticity revisited

    SciTech Connect

    Fuchs, V.; Krlín, L.; Pánek, R.; Preinhaelter, J.; Seidl, J.; Urban, J.

    2014-02-12

    Analysis is presented in support of the explanation in Ref. [1] for the observation of relativistic electrons during Lower Hybrid (LH) operation in EC pre-heated plasma at the WEGA stellarator [1,2]. LH power from the WEGA TE11 circular waveguide, 9 cm diameter, un-phased, 2.45 GHz antenna, is radiated into a B ≅ 0.5 T, n_e ≅ 5×10¹⁷ m⁻³ plasma at T_e ≅ 10 eV bulk temperature with an EC-generated 50 keV component [1]. The fast electrons cycle around flux or drift surfaces with few collisions, sufficient for randomizing phases but insufficient for slowing fast electrons down, and thus repeatedly interact with the rf field close to the antenna mouth, gaining energy in the process. Our antenna calculations reveal a standing electric field pattern at the antenna mouth, with which we formulate the electron dynamics via a relativistic Hamiltonian. A simple approximation of the equations of motion leads to a relativistic generalization of the area-preserving Fermi-Ulam (F-U) map [3], allowing phase-space global stochasticity analysis. At typical WEGA plasma and antenna conditions, the F-U map predicts an LH driven current of about 230 A, at about 225 W of dissipated power, in good agreement with the measurements and analysis reported in [1].

  1. Stochastic models of intracellular transport

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.; Newby, Jay M.

    2013-01-01

    The interior of a living cell is a crowded, heterogeneous, fluctuating environment. Hence, a major challenge in modeling intracellular transport is to analyze stochastic processes within complex environments. Broadly speaking, there are two basic mechanisms for intracellular transport: passive diffusion and motor-driven active transport. Diffusive transport can be formulated in terms of the motion of an overdamped Brownian particle. On the other hand, active transport requires chemical energy, usually in the form of adenosine triphosphate hydrolysis, and can be direction specific, allowing biomolecules to be transported long distances; this is particularly important in neurons due to their complex geometry. In this review a wide range of analytical methods and models of intracellular transport is presented. In the case of diffusive transport, narrow escape problems, diffusion to a small target, confined and single-file diffusion, homogenization theory, and fractional diffusion are considered. In the case of active transport, Brownian ratchets, random walk models, exclusion processes, random intermittent search processes, quasi-steady-state reduction methods, and mean-field approximations are considered. Applications include receptor trafficking, axonal transport, membrane diffusion, nuclear transport, protein-DNA interactions, virus trafficking, and the self-organization of subcellular structures.

  2. Stochastic phase-change neurons

    NASA Astrophysics Data System (ADS)

    Tuma, Tomas; Pantazi, Angeliki; Le Gallo, Manuel; Sebastian, Abu; Eleftheriou, Evangelos

    2016-08-01

    Artificial neuromorphic systems based on populations of spiking neurons are an indispensable tool in understanding the human brain and in constructing neuromimetic computational systems. To reach areal and power efficiencies comparable to those seen in biological systems, electroionics-based and phase-change-based memristive devices have been explored as nanoscale counterparts of synapses. However, progress on scalable realizations of neurons has so far been limited. Here, we show that chalcogenide-based phase-change materials can be used to create an artificial neuron in which the membrane potential is represented by the phase configuration of the nanoscale phase-change device. By exploiting the physics of reversible amorphous-to-crystal phase transitions, we show that the temporal integration of postsynaptic potentials can be achieved on a nanosecond timescale. Moreover, we show that this is inherently stochastic because of the melt-quench-induced reconfiguration of the atomic structure occurring when the neuron is reset. We demonstrate the use of these phase-change neurons, and their populations, in the detection of temporal correlations in parallel data streams and in sub-Nyquist representation of high-bandwidth signals.

  3. Postmodern string theory: Stochastic formulation

    NASA Astrophysics Data System (ADS)

    Aurilia, A.; Spallucci, E.; Vanzetta, I.

    1994-11-01

    In this paper we study the dynamics of a statistical ensemble of strings, building on a recently proposed gauge theory of the string geodesic field. We show that this stochastic approach is equivalent to the Carathéodory formulation of the Nambu-Goto action, supplemented by an averaging procedure over the family of classical string world sheets which are solutions of the equation of motion. In this new framework, the string geodesic field is reinterpreted as the Gibbs current density associated with the string statistical ensemble. Next, we show that the classical field equations derived from the string gauge action can be obtained as the semiclassical limit of the string functional wave equation. For closed strings, the wave equation itself is completely analogous to the Wheeler-DeWitt equation used in quantum cosmology. Thus, in the string case, the wave function has support on the space of all possible spatial loop configurations. Finally, we show that the string distribution induces a multiphase, or cellular structure on the spacetime manifold characterized by domains with a purely Riemannian geometry separated by domain walls over which there exists a predominantly Weyl geometry.

  4. Toward mission-specific service utility estimation using analytic stochastic process models

    NASA Astrophysics Data System (ADS)

    Thornley, David J.; Young, Robert J.; Richardson, James P.

    2009-05-01

    Planning a mission to monitor, control or prevent activity requires postulation of subject behaviours, specification of goals, and the identification of suitable effects, candidate methods, information requirements, and effective infrastructure. In an operation that comprises many missions, it is desirable to base decisions to assign assets and computation time or communications bandwidth on the value of the result of doing so in a particular mission to the operation. We describe initial investigations of a holistic approach for judging the value of candidate sensing service designs by stochastic modeling of information delivery, knowledge building, synthesis of situational awareness, and the selection of actions and achievement of goals. Abstraction of physical and information transformations to interdependent stochastic state transition models enables calculation of probability distributions over uncertain futures using well-characterized approximations. This complements traditional Monte Carlo war gaming, in which example futures are explored individually, by capturing probability distributions over loci of behaviours that show the importance and value of mission component designs. The overall model is driven by sensing processes that are constructed by abstracting from the physics of sensing to a stochastic model of the system's trajectories through sensing modes. This is formulated by analysing probabilistic projections of subject behaviours against functions which describe the quality of information delivered by the sensing service. This enables energy consumption predictions and, when composed into a mission model, supports calculation of situational awareness formulation and command satisfaction timing probabilities. These outcome probabilities then support calculation of relative utility and value.

  5. Stochastic simulation of radium-223 dichloride therapy at the sub-cellular level.

    PubMed

    Gholami, Y; Zhu, X; Fulton, R; Meikle, S; El-Fakhri, G; Kuncic, Z

    2015-08-01

    Radium-223 dichloride ((223)Ra) is an alpha particle emitter and a natural bone-seeking radionuclide that is currently used for treating osteoblastic bone metastases associated with prostate cancer. The stochastic nature of alpha emission, hits and energy deposition poses some challenges for estimating radiation damage. In this paper we investigate the distribution of hits to cells by multiple alpha particles corresponding to a typical clinically delivered dose using a Monte Carlo model to simulate the stochastic effects. The number of hits and dose deposition were recorded in the cytoplasm and nucleus of each cell. Alpha particle tracks were also visualized. We found that the stochastic variation in dose deposited in cell nuclei (≃40%) can be attributed in part to the variation in LET with pathlength. We also found that ≃18% of cell nuclei receive less than one sigma below the average dose per cell (≃15.4 Gy). One possible implication of this is that the efficacy of cell kill in alpha particle therapy need not rely solely on ionization clustering on DNA but possibly also on indirect DNA damage through the production of free radicals and ensuing intracellular signaling.

  6. Stochastic simulation of radium-223 dichloride therapy at the sub-cellular level

    NASA Astrophysics Data System (ADS)

    Gholami, Y.; Zhu, X.; Fulton, R.; Meikle, S.; El-Fakhri, G.; Kuncic, Z.

    2015-08-01

    Radium-223 dichloride (223Ra) is an alpha particle emitter and a natural bone-seeking radionuclide that is currently used for treating osteoblastic bone metastases associated with prostate cancer. The stochastic nature of alpha emission, hits and energy deposition poses some challenges for estimating radiation damage. In this paper we investigate the distribution of hits to cells by multiple alpha particles corresponding to a typical clinically delivered dose using a Monte Carlo model to simulate the stochastic effects. The number of hits and dose deposition were recorded in the cytoplasm and nucleus of each cell. Alpha particle tracks were also visualized. We found that the stochastic variation in dose deposited in cell nuclei (≃ 40%) can be attributed in part to the variation in LET with pathlength. We also found that ≃ 18% of cell nuclei receive less than one sigma below the average dose per cell (≃ 15.4 Gy). One possible implication of this is that the efficacy of cell kill in alpha particle therapy need not rely solely on ionization clustering on DNA but possibly also on indirect DNA damage through the production of free radicals and ensuing intracellular signaling.

  7. Stochastic unilateral free vibration of an in-plane cable network

    NASA Astrophysics Data System (ADS)

    Giaccu, Gian Felice; Barbiellini, Bernardo; Caracoglia, Luca

    2015-03-01

    Cross-ties are often used on cable-stayed bridges for mitigating wind-induced stay vibration since they can be easily installed on existing systems. The system obtained by connecting two (or more) stays with a transverse restrainer is designated as an "in-plane cable-network". Failures in the restrainers of an existing network have been observed. In a previous study [1] a model was proposed to explain the failures in the cross-ties as being related to a loss in the initial pre-tensioning force imparted to the connector. This effect leads to the "unilateral" free vibration of the network. Deterministic free vibrations of a three-cable network were investigated by using the "equivalent linearization method". Since the value of the initial vibration amplitude is often not well known, owing to the complex aeroelastic vibration regimes that can be experienced by the stays, the stochastic nature of the problem must be considered. This issue is investigated in the present paper. Free-vibration dynamics of the cable network, driven by an initial stochastic disturbance associated with uncertain vibration amplitudes, is examined. The corresponding random eigenvalue problem for the vibration frequencies is solved through an implementation of Stochastic Approximation (SA), based on the Robbins-Monro theorem. Monte Carlo methods are also used for validating the SA results.
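
    A generic Robbins-Monro iteration, not the paper's eigenvalue formulation, shows the mechanics of stochastic approximation: a root of a regression function is located from noisy evaluations only, with step sizes satisfying the theorem's conditions, and repeated Monte Carlo reruns of the procedure play the validation role described above. The toy regression function is an assumption.

```python
# Generic Robbins-Monro iteration: find x* with E[f(x, noise)] = 0 using
# only noisy evaluations. Toy regression function E[f] = x - 2, so x* = 2.
import numpy as np

rng = np.random.default_rng(2)

def noisy_f(x):
    return (x - 2.0) + rng.normal()      # unbiased but noisy observation

x = 0.0
for n in range(1, 20001):
    x -= (1.0 / n) * noisy_f(x)          # steps a_n = 1/n: sum a_n = inf,
print("single-run estimate:", x)         # sum a_n^2 < inf (Robbins-Monro)

# Monte Carlo validation: repeat the SA run many times and inspect the
# spread of the resulting estimates.
runs = []
for _ in range(200):
    y = 0.0
    for n in range(1, 2001):
        y -= (1.0 / n) * noisy_f(y)
    runs.append(y)
print("MC mean/std of estimates:", np.mean(runs), np.std(runs))
```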

  8. Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models

    PubMed Central

    Daunizeau, J.; Friston, K.J.; Kiebel, S.J.

    2009-01-01

    In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power. PMID:19862351

  9. Confinement and diffusion modulate bistability and stochastic switching in a reaction network with positive feedback

    NASA Astrophysics Data System (ADS)

    Mlynarczyk, Paul J.; Pullen, Robert H.; Abel, Steven M.

    2016-01-01

    Positive feedback is a common feature in signal transduction networks and can lead to phenomena such as bistability and signal propagation by domain growth. Physical features of the cellular environment, such as spatial confinement and the mobility of proteins, play important but inadequately understood roles in shaping the behavior of signaling networks. Here, we use stochastic, spatially resolved kinetic Monte Carlo simulations to explore a positive feedback network as a function of system size, system shape, and mobility of molecules. We show that these physical properties can markedly alter characteristics of bistability and stochastic switching when compared with well-mixed simulations. Notably, systems of equal volume but different shapes can exhibit qualitatively different behaviors under otherwise identical conditions. We show that stochastic switching to a state maintained by positive feedback occurs by cluster formation and growth. Additionally, the frequency at which switching occurs depends nontrivially on the diffusion coefficient, which can promote or suppress switching relative to the well-mixed limit. Taken together, the results provide a framework for understanding how confinement and protein mobility influence emergent features of the positive feedback network by modulating molecular concentrations, diffusion-influenced rate parameters, and spatiotemporal correlations between molecules.
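
    The well-mixed baseline against which the paper's spatial simulations are compared can be sketched with a Gillespie-type kinetic Monte Carlo of a toy bistable positive-feedback scheme; the species, reactions, and rate constants below are all illustrative assumptions, not the paper's network.

```python
# Gillespie kinetic Monte Carlo of a toy bistable positive-feedback switch
# (well-mixed limit). Species: A active, B = N - A inactive. Reactions:
# B -> A (basal), B -> A catalyzed cooperatively by A, A -> B (decay).
# All rate constants are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
N = 200
k0, k1, k2 = 0.002, 6.0, 1.0
t_end = 50.0

def propensities(A):
    B = N - A
    return np.array([k0 * B,                  # basal activation
                     k1 * B * (A / N) ** 2,   # cooperative positive feedback
                     k2 * A])                 # deactivation

for A0 in (0, 150):                 # start near each basin: low and high
    A, t = A0, 0.0
    while t < t_end:
        a = propensities(A)
        t += rng.exponential(1.0 / a.sum())        # time to next reaction
        A += 1 if rng.choice(3, p=a / a.sum()) < 2 else -1
    print(f"A0 = {A0:3d} -> A(t_end) = {A}")
# With these rates the two states coexist; spontaneous switching between
# them is a rare stochastic event in the well-mixed limit.
```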

  10. Stochastic Threshold Microdose Model for Cell Killing by Insoluble Metallic Nanomaterial Particles

    PubMed Central

    Scott, Bobby R.

    2010-01-01

    This paper introduces a novel microdosimetric model for metallic nanomaterial-particles (MENAP)-induced cytotoxicity. The focus is on the engineered insoluble MENAP which represent a significant breakthrough in the design and development of new products for consumers, industry, and medicine. Increased production is rapidly occurring and may cause currently unrecognized health effects (e.g., nervous system dysfunction, heart disease, cancer); thus, dose-response models for MENAP-induced biological effects are needed to facilitate health risk assessment. The stochastic threshold microdose (STM) model presented introduces novel stochastic microdose metrics for use in constructing dose-response relationships for the frequency of specific cellular (e.g., cell killing, mutations, neoplastic transformation) or subcellular (e.g., mitochondria dysfunction) effects. A key metric is the exposure-time-dependent, specific burden (MENAP count) for a given critical target (e.g., mitochondria, nucleus). Exceeding a stochastic threshold specific burden triggers cell death. For critical targets in the cytoplasm, the autophagic mode of death is triggered. For the nuclear target, the apoptotic mode of death is triggered. Overall cell survival is evaluated for the indicated competing modes of death when both apply. The STM model can be applied to cytotoxicity data using Bayesian methods implemented via Markov chain Monte Carlo. PMID:21191483
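
    A minimal sketch of the STM idea under invented parameters: each cell accumulates a Poisson particle burden in a critical target over the exposure time and dies when that burden exceeds its own randomly drawn threshold. The hit rate and threshold distribution are assumptions for illustration only.

```python
# Sketch of the stochastic threshold microdose (STM) idea: death triggers
# when a cell's specific burden exceeds its stochastic threshold.
import numpy as np

rng = np.random.default_rng(4)
n_cells = 50_000
exposure_time = 10.0       # arbitrary units
hit_rate = 2.0             # mean MENAP deposits per unit time (assumption)

burden = rng.poisson(hit_rate * exposure_time, n_cells)

# Each cell draws its own critical burden: lognormal spread around a mean
# of 25 particles (assumption).
threshold = rng.lognormal(mean=np.log(25.0), sigma=0.3, size=n_cells)

killed = burden > threshold
print(f"surviving fraction: {1.0 - killed.mean():.3f}")
```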

  11. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation

    NASA Astrophysics Data System (ADS)

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
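
    The basic Hamiltonian Monte Carlo machinery the method builds on, leapfrog integration of fictitious dynamics followed by a Metropolis accept/reject test, is sketched below for a toy Gaussian target; the polymer-like reinterpretation and the multiple-time-scale integration of the paper are not reproduced here.

```python
# Minimal Hamiltonian Monte Carlo: leapfrog integrator plus Metropolis
# test. Toy target: standard 2D Gaussian, so -log target = 0.5*|q|^2.
import numpy as np

rng = np.random.default_rng(5)

def U(q):      return 0.5 * (q @ q)     # potential energy = -log target
def grad_U(q): return q

def hmc_step(q, eps=0.1, n_leap=20):
    p = rng.normal(size=q.shape)        # refresh momenta
    qn, pn = q.copy(), p.copy()
    pn -= 0.5 * eps * grad_U(qn)        # leapfrog: half kick, drifts, half kick
    for _ in range(n_leap - 1):
        qn += eps * pn
        pn -= eps * grad_U(qn)
    qn += eps * pn
    pn -= 0.5 * eps * grad_U(qn)
    dH = (U(qn) + 0.5 * pn @ pn) - (U(q) + 0.5 * p @ p)
    accept = dH < 0 or rng.random() < np.exp(-dH)
    return qn if accept else q          # Metropolis accept/reject

q, samples = np.zeros(2), []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print("sample variance (target = 1):", np.var(samples, axis=0))
```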

  12. Confinement and diffusion modulate bistability and stochastic switching in a reaction network with positive feedback.

    PubMed

    Mlynarczyk, Paul J; Pullen, Robert H; Abel, Steven M

    2016-01-01

    Positive feedback is a common feature in signal transduction networks and can lead to phenomena such as bistability and signal propagation by domain growth. Physical features of the cellular environment, such as spatial confinement and the mobility of proteins, play important but inadequately understood roles in shaping the behavior of signaling networks. Here, we use stochastic, spatially resolved kinetic Monte Carlo simulations to explore a positive feedback network as a function of system size, system shape, and mobility of molecules. We show that these physical properties can markedly alter characteristics of bistability and stochastic switching when compared with well-mixed simulations. Notably, systems of equal volume but different shapes can exhibit qualitatively different behaviors under otherwise identical conditions. We show that stochastic switching to a state maintained by positive feedback occurs by cluster formation and growth. Additionally, the frequency at which switching occurs depends nontrivially on the diffusion coefficient, which can promote or suppress switching relative to the well-mixed limit. Taken together, the results provide a framework for understanding how confinement and protein mobility influence emergent features of the positive feedback network by modulating molecular concentrations, diffusion-influenced rate parameters, and spatiotemporal correlations between molecules.

  13. Stochastic modelling of turbulent combustion for design optimization of gas turbine combustors

    NASA Astrophysics Data System (ADS)

    Mehanna Ismail, Mohammed Ali

    The present work covers the development and the implementation of an efficient algorithm for the design optimization of gas turbine combustors. The purpose is to explore the possibilities and indicate constructive suggestions for optimization techniques as alternative methods for designing gas turbine combustors. The algorithm is general to the extent that no constraints are imposed on the combustion phenomena or on the combustor configuration. The optimization problem is broken down into two elementary problems: the first is the optimum search algorithm, and the second is the turbulent combustion model used to determine the combustor performance parameters. These performance parameters constitute the objective and physical constraints in the optimization problem formulation. The examination of both turbulent combustion phenomena and the gas turbine design process suggests that the turbulent combustion model represents a crucial part of the optimization algorithm. The basic requirements needed for a turbulent combustion model to be successfully used in a practical optimization algorithm are discussed. In principle, the combustion model should comply with the conflicting requirements of high fidelity, robustness and computational efficiency. To that end, the problem of turbulent combustion is discussed and the current state of the art of turbulent combustion modelling is reviewed. According to this review, turbulent combustion models based on the composition PDF transport equation are found to be good candidates for application in the present context. However, these models are computationally expensive. To overcome this difficulty, two different models based on the composition PDF transport equation were developed: an improved Lagrangian Monte Carlo composition PDF algorithm and the generalized stochastic reactor model. Improvements in the Lagrangian Monte Carlo composition PDF model performance and its computational efficiency were achieved through the

  14. Impact of Geological Characterization Uncertainties on Subsurface Flow & Transport Using a Stochastic Discrete Fracture Network Approach

    NASA Astrophysics Data System (ADS)

    Ezzedine, S. M.

    2009-12-01

    Fractures and fracture networks are the principal pathways for transport of water and contaminants in groundwater systems, enhanced geothermal system fluids, migration of oil and gas, carbon dioxide leakage from carbon sequestration sites, and of radioactive and toxic industrial wastes from underground storage repositories. A major issue to overcome when characterizing a fractured reservoir is that of data limitation due to accessibility and affordability. Moreover, the ability to map discontinuities in the rock with available geological and geophysical tools tends to decrease, particularly as the scale of the discontinuity goes down. Geological characterization data include measurements of fracture density, orientation, extent, and aperture, and are based on analysis of outcrops, borehole optical and acoustic televiewer logs, aerial photographs, and core samples, among other techniques. All of these measurements are taken at the field scale through a very limited number of deep boreholes. These types of data are often reduced to probability distribution functions for predictive modeling and simulation in a stochastic framework such as a stochastic discrete fracture network. Stochastic discrete fracture network models enable, through Monte Carlo realizations and simulations, probabilistic assessment of flow and transport phenomena that are not adequately captured using continuum models. Despite the fundamental uncertainties inherent in the probabilistic reduction of the sparse data collected, very little work has been conducted on quantifying the uncertainty of the reduced probability distribution functions. In the current study, using nested Monte Carlo simulations, we present the impact of parameter uncertainties of the distribution functions of fracture density, orientation, aperture and size on the flow and transport using topological measures such as fracture connectivity, physical characteristics such as effective hydraulic conductivity tensors, and
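
    The nested Monte Carlo structure can be sketched in two dimensions: an outer loop samples uncertain parameters of the fracture-statistics distributions themselves, and an inner loop generates discrete fracture networks and records a simple connectivity proxy (the fracture-fracture intersection count). All hyperparameter values below are invented for illustration.

```python
# Nested Monte Carlo sketch for a 2D discrete fracture network (DFN):
# outer loop = epistemic uncertainty in the distribution parameters,
# inner loop = DFN realizations under those parameters.
import numpy as np

rng = np.random.default_rng(6)

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def intersects(p1, p2, p3, p4):
    return (cross(p1, p2, p3) * cross(p1, p2, p4) < 0 and
            cross(p3, p4, p1) * cross(p3, p4, p2) < 0)

def n_intersections(density, mean_len):
    n = rng.poisson(max(density, 0.0))          # fracture count in unit square
    mid = rng.random((n, 2))                    # random midpoints
    ang = rng.uniform(0.0, np.pi, n)            # random orientations
    half = 0.5 * rng.exponential(max(mean_len, 1e-3), n)
    dxy = half[:, None] * np.c_[np.cos(ang), np.sin(ang)]
    a, b = mid - dxy, mid + dxy
    return sum(intersects(a[i], b[i], a[j], b[j])
               for i in range(n) for j in range(i + 1, n))

for _ in range(5):                        # outer loop: epistemic uncertainty
    density = rng.normal(40.0, 8.0)       # uncertain mean fracture count
    mean_len = rng.normal(0.15, 0.03)     # uncertain mean fracture length
    conn = np.mean([n_intersections(density, mean_len) for _ in range(50)])
    print(f"density={density:5.1f}, mean_len={mean_len:5.3f} -> "
          f"mean intersections = {conn:.1f}")
```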

  15. Lattice Monte Carlo simulation of Galilei variant anomalous diffusion

    SciTech Connect

    Guo, Gang; Bittig, Arne; Uhrmacher, Adelinde

    2015-05-01

    The observation of an increasing number of anomalous diffusion phenomena motivates the study to reveal the actual reason for such stochastic processes. When it is difficult to get analytical solutions or necessary to track the trajectory of particles, lattice Monte Carlo (LMC) simulation has been shown to be particularly useful. To develop such an LMC simulation algorithm for the Galilei variant anomalous diffusion, we derive explicit solutions for the conditional and unconditional first passage time (FPT) distributions with double absorbing barriers. According to the theory of random walks on lattices and the FPT distributions, we propose an LMC simulation algorithm and prove that such LMC simulation can reproduce both the mean and the mean square displacement exactly in the long-time limit. However, the error introduced in the second moment of the displacement diverges according to a power law as the simulation time progresses. We give an explicit criterion for choosing a small enough lattice step to limit the error within the specified tolerance. We further validate the LMC simulation algorithm and confirm the theoretical error analysis through numerical simulations. The numerical results agree with our theoretical predictions very well.
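
    The flavor of the error analysis can be shown with a simpler relative, a biased lattice walk for ordinary advection-diffusion (the Galilei-variant FPT machinery is omitted): the mean displacement is reproduced exactly, while the variance carries a discretization deficit proportional to the squared lattice step, which is what motivates a criterion for choosing the step small enough. All parameter values are illustrative.

```python
# Biased lattice Monte Carlo walk: hop +/-delta with probabilities chosen
# to reproduce drift v exactly; the variance picks up a lattice error of
# order delta^2 that shrinks with the lattice step.
import numpy as np

rng = np.random.default_rng(7)
D, v, t_end, n_walkers = 1.0, 2.0, 5.0, 5000

for delta in (0.5, 0.25, 0.1):
    tau = delta ** 2 / (2 * D)                  # waiting time per hop
    p_plus = 0.5 * (1 + v * delta / (2 * D))    # bias reproducing drift v
    n_steps = int(round(t_end / tau))
    steps = np.where(rng.random((n_walkers, n_steps)) < p_plus,
                     delta, -delta)
    x = steps.sum(axis=1)
    print(f"delta={delta}: mean={x.mean():6.2f} (exact {v * t_end}), "
          f"var={x.var():5.2f} (exact {2 * D * t_end}, "
          f"lattice deficit ~ {v * v * tau * t_end:.3f})")
```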

  16. Monte Carlo role in radiobiological modelling of radiotherapy outcomes

    NASA Astrophysics Data System (ADS)

    El Naqa, Issam; Pater, Piotr; Seuntjens, Jan

    2012-06-01

    Radiobiological models are essential components of modern radiotherapy. They are increasingly applied to optimize and evaluate the quality of different treatment planning modalities. They are frequently used in designing new radiotherapy clinical trials by estimating the expected therapeutic ratio of new protocols. In radiobiology, the therapeutic ratio is estimated from the expected gain in tumour control probability (TCP) relative to the risk of normal tissue complication probability (NTCP). However, estimates of TCP/NTCP are currently based on the deterministic and simplistic linear-quadratic formalism, with limited prediction power when applied prospectively. Given the complex and stochastic nature of the physical, chemical and biological interactions associated with spatial and temporal radiation-induced effects in living tissues, it is conjectured that methods based on Monte Carlo (MC) analysis may provide better estimates of TCP/NTCP for radiotherapy treatment planning and trial design. Indeed, over the past few decades, methods based on MC have demonstrated superior performance for accurate simulation of radiation transport, tumour growth and particle track structures; however, successful application of modelling radiobiological response and outcomes in radiotherapy is still hampered by several challenges. In this review, we provide an overview of some of the main techniques used in radiobiological modelling for radiotherapy, with a focus on the MC role as a promising computational vehicle. We highlight the current challenges, issues and future potentials of the MC approach towards a comprehensive systems-based framework in radiobiological modelling for radiotherapy.
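
    For concreteness, the deterministic linear-quadratic/Poisson TCP formalism contrasted above with MC approaches can be written out and checked against a direct Monte Carlo sampling of surviving clonogens; all parameter values below are illustrative assumptions.

```python
# Worked example: LQ cell survival + Poisson TCP, and the equivalent
# Monte Carlo estimate (tumour controlled when zero clonogens survive).
import numpy as np

rng = np.random.default_rng(8)
alpha, beta = 0.3, 0.03     # LQ parameters (1/Gy, 1/Gy^2), assumptions
N0 = 1e9                    # initial clonogen number (assumption)
n_frac, d = 30, 2.0         # 30 fractions of 2 Gy

sf = np.exp(-(alpha * d + beta * d * d) * n_frac)   # survival after course
tcp_poisson = np.exp(-N0 * sf)
print(f"surviving fraction {sf:.3e}, Poisson TCP {tcp_poisson:.3f}")

survivors = rng.poisson(N0 * sf, size=20_000)
print(f"Monte Carlo TCP {np.mean(survivors == 0):.3f}")
```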

  17. Prediction of Protein-DNA binding by Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Deng, Yuefan; Eisenberg, Moises; Korobka, Alex

    1997-08-01

    We present an analysis and prediction of protein-DNA binding specificity based on the hydrogen bonding between DNA, protein, and auxiliary clusters of water molecules. Zif268, glucocorticoid receptor, λ-repressor mutant, HIN-recombinase, and tramtrack protein-DNA complexes are studied. Hydrogen bonds are approximated by the Lennard-Jones potential with a cutoff distance between the hydrogen and the acceptor atoms set to 3.2 Å and an angular component based on a dipole-dipole interaction. We use a three-stage docking algorithm: (1) geometric hashing that matches pairs of hydrogen bonding sites; (2) least-squares minimization of pairwise distances to filter out insignificant matches; and (3) Monte Carlo stochastic search to minimize the energy of the system. More information can be obtained from our first paper on this subject [Y. Deng et al., J. Computational Chemistry (1995)]. Results show that the biologically correct base pair is selected preferentially when there are two or more strong hydrogen bonds (with LJ potential lower than -0.20) that bind it to the protein. Predicted sequences are less stable in the case of weaker bonding sites. In general, the inclusion of water bridges does increase the number of base pairs for which correct specificity is predicted.
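
    A sketch of the scoring ingredients named above, a Lennard-Jones term cut off at 3.2 Å between hydrogen and acceptor, modulated by a dipole-like angular factor; the well depth and width below are chosen arbitrarily for illustration rather than taken from the paper.

```python
# Hydrogen-bond score: LJ term with 3.2 Angstrom cutoff times an angular
# (dipole-like) modulation. EPS and SIGMA are illustrative assumptions.
import numpy as np

EPS, SIGMA, CUTOFF = 0.25, 2.6, 3.2    # energy units, Angstrom (assumed)

def hbond_score(r, cos_theta):
    """r: H...acceptor distance; cos_theta: donor-H-acceptor alignment."""
    if r > CUTOFF:
        return 0.0
    lj = 4.0 * EPS * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)
    return lj * max(cos_theta, 0.0)    # angular modulation of the bond

# A bond counts as "strong" in the paper when its potential is below -0.20.
for r in (2.65, 2.9, 3.3):
    s = hbond_score(r, cos_theta=0.95)
    print(f"r = {r} A -> score {s:+.3f}"
          + ("  (strong)" if s < -0.20 else ""))
```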

  18. Numerical Stochastic Homogenization Method and Multiscale Stochastic Finite Element Method - A Paradigm for Multiscale Computation of Stochastic PDEs

    SciTech Connect

    X. Frank Xu

    2010-03-30

    Multiscale modeling of stochastic systems, or uncertainty quantification of multiscale modeling, is becoming an emerging research frontier, with rapidly growing engineering applications in nanotechnology, biotechnology, advanced materials, and geo-systems, etc. While tremendous efforts have been devoted to either stochastic methods or multiscale methods, little combined work had been done on the integration of multiscale and stochastic methods, and there was no method formally available to tackle multiscale problems involving uncertainties. By developing an innovative Multiscale Stochastic Finite Element Method (MSFEM), this research has made a ground-breaking contribution to the emerging field of Multiscale Stochastic Modeling (MSM) (Fig 1). The theory of MSFEM basically decomposes a boundary value problem of random microstructure into a slow scale deterministic problem and a fast scale stochastic one. The slow scale problem corresponds to common engineering modeling practices where fine-scale microstructure is approximated by certain effective constitutive constants, which can be solved by using standard numerical solvers. The fast scale problem evaluates fluctuations of local quantities due to random microstructure, which is important for scale-coupling systems and particularly those involving failure mechanisms. The Green-function-based fast-scale solver developed in this research overcomes the curse of dimensionality commonly met in conventional approaches, by proposing a random-field-based orthogonal expansion approach. The MSFEM formulated in this project paves the way to deliver the first computational tool/software on uncertainty quantification of multiscale systems. The applications of MSFEM to engineering problems will directly enhance our modeling capability in materials science (composite materials, nanostructures), geophysics (porous media, earthquake), and biological systems (biological tissues, bones, protein folding). Continuous development of MSFEM will

  19. Substantiation of parameters of the geometric model of the research reactor core for the calculation using the Monte Carlo method

    SciTech Connect

    Radaev, A. I. Schurovskaya, M. V.

    2015-12-15

    The choice of the spatial nodalization for the calculation of the power density and burnup distribution in a research reactor core with fuel assemblies of the IRT-3M and VVR-KN type using the program based on the Monte Carlo code is described. The influence of the spatial nodalization on the results of calculating basic neutronic characteristics and calculation time is investigated.

  20. Opportunity fuels

    SciTech Connect

    Lutwen, R.C.

    1996-12-31

    The paper consists of viewgraphs from a conference presentation. A comparison is made of opportunity fuels, defined as fuels that can be converted to other forms of energy at lower cost than standard fossil fuels. Types of fuels for which some limited technical data is provided include petroleum coke, garbage, wood waste, and tires. Power plant economics and pollution concerns are listed for each fuel, and compared to coal and natural gas power plant costs. A detailed cost breakdown for different plant types is provided for use in base fuel pricing.

  1. Continuous-time quantum Monte Carlo impurity solvers

    NASA Astrophysics Data System (ADS)

    Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias

    2011-04-01

    representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.

  2. Noise and stochastic resonance in voltage-gated ion channels

    PubMed Central

    Adair, Robert K.

    2003-01-01

    Using Monte Carlo techniques, I calculate the effects of internally generated noise on information transfer through the passage of action potential spikes along unmyelinated axons in a simple nervous system. I take the Hodgkin–Huxley (HH) description of Na and K channels in squid giant axons as the basis of the calculations and find that most signal transmission noise is generated by fluctuations in the channel open and closed populations. To bring the model closer to conventional descriptions in terms of thermal noise energy, kT, and to determine gating currents, I express the HH equations in the form of simple relations from statistical mechanics where the states are separated by a Gibbs energy that is modified by the action of the transmembrane potential on dipole moments held by the domains. Using the HH equations, I find that the output response (in the probability of action potential spikes) from small input potential pulses across the cell membrane is increased by added noise but falls off when the input noise becomes large, as in stochastic resonance models. That output noise response is sharply reduced by a small increase in the membrane polarization potential or a moderate increase in the channel densities. Because any reduction of noise incurs metabolic and developmental costs to an animal, the natural noise level is probably optimal and any increase in noise is likely to be harmful. Although these results are specific to signal transmission in unmyelinated axons, I suggest that the conclusions are likely to be general. PMID:14506291
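
    The dominant noise source identified here, fluctuation of the open-channel population, can be sketched with a two-state Markov ensemble; the opening and closing rates below are illustrative, not the fitted HH squid-axon values. The relative fluctuation of the open fraction falls off roughly as the inverse square root of the channel count, which is why a moderate increase in channel density suppresses the noise.

```python
# Two-state channel ensemble: each channel opens with rate alpha and
# closes with rate beta; population fluctuations shrink as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(9)
alpha_rate, beta_rate = 50.0, 200.0   # open / close rates, 1/s (assumed)
dt, n_steps = 1e-5, 10_000

for n_channels in (100, 10_000):
    open_state = np.zeros(n_channels, dtype=bool)
    frac = []
    for _ in range(n_steps):
        u = rng.random(n_channels)
        open_state = np.where(open_state,
                              u >= beta_rate * dt,    # open: stays unless closing
                              u < alpha_rate * dt)    # closed: opens with prob a*dt
        frac.append(open_state.mean())
    frac = np.array(frac[n_steps // 2:])              # discard transient
    print(f"N={n_channels:6d}: mean open fraction {frac.mean():.3f}, "
          f"relative fluctuation {frac.std() / frac.mean():.3f}")
```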

  3. Bayesian analysis of botanical epidemics using stochastic compartmental models.

    PubMed

    Gibson, G J; Kleczkowski, A; Gilligan, C A

    2004-08-17

    A stochastic model for an epidemic, incorporating susceptible, latent, and infectious states, is developed. The model represents primary and secondary infection rates and a time-varying host susceptibility with applications to a wide range of epidemiological systems. A Markov chain Monte Carlo algorithm is presented that allows the model to be fitted to experimental observations within a Bayesian framework. The approach allows the uncertainty in unobserved aspects of the process to be represented in the parameter posterior densities. The methods are applied to experimental observations of damping-off of radish (Raphanus sativus) caused by the fungal pathogen Rhizoctonia solani, in the presence and absence of the antagonistic fungus Trichoderma viride, a biological control agent that has previously been shown to affect the rate of primary infection by using a maximum-likelihood estimate for a simpler model with no allowance for a latent period. Using the Bayesian analysis, we are able to estimate the latent period from population data, even when there is uncertainty in discriminating infectious from latently infected individuals in data collection. We also show that the inference that T. viride can control primary, but not secondary, infection is robust to inclusion of the latent period in the model, although the absolute values of the parameters change. Some refinements and potential difficulties with the Bayesian approach in this context, when prior information on parameters is lacking, are discussed along with broader applications of the methods to a wide range of epidemiological systems.
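
    A stripped-down version of the fitting strategy, Metropolis MCMC over a discrete-time susceptible-infectious model with binomial infection counts (no latent class, no antagonist, flat prior), illustrates how a posterior for an infection rate is obtained; all data below are synthetic and all values are assumptions.

```python
# Metropolis MCMC for the infection rate beta of a discrete-time SI model:
# new infections per step are Binomial(S, 1 - exp(-beta * I * dt)).
import numpy as np

rng = np.random.default_rng(10)
S0, I0, dt, T, beta_true = 100, 1, 1.0, 20, 0.02

def simulate(beta):
    S, I, newly = S0, I0, []
    for _ in range(T):
        k = rng.binomial(S, 1.0 - np.exp(-beta * I * dt))
        S, I = S - k, I + k
        newly.append(k)
    return newly

data = simulate(beta_true)          # synthetic "observations"

def log_lik(beta):                  # binomial coefficients omitted (constant)
    S, I, ll = S0, I0, 0.0
    for k in data:
        p = 1.0 - np.exp(-beta * I * dt)
        ll += k * np.log(p + 1e-12) + (S - k) * np.log(1.0 - p + 1e-12)
        S, I = S - k, I + k
    return ll

beta = 0.05
ll, chain = log_lik(beta), []
for _ in range(20_000):             # random-walk Metropolis, flat prior b > 0
    prop = beta + rng.normal(0.0, 0.005)
    if prop > 0.0:
        ll_prop = log_lik(prop)
        if np.log(rng.random()) < ll_prop - ll:
            beta, ll = prop, ll_prop
    chain.append(beta)
post = np.array(chain[5000:])
print(f"true beta = {beta_true}; posterior mean {post.mean():.4f} "
      f"+/- {post.std():.4f}")
```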

  4. Stochastic nature of clathrin-coated pit assembly

    NASA Astrophysics Data System (ADS)

    Banerjee, Anand; Berezhkovskii, Alexander; Nossal, Ralph

    2013-03-01

    Clathrin-mediated endocytosis is a complex process through which eukaryotic cells internalize various macromolecules (cargo). The process occurs via the formation of invaginations on the cell membrane, called clathrin-coated pits (CCPs). The dynamics of CCP formation shows remarkable variability. After initiation, a fraction of CCPs, called "productive pits", bind to cargo and then grow and mature into clathrin-coated vesicles (CCVs). In contrast, a large fraction of CCPs, called "abortive pits", fail to bind to cargo, grow only up to intermediate sizes and then disassemble. There is notable heterogeneity in the lifetimes of both productive and abortive pits. We propose a stochastic model of CCP dynamics to explain these experimental observations. Our model includes a kinetic scheme for CCP assembly and a related functional form for the dependence of the free energy of a CCP on its size. Using this model, we calculate the lifetime distribution of abortive pits (via Monte Carlo simulation) and show that the distribution fits experimental data very well. By fitting the data we determine the free energy of CCP formation and show that CCPs without cargo are energetically unstable. We also suggest a mechanism by which cargo binding stabilizes CCPs and facilitates their growth.

  5. Stochastic Effects in Computational Biology of Space Radiation Cancer Risk

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Pluth, Janis; Harper, Jane; O'Neill, Peter

    2007-01-01

    Estimating risk from space radiation poses important questions on the radiobiology of protons and heavy ions. We are considering systems biology models to study radiation-induced repair foci (RIRF) at low doses, in which less than one track on average traverses the cell, and the subsequent DNA damage processing and signal transduction events. Computational approaches for describing protein regulatory networks coupled to DNA and oxidative damage sites include systems of differential equations, stochastic equations, and Monte Carlo simulations. We review recent developments in the mathematical description of protein regulatory networks and possible approaches to radiation effects simulation. These include robustness, which states that regulatory networks maintain their functions against external and internal perturbations due to compensating properties of redundancy and molecular feedback controls, and modularity, which leads to general theorems for considering molecules that interact through a regulatory mechanism without exchange of matter, leading to a block diagonal reduction of the connecting pathways. Identifying rate-limiting steps, robustness, and modularity in pathways perturbed by radiation damage are shown to be valid techniques for reducing large molecular systems to realistic computer simulations. Other techniques studied are the use of steady-state analysis, and the introduction of composite molecules or rate constants to represent small collections of reactants. Applications of these techniques to describe spatial and temporal distributions of RIRF and cell populations following low dose irradiation are described.

  6. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.

  7. Applicability of 3D Monte Carlo simulations for local values calculations in a PWR core

    NASA Astrophysics Data System (ADS)

    Bernard, Franck; Cochet, Bertrand; Jinaphanh, Alexis; Jacquet, Olivier

    2014-06-01

    As technical support of the French Nuclear Safety Authority, IRSN has been developing the MORET Monte Carlo code for many years in the framework of criticality safety assessment and is now working to extend its application to reactor physics. For that purpose, beside the validation for criticality safety (more than 2000 benchmarks from the ICSBEP Handbook have been modeled and analyzed), a complementary validation phase for reactor physics has been started, with benchmarks from IRPHEP Handbook and others. In particular, to evaluate the applicability of MORET and other Monte Carlo codes for local flux or power density calculations in large power reactors, it has been decided to contribute to the "Monte Carlo Performance Benchmark" (hosted by OECD/NEA). The aim of this benchmark is to monitor, in forthcoming decades, the performance progress of detailed Monte Carlo full core calculations. More precisely, it measures their advancement towards achieving high statistical accuracy in reasonable computation time for local power at fuel pellet level. A full PWR reactor core is modeled to compute local power densities for more than 6 million fuel regions. This paper presents results obtained at IRSN for this benchmark with MORET and comparisons with MCNP. The number of fuel elements is so large that source convergence as well as statistical convergence issues could cause large errors in local tallies, especially in peripheral zones. Various sampling or tracking methods have been implemented in MORET, and their operational effects on such a complex case have been studied. Beyond convergence issues, to compute local values in so many fuel regions could cause prohibitive slowing down of neutron tracking. To avoid this, energy grid unification and tallies preparation before tracking have been implemented, tested and proved to be successful. In this particular case, IRSN obtained promising results with MORET compared to MCNP, in terms of local power densities, standard

  8. Coupled Monte Carlo neutronics and thermal hydraulics for power reactors

    SciTech Connect

    Bernnat, W.; Buck, M.; Mattes, M.; Zwermann, W.; Pasichnyk, I.; Velkov, K.

    2012-07-01

    The availability of high performance computing resources enables more and more the use of detailed Monte Carlo models even for full core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures, e.g. in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. The second problem arises with the preparation of corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems can be solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic Modular High Temperature Reactor using THERMIX for thermal-hydraulics. (authors)
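
    The temperature treatment can be sketched as interpolation between a small number of tabulated library temperatures; linear interpolation in sqrt(T) is a common convention for Doppler-broadened data and is an assumption here, as the paper's exact scheme may differ in detail. The cross-section values below are made up.

```python
# Sketch: evaluate a cross section at the local thermal-hydraulic
# temperature by sqrt(T) interpolation between tabulated library sets.
import numpy as np

T_grid = np.array([300.0, 600.0, 900.0, 1200.0])   # library temps (K)
sigma_grid = np.array([11.2, 12.0, 12.6, 13.1])    # made-up barns

def sigma_at(T):
    """Interpolate linearly in sqrt(T) between bracketing library sets."""
    return np.interp(np.sqrt(T), np.sqrt(T_grid), sigma_grid)

for T in (450.0, 750.0, 1050.0):     # temperatures from the T/H code
    print(f"T = {T:6.1f} K -> sigma = {sigma_at(T):.2f} b")
```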

  9. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

    KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared memory machines and another for distributed memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared memory version. A FDDI network of 6 HP9000/735 was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.

  10. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches. PMID:24298424
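
    The rescaling idea itself is independent of the GPU and can be sketched on the CPU: a single baseline run records each detected photon's total pathlength, and the reflectance for any absorption coefficient then follows by Beer-Lambert reweighting. The baseline pathlength distribution below is a random stand-in, not a real tissue simulation, and the result is a relative (uncalibrated) reflectance.

```python
# Sketch of single-run rescaling: reweight recorded photon pathlengths
# analytically for any absorption coefficient mu_a (Beer-Lambert).
import numpy as np

rng = np.random.default_rng(11)

# Stand-in for a baseline Monte Carlo run: pathlengths (cm) of photons
# that exited toward the detector, drawn here from a made-up gamma law.
n_photons = 200_000
pathlengths = rng.gamma(shape=2.0, scale=1.0, size=n_photons)

def diffuse_reflectance(mu_a):
    """Reweight the one baseline run for a new absorption coefficient."""
    return np.mean(np.exp(-mu_a * pathlengths))

for mu_a in (0.01, 0.1, 1.0):        # 1/cm
    print(f"mu_a = {mu_a:4.2f} /cm -> R = {diffuse_reflectance(mu_a):.4f}")
```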

  11. Hamiltonian Monte Carlo algorithm for the characterization of hydraulic conductivity from the heat tracing data

    NASA Astrophysics Data System (ADS)

    Djibrilla Saley, A.; Jardani, A.; Soueid Ahmed, A.; Raphael, A.; Dupont, J. P.

    2016-11-01

    Estimating spatial distributions of the hydraulic conductivity in heterogeneous aquifers has always been an important and challenging task in hydrology. Generally, the hydraulic conductivity field is determined from hydraulic head or pressure measurements. In the present study, we propose to use temperature data as a source of information for characterizing the spatial distributions of the hydraulic conductivity field. In this way, we performed a laboratory sandbox experiment with the aim of imaging the heterogeneities of the hydraulic conductivity field from thermal monitoring. During the laboratory experiment, we injected a hot water pulse, which induces a heat plume motion into the sandbox. The induced plume was followed by a set of thermocouples placed in the sandbox. After the temperature data acquisition, we performed a hydraulic tomography using the stochastic Hybrid Monte Carlo approach, also called the Hamiltonian Monte Carlo (HMC) algorithm, to invert the temperature data. This algorithm is based on a combination of the Metropolis Monte Carlo method and the Hamiltonian dynamics approach. The parameterization of the inverse problem was done with the Karhunen-Loève (KL) expansion to reduce the dimensionality of the unknown parameters. Our approach has provided successful reconstruction of the hydraulic conductivity field with low computational effort.
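
    The Karhunen-Loève parameterization used to shrink the number of unknowns can be sketched on a 1D grid with an assumed exponential covariance; the correlation length, variance, and truncation order below are illustrative, not the paper's values.

```python
# Discrete Karhunen-Loeve expansion of a 1D random (log-conductivity)
# field: keep the leading eigenpairs of an exponential covariance.
import numpy as np

n, corr_len, var = 200, 0.2, 1.0
x = np.linspace(0.0, 1.0, n)
C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

eigval, eigvec = np.linalg.eigh(C)              # ascending order
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]  # sort descending

m = 15                                  # truncation: m << n unknowns
rng = np.random.default_rng(12)
xi = rng.normal(size=m)                 # the m parameters one inverts for
field = eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi)

captured = eigval[:m].sum() / eigval.sum()
print(f"{m} modes capture {captured:.1%} of the field variance; "
      f"realization std = {field.std():.2f}")
```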

  12. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    PubMed

    Armas-Pérez, Julio C; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P; de Pablo, Juan J

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
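
    A drastically simplified sketch of the approach: the Q-tensor is replaced by a scalar order parameter on a 1D grid, the free energy combines a Landau-type bulk term with finite-difference gradients, and relaxation proceeds by Metropolis sampling at a low effective temperature. All coefficients are invented for illustration.

```python
# Metropolis minimization of a Landau-type free energy functional with
# finite-difference gradients (scalar stand-in for the Q-tensor field).
import numpy as np

rng = np.random.default_rng(13)
n, a, b, kappa, temp = 100, -1.0, 1.0, 0.5, 0.05
phi = rng.normal(0.0, 0.1, n)              # random initial configuration

def free_energy(f):
    bulk = a * f ** 2 + b * f ** 4                  # double-well bulk term
    grad = kappa * np.diff(f, append=f[-1]) ** 2    # finite-difference grad
    return bulk.sum() + grad.sum()

F = free_energy(phi)
for _ in range(20_000):
    i = rng.integers(n)
    trial = phi.copy()
    trial[i] += rng.normal(0.0, 0.1)       # local stochastic move
    F_trial = free_energy(trial)
    if F_trial < F or rng.random() < np.exp(-(F_trial - F) / temp):
        phi, F = trial, F_trial            # Metropolis acceptance
print(f"final free energy {F:.2f}; bulk minima at phi = +/-{np.sqrt(0.5):.3f}")
```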

  13. A comparison of generalized hybrid Monte Carlo methods with and without momentum flip

    SciTech Connect

    Akhmatskaya, Elena; Bou-Rabee, Nawaf; Reich, Sebastian

    2009-04-01

    The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display a favorable behavior in terms of sampling efficiency, i.e., the traditional GHMC/GSHMC implementations with momentum flip have the advantage of a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.

  14. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    SciTech Connect

    Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando; Hernandez-Ortiz, Juan P.; de Pablo, Juan J.

    2015-07-27

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  15. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields

    SciTech Connect

    Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P.; Pablo, Juan J. de

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  16. Synthetic Fuel

    ScienceCinema

    Idaho National Laboratory - Steve Herring, Jim O'Brien, Carl Stoots

    2016-07-12

    Two global energy priorities today are finding environmentally friendly alternatives to fossil fuels and reducing greenhouse gases.

  17. Fuel cells

    NASA Astrophysics Data System (ADS)

    1984-12-01

    The US Department of Energy (DOE), Office of Fossil Energy, has supported and managed a fuel cell research and development (R and D) program since 1976. Responsibility for implementing DOE's fuel cell program, which includes activities related to both fuel cells and fuel cell systems, has been assigned to the Morgantown Energy Technology Center (METC) in Morgantown, West Virginia. The total United States effort of the private and public sectors in developing fuel cell technology is referred to as the National Fuel Cell Program (NFCP). The goal of the NFCP is to develop fuel cell power plants for base-load and dispersed electric utility systems, industrial cogeneration, and on-site applications. To achieve this goal, the fuel cell developers, electric and gas utilities, research institutes, and Government agencies are working together. Four organized groups are coordinating the diversified activities of the NFCP. The status of the overall program is reviewed in detail.

  18. Synthetic Fuel

    SciTech Connect

    Idaho National Laboratory - Steve Herring, Jim O'Brien, Carl Stoots

    2008-03-26

    Two global energy priorities today are finding environmentally friendly alternatives to fossil fuels and reducing greenhouse gases.

  19. Monte Carlo Ion Transport Analysis Code.

    2009-04-15

    Version: 00 TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered are composed of multilayer polyatomic materials.

  20. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  1. Analytical Applications of Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)

  2. Monte Carlo simulation of aorta autofluorescence

    NASA Astrophysics Data System (ADS)

    Kuznetsova, A. A.; Pushkareva, A. E.

    2016-08-01

    Results of numerical simulation of autofluorescence of the aorta by the Monte Carlo method are reported. Two states of the aorta, normal and with atherosclerotic lesions, are studied. A model of the studied tissue is developed on the basis of information about optical, morphological, and physico-chemical properties. It is shown that the data obtained by numerical Monte Carlo simulation are in good agreement with experimental results, indicating the adequacy of the developed model of aorta autofluorescence.

  3. Neural network and Monte Carlo simulation approach to investigate variability of copper concentration in phytoremediated contaminated soils.

    PubMed

    Hattab, Nour; Hambli, Ridha; Motelica-Heino, Mikael; Mench, Michel

    2013-11-15

    The statistical variation of soil properties and their stochastic combinations may affect the extent of soil contamination by metals. This paper describes a method for the stochastic analysis of the effects of the variation in some selected soil factors (pH, DOC and EC) on the concentration of copper in dwarf bean leaves (phytoavailability) grown in the laboratory on contaminated soils treated with different amendments. The method is based on a hybrid modeling technique that combines an artificial neural network (ANN) and Monte Carlo Simulations (MCS). Because the repeated analyses required by MCS are time-consuming, the ANN is employed to predict the copper concentration in dwarf bean leaves in response to stochastic (random) combinations of soil inputs. The input data for the ANN are a set of selected soil parameters generated randomly according to a Gaussian distribution to represent the parameter variabilities. The output is the copper concentration in bean leaves. The results obtained by the stochastic (hybrid) ANN-MCS method show that the proposed approach may be applied (i) to perform a sensitivity analysis of soil factors in order to quantify the most important soil parameters including soil properties and amendments on a given metal concentration, (ii) to contribute toward the development of decision-making processes at a large field scale such as the delineation of contaminated sites.
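
    The hybrid loop can be sketched with a cheap stand-in for the trained ANN: Gaussian inputs represent the variability of pH, DOC, and EC, and a crude one-factor-at-a-time decomposition ranks their contributions to the output variance. The surrogate function and all distribution parameters are invented for illustration.

```python
# Hybrid ANN-MCS sketch: Monte Carlo sampling of Gaussian soil factors
# through a surrogate model, plus a crude per-factor variance ranking.
import numpy as np

rng = np.random.default_rng(14)

def ann_surrogate(pH, DOC, EC):
    """Stand-in for the trained ANN mapping soil factors to leaf Cu."""
    return 120.0 - 12.0 * pH + 0.8 * DOC + 2.5 * EC

n = 100_000
pH  = rng.normal(6.5, 0.4, n)     # soil factor variabilities (assumed)
DOC = rng.normal(25.0, 6.0, n)
EC  = rng.normal(3.0, 0.8, n)
cu  = ann_surrogate(pH, DOC, EC)
print(f"leaf Cu: mean {cu.mean():.1f}, std {cu.std():.1f} (illustrative)")

# One-at-a-time sensitivity: variance contributed by each factor alone.
base = dict(pH=6.5, DOC=25.0, EC=3.0)
for name, samples in (("pH", pH), ("DOC", DOC), ("EC", EC)):
    args = dict(base, **{name: samples})
    contrib = ann_surrogate(args["pH"], args["DOC"], args["EC"]).var()
    print(f"variance from {name} alone: {contrib:.1f}")
```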

  4. Fossil Fuels.

    ERIC Educational Resources Information Center

    Crank, Ron

    This instructional unit is one of 10 developed by students on various energy-related areas that deals specifically with fossil fuels. Some topics covered are historic facts, development of fuels, history of oil production, current and future trends of the oil industry, refining fossil fuels, and environmental problems. Material in each unit may…

  5. Probability Forecasting Using Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Duncan, M.; Frisbee, J.; Wysack, J.

    2014-09-01

    Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft. Thus new, more sophisticated collision threat characterization methods must be implemented. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high risk and potential high risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholder's course of action. So constructing a method for estimating how the collision probability will evolve improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as time to the event is reduced. Collision probability forecasting is a predictive process where the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set of possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trial. This yields a
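
    One Monte Carlo trial set of the kind described can be sketched directly: sample the relative miss vector at closest approach from the combined state uncertainty and count the fraction of trials falling inside the combined hard-body radius. The nominal miss, covariance, and radius below are invented for illustration.

```python
# Monte Carlo collision probability in the encounter plane: fraction of
# sampled relative-position trials inside the combined hard-body radius.
import numpy as np

rng = np.random.default_rng(15)
nominal_miss = np.array([120.0, -80.0])        # meters (assumed)
cov = np.array([[90.0 ** 2, 0.0],
                [0.0, 40.0 ** 2]])             # combined covariance (assumed)
hard_body_radius = 20.0                        # combined object radius (m)

n_trials = 1_000_000
samples = rng.multivariate_normal(nominal_miss, cov, n_trials)
hit = np.linalg.norm(samples, axis=1) < hard_body_radius
p_c = hit.mean()
print(f"collision probability ~ {p_c:.2e} "
      f"(+/- {np.sqrt(p_c * (1 - p_c) / n_trials):.1e})")
```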

  6. A split-step method to include electron-electron collisions via Monte Carlo in multiple rate equation simulations

    NASA Astrophysics Data System (ADS)

    Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel; Gulley, Jeremy R.

    2016-10-01

    A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron-phonon collisions, and single photon absorption by free-carriers. The second step is stochastic and models electron-electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron-electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
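
    The split-step structure (a deterministic rate-equation update followed by a stochastic Monte Carlo collision step) can be sketched for a sampled electron ensemble; the heating rate, collision probability, and pairwise energy-repartition rule below are illustrative stand-ins for the paper's physics, not its actual model.

```python
# Structural sketch of the split-step scheme: each time step applies a
# deterministic heating update, then a stochastic e-e collision step that
# exchanges energy between random electron pairs (energy-conserving).
import numpy as np

rng = np.random.default_rng(16)
n_e, dt, n_steps = 10_000, 1e-16, 500      # electrons, step (s), steps
heat_rate, coll_prob = 3e13, 0.05          # eV/s heating; collision prob
energy = rng.exponential(0.5, n_e)         # initial energies (eV)

for _ in range(n_steps):
    # Step 1 (deterministic): free-carrier heating, as in rate equations.
    energy += heat_rate * dt
    # Step 2 (stochastic): Monte Carlo pair collisions thermalize the gas.
    idx = rng.permutation(n_e)
    pairs = idx[: 2 * int(coll_prob * n_e // 2)].reshape(-1, 2)
    total = energy[pairs].sum(axis=1)       # pair energy is conserved
    share = rng.random(len(pairs))          # random repartition per pair
    energy[pairs[:, 0]] = share * total
    energy[pairs[:, 1]] = (1.0 - share) * total

print(f"mean energy {energy.mean():.2f} eV, "
      f"std/mean {energy.std() / energy.mean():.2f}")
```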

  7. Kinetic Monte Carlo simulations of travelling pulses and spiral waves in the lattice Lotka-Volterra model.

    PubMed

    Makeev, Alexei G; Kurkina, Elena S; Kevrekidis, Ioannis G

    2012-06-01

    Kinetic Monte Carlo simulations are used to study the stochastic two-species Lotka-Volterra model on a square lattice. For certain values of the model parameters, the system constitutes an excitable medium: travelling pulses and rotating spiral waves can be excited. Stable solitary pulses travel with constant (modulo stochastic fluctuations) shape and speed along a periodic lattice. The spiral waves observed persist sometimes for hundreds of rotations, but they are ultimately unstable and break up (because of fluctuations and interactions between neighboring fronts), giving rise to complex dynamic behavior in which numerous small spiral waves rotate and interact with each other. It is interesting that travelling pulses and spiral waves can be exhibited by the model even for completely immobile species, due to the non-local reaction kinetics.

  8. Kinetic Monte Carlo simulations of travelling pulses and spiral waves in the lattice Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Makeev, Alexei G.; Kurkina, Elena S.; Kevrekidis, Ioannis G.

    2012-06-01

    Kinetic Monte Carlo simulations are used to study the stochastic two-species Lotka-Volterra model on a square lattice. For certain values of the model parameters, the system constitutes an excitable medium: travelling pulses and rotating spiral waves can be excited. Stable solitary pulses travel with constant (modulo stochastic fluctuations) shape and speed along a periodic lattice. The spiral waves observed persist sometimes for hundreds of rotations, but they are ultimately unstable and break-up (because of fluctuations and interactions between neighboring fronts) giving rise to complex dynamic behavior in which numerous small spiral waves rotate and interact with each other. It is interesting that travelling pulses and spiral waves can be exhibited by the model even for completely immobile species, due to the non-local reaction kinetics.

  9. Stochastic Microlensing: Mathematical Theory and Applications

    NASA Astrophysics Data System (ADS)

    Teguia, Alberto Mokak

    Stochastic microlensing is a central tool in probing dark matter on galactic scales. From first principles, we initiate the development of a mathematical theory of stochastic microlensing. We first construct a natural probability space for stochastic microlensing and characterize the general behaviour of the random time delay functions' random critical sets. Next we study stochastic microlensing in two distinct random microlensing scenarios: The uniform stars' distribution with constant mass spectrum and the spatial stars' distribution with general mass spectrum. For each scenario, we determine exact and asymptotic (in the large number of point masses limit) stochastic properties of the random time delay functions and associated random lensing maps and random shear tensors, including their moments and asymptotic density functions. We use these results to study certain random observables, such as random fixed lensed images, random bending angles, and random magnifications. These results are relevant to the theory of random fields and provide a platform for further generalizations as well as analytical limits for checking astrophysical studies of stochastic microlensing. Continuing our development of a mathematical theory of stochastic microlensing, we study the stochastic version of the Image Counting Problem, first considered in the non-random setting by Einstein and generalized by Petters. In particular, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images for a general random lensing scenario. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to the uniform stars' distribution random microlensing scenario, we calculate the asymptotic global
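
    For orientation, the Kac-Rice approach mentioned above rests on a formula of the following schematic form for the expected number of critical points (lensed images) of a random time-delay surface T over a domain D; this is the generic statement, not the paper's specific result:

```latex
\mathbb{E}[N]
  \;=\; \int_{D}
        \mathbb{E}\!\left[\,\bigl|\det \nabla^{2} T(\mathbf{x})\bigr|
        \;\middle|\; \nabla T(\mathbf{x}) = \mathbf{0}\right]
        \, p_{\nabla T(\mathbf{x})}(\mathbf{0})\; d\mathbf{x}
```

    Here the density of the random gradient is evaluated at zero; in two dimensions, restricting the expectation to a negative Hessian determinant isolates the saddle images, which is how an expected saddle-image count of the kind described above can be extracted.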

  10. Random musings on stochastics (Lorenz Lecture)

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2014-12-01

    In 1960 Lorenz identified the chaotic nature of atmospheric dynamics, thus highlighting the importance of the discovery of chaos by Poincaré, 70 years earlier, in the motion of three bodies. Chaos in the macroscopic world offered a natural way to explain unpredictability, that is, randomness. Concurrently with Poincaré's discovery, Boltzmann introduced statistical physics, while soon after, Borel and Lebesgue laid the foundation of measure theory, later (in the 1930s) used by Kolmogorov as the formal foundation of probability theory. Subsequently, Kolmogorov and Khinchin introduced the concepts of stochastic processes and stationarity, and advanced the concept of ergodicity. All these areas are now collectively described by the term "stochastics", which includes probability theory, stochastic processes and statistics. As paradoxical as it may seem, stochastics offers the tools to deal with chaos, even if it results from deterministic dynamics. As chaos entails uncertainty, it is more informative and effective to replace the study of exact system trajectories with that of probability densities. Also, as the exact laws of complex systems can hardly be deduced by synthesis of the detailed interactions of system components, these laws should inevitably be inferred by induction, based on observational data and using statistics. The arithmetic of stochastics is quite different from that of regular numbers. Accordingly, it needs the development of intuition and interpretations which differ from those built upon deterministic considerations. Using stochastic tools in a deterministic context may result in mistaken conclusions. In an attempt to contribute to a more correct interpretation and use of stochastic concepts in typical tasks of nonlinear systems, several examples are studied, which aim (a) to clarify the difference in the meaning of linearity in deterministic and stochastic contexts; (b) to contribute to a more attentive use of stochastic concepts (entropy, statistical

  11. Trace distance in stochastic dephasing with initial correlation

    SciTech Connect

    Ban, Masashi; Kitajima, Sachiko; Shibata, Fumiaki

    2011-10-15

    The time evolution of the trace distance between quantum states of a qubit which is placed under the influence of stochastic dephasing is investigated within the framework of the stochastic Liouville equation. When stochastic dephasing is subject to the homogeneous Gauss-Markov process, the trace distance is exactly calculated in the presence of the initial correlation between the qubit and the stochastic process, where the stochastic process is inevitably a nonstationary process. It is found that even the initial correlation with the classical environment can make the trace distance greater than the initial value if stochastic dephasing causes the slow modulation of the qubit.
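
    As a concrete illustration of the quantity being tracked, the sketch below computes the trace distance D(rho1, rho2) = (1/2) Tr |rho1 - rho2| for a dephasing qubit. The simple exponential decay of the coherences is a generic pure-dephasing stand-in, not the paper's Gauss-Markov result with initial correlation.

```python
import numpy as np

def trace_distance(rho1, rho2):
    # D(rho1, rho2) = (1/2) Tr |rho1 - rho2|; for Hermitian matrices the
    # trace norm equals the sum of the absolute eigenvalues.
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho1 - rho2)))

def dephase(rho, decay):
    # Pure dephasing damps the off-diagonal coherences by `decay`, e.g.
    # decay = exp(-Gamma * t) for a simple Markovian phase-noise model.
    out = rho.copy()
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)     # |+><+|
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)  # |-><-|

for decay in (1.0, 0.5, 0.1):
    d = trace_distance(dephase(plus, decay), dephase(minus, decay))
    print(f"coherence factor {decay:.1f}: trace distance = {d:.2f}")
```

    For this pair the trace distance equals the coherence factor itself, showing how dephasing erodes distinguishability; the paper's point is that an initial correlation with the environment can push the trace distance above its initial value, which an uncorrelated model of this kind cannot do.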

  12. Stochastic dynamics and non-equilibrium thermodynamics of a bistable chemical system: the Schlögl model revisited.

    PubMed

    Vellela, Melissa; Qian, Hong

    2009-10-01

    Schlögl's model is the canonical example of a chemical reaction system that exhibits bistability. Because the biological examples of bistability and switching behaviour are increasingly numerous, this paper presents an integrated deterministic, stochastic and thermodynamic analysis of the model. After a brief review of the deterministic and stochastic modelling frameworks, the concepts of chemical and mathematical detailed balances are discussed and non-equilibrium conditions are shown to be necessary for bistability. Thermodynamic quantities such as the flux, chemical potential and entropy production rate are defined and compared across the two models. In the bistable region, the stochastic model exhibits an exchange of the global stability between the two stable states under changes in the pump parameters and volume size. The stochastic entropy production rate shows a sharp transition that mirrors this exchange. A new hybrid model that includes continuous diffusion and discrete jumps is suggested to deal with the multiscale dynamics of the bistable system. Accurate approximations of the exponentially small eigenvalue associated with the time scale of this switching and the full time-dependent solution are calculated using MATLAB. A breakdown of previously known asymptotic approximations on small volume scales is observed through comparison with these and Monte Carlo results. Finally, the appendix illustrates how the diffusion approximation of the chemical master equation can fail to correctly represent the mesoscopically interesting steady-state behaviour of the system.
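
    A minimal Gillespie-type stochastic simulation of the Schlögl system follows, with the buffered species A and B folded into the rate constants. The rate values are illustrative assumptions (bistability appears only for suitably tuned parameters), not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Schlogl reactions with buffered A, B absorbed into the rates:
#   R1: 2X -> 3X,  R2: 3X -> 2X,  R3: 0 -> X,  R4: X -> 0
k1, k2, k3, k4 = 0.18, 2.5e-4, 2200.0, 37.5   # illustrative values

def propensities(x):
    return np.array([
        k1 * x * (x - 1) / 2.0,            # R1
        k2 * x * (x - 1) * (x - 2) / 6.0,  # R2
        k3,                                # R3
        k4 * x,                            # R4
    ])

STOICH = np.array([+1, -1, +1, -1])

def gillespie(x, t_end):
    # Standard SSA: exponential waiting time at the total propensity, then
    # a reaction chosen with probability proportional to its propensity.
    t, times, states = 0.0, [0.0], [x]
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)
        x += STOICH[rng.choice(4, p=a / a0)]
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie(x=250, t_end=2.0)
print(f"final copy number at t = {times[-1]:.2f}: {states[-1]}")
```

    Long trajectories of this kind hop between the two metastable states, and the mean residence times are governed by the exponentially small eigenvalue discussed in the abstract.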

  13. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids

    PubMed Central

    Ali, S. M.; Mehmood, C. A.; Khan, B.; Jawad, M.; Farid, U.; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, and are therefore naturally described by stochastic models. These stochastically varying demands have put policy makers and supplying agencies in a demanding position for optimal generation management. Utility revenue functions depend strongly on the stochastic models assumed for consumer demand. Sudden drifts in weather parameters affect the living standards of consumers, which in turn influence power demands. Considering the above, we analyzed, stochastically and statistically, the effect of random consumer demands on the fixed and variable revenues of electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of utility revenues under time-dependent random consumer demands. The Gaussian probabilities of the utility revenues are derived from the observed patterns of varying consumer demand. Furthermore, Standard Monte Carlo (SMC) simulations were performed to validate the accuracy of the proposed probabilistic demand-revenue model. We also critically analyzed the effect of weather parameters on consumer demand using correlation and multi-linear regression schemes. This statistical analysis of consumer demand yields a relationship between the dependent variable (demand) and the independent variables (weather data) for use in utility load management, generation control, and network expansion. PMID:27314229
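
    A standard Monte Carlo sketch of the demand-to-revenue mapping follows: correlated daily demand profiles are drawn from a multivariate Gaussian and pushed through a fixed-plus-variable revenue function. The demand profile, covariance, and tariff values are hypothetical, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 24-hour demand model (MW): a sinusoidal mean profile with
# an exponentially decaying hour-to-hour correlation.
hours = np.arange(24)
mean_demand = 60 + 25 * np.sin((hours - 6) * np.pi / 12)
sigma = 6.0
cov = sigma**2 * np.exp(-np.abs(hours[:, None] - hours[None, :]) / 4.0)

fixed_charge = 500.0   # fixed revenue component, $/day (illustrative)
tariff = 12.0          # variable tariff, $/MWh (illustrative)

# Standard Monte Carlo over correlated daily demand profiles.
n_days = 50_000
demands = rng.multivariate_normal(mean_demand, cov, size=n_days)
revenues = fixed_charge + tariff * demands.sum(axis=1)

print(f"mean daily revenue : ${revenues.mean():,.0f}")
print(f"5th-95th percentile: ${np.percentile(revenues, 5):,.0f}"
      f" - ${np.percentile(revenues, 95):,.0f}")
```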

  14. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids.

    PubMed

    Ali, S M; Mehmood, C A; Khan, B; Jawad, M; Farid, U; Jadoon, J K; Ali, M; Tareen, N K; Usman, S; Majid, M; Anwar, S M

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, and are therefore naturally described by stochastic models. These stochastically varying demands have put policy makers and supplying agencies in a demanding position for optimal generation management. Utility revenue functions depend strongly on the stochastic models assumed for consumer demand. Sudden drifts in weather parameters affect the living standards of consumers, which in turn influence power demands. Considering the above, we analyzed, stochastically and statistically, the effect of random consumer demands on the fixed and variable revenues of electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of utility revenues under time-dependent random consumer demands. The Gaussian probabilities of the utility revenues are derived from the observed patterns of varying consumer demand. Furthermore, Standard Monte Carlo (SMC) simulations were performed to validate the accuracy of the proposed probabilistic demand-revenue model. We also critically analyzed the effect of weather parameters on consumer demand using correlation and multi-linear regression schemes. This statistical analysis of consumer demand yields a relationship between the dependent variable (demand) and the independent variables (weather data) for use in utility load management, generation control, and network expansion.

  15. Modeling pitting corrosion damage of high-level radioactive-waste containers, with emphasis on the stochastic approach

    SciTech Connect

    Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.

    1993-01-01

    Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container "failure", and the use of "rules of thumb" to take information from the detailed process models and provide it to the higher level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process.
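
    The flavor of such a stochastic pitting model can be conveyed with a short Monte Carlo sketch: pits initiate on each container as a Poisson process in time and deepen with a power-law growth form d = k (t - t_init)^n that is common in pitting phenomenology. All parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

lam = 0.02            # pit initiations per container per year (hypothetical)
k, n_exp = 0.08, 0.4  # growth coefficient (cm/yr^n) and exponent (hypothetical)
wall = 1.0            # container wall thickness, cm
horizon = 1000.0      # simulated period, years
n_containers = 10_000

failure_times = np.full(n_containers, np.inf)
for c in range(n_containers):
    n_pits = rng.poisson(lam * horizon)
    if n_pits == 0:
        continue
    t_init = rng.uniform(0.0, horizon, size=n_pits)
    # Pit-to-pit variability in growth kinetics via a lognormal coefficient;
    # depth = k_i * age^n penetrates the wall at age = (wall / k_i)^(1/n).
    k_i = k * rng.lognormal(mean=0.0, sigma=0.3, size=n_pits)
    failure_times[c] = (t_init + (wall / k_i) ** (1.0 / n_exp)).min()

failed = failure_times <= horizon
print(f"fraction perforated within {horizon:.0f} yr: {failed.mean():.3f}")
print(f"median perforation time (failed subset): "
      f"{np.median(failure_times[failed]):.0f} yr")
```

    A container "failure" criterion of this kind (first pit penetration) is exactly the sort of distilled output that could be handed to a higher level performance assessment model.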

  16. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids.

    PubMed

    Ali, S M; Mehmood, C A; Khan, B; Jawad, M; Farid, U; Jadoon, J K; Ali, M; Tareen, N K; Usman, S; Majid, M; Anwar, S M

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, and are therefore naturally described by stochastic models. These stochastically varying demands have put policy makers and supplying agencies in a demanding position for optimal generation management. Utility revenue functions depend strongly on the stochastic models assumed for consumer demand. Sudden drifts in weather parameters affect the living standards of consumers, which in turn influence power demands. Considering the above, we analyzed, stochastically and statistically, the effect of random consumer demands on the fixed and variable revenues of electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of utility revenues under time-dependent random consumer demands. The Gaussian probabilities of the utility revenues are derived from the observed patterns of varying consumer demand. Furthermore, Standard Monte Carlo (SMC) simulations were performed to validate the accuracy of the proposed probabilistic demand-revenue model. We also critically analyzed the effect of weather parameters on consumer demand using correlation and multi-linear regression schemes. This statistical analysis of consumer demand yields a relationship between the dependent variable (demand) and the independent variables (weather data) for use in utility load management, generation control, and network expansion. PMID:27314229

  17. Stochastic wave packet vs. direct density matrix solution of Liouville-von Neumann equations for photodesorption problems

    NASA Astrophysics Data System (ADS)

    Saalfrank, Peter

    1996-11-01

    The performance of stochastic wave packet approaches is contrasted with a direct method for numerically solving quantum open-system Liouville-von Neumann equations for photodesorption problems. As a test case, a simple one-dimensional two-state model representative of NO/Pt(111) is adopted. Both desorption induced by electronic transitions (DIET), treated by a single-dissipative-channel model, and desorption induced by multiple electronic transitions (DIMET), treated by a double-dissipative-channel model, are considered. It is found that stochastic wave packets are a memory-saving alternative to direct matrix propagation schemes. However, if statistically rare events such as the bond breaking in NO/Pt(111) are of interest, the former converge only slowly to the exact results. We also find that - in the case of coordinate-independent rates - Gadzuk's "jumping wave packet and weighted average" procedure, frequently employed to describe DIET dynamics, is a rapidly converging variant of the stochastic wave packet approach, and is therefore rigorously equivalent to the exact solution of a Liouville-von Neumann equation. The usual stochastic (Monte Carlo) wave packet approach, however, is more generally applicable and allows one, for example, to quantify the notion of "multiple" in DIMET processes.
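
    For reference, Gadzuk's weighted-average prescription referred to above can be stated schematically as follows: the observable is computed for wave packets that reside on the excited state for a time tau before jumping down, and the results are averaged over an exponential distribution of residence times with electronic lifetime tau_el (the notation here is an assumption, not taken from the paper):

```latex
\langle O(t) \rangle \;=\; \frac{1}{\tau_{\mathrm{el}}}
  \int_{0}^{\infty} e^{-\tau/\tau_{\mathrm{el}}}\, O(t;\tau)\, d\tau
```

    The abstract's observation is that this deterministic average over residence times converges rapidly, whereas a naive stochastic sampling of jump times converges slowly when the quantity of interest is a rare event such as desorption.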

  18. Simulation on reactor TRIGA Puspati core kinetics fueled with thorium (Th) based fuel element

    NASA Astrophysics Data System (ADS)

    Mohammed, Abdul Aziz; Pauzi, Anas Muhamad; Rahman, Shaik Mohmmed Haikhal Abdul; Zin, Muhamad Rawi Muhammad; Jamro, Rafhayudi; Idris, Faridah Mohamad

    2016-01-01

    In confronting global energy requirements and the search for better technologies, there is a real case for widening the range of potential variations in the design of nuclear power plants. Smaller and simpler reactors are attractive, provided they can meet safety, security, and non-proliferation standards. On the fuel cycle side, thorium fuel cycles produce much less plutonium and other radioactive transuranic elements than uranium fuel cycles. Although not fissile itself, Th-232 will absorb slow neutrons to produce uranium-233 (233U), which is fissile. By introducing thorium, the number of highly enriched uranium fuel elements can be reduced while maintaining the core neutronic performance. This paper describes the core kinetics of a small research reactor core, like that of TRIGA, fueled with a Th-filled fuel element matrix, using the general-purpose Monte Carlo N-Particle (MCNP) code.

  19. A stochastic model for annual reproductive success.

    PubMed

    Kendall, Bruce E; Wittmann, Marion E

    2010-04-01

    Demographic stochasticity can have large effects on the dynamics of small populations as well as on the persistence of rare genotypes and lineages. Survival is sensibly modeled as a binomial process, but annual reproductive success (ARS) is more complex and general models for demographic stochasticity do not exist. Here we introduce a stochastic model framework for ARS and illustrate some of its properties. We model a sequence of stochastic events: nest completion, the number of eggs or neonates produced, nest predation, and the survival of individual offspring to independence. We also allow multiple nesting attempts within a breeding season. Most of these components can be described by Bernoulli or binomial processes; the exception is the distribution of offspring number. Using clutch and litter size distributions from 53 vertebrate species, we demonstrate that among-individual variability in offspring number can usually be described by the generalized Poisson distribution. Our model framework allows the demographic variance to be calculated from underlying biological processes and can easily be linked to models of environmental stochasticity or selection because of its parametric structure. In addition, it reveals that the distributions of ARS are often multimodal and skewed, with implications for extinction risk and evolution in small populations. PMID:20163244
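
    A sketch of the staged simulation framework described above follows, with clutch size drawn from Consul's generalized Poisson distribution by inversion sampling. All parameter values are hypothetical, chosen only to exercise the machinery.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(2)

def gpois_pmf(k, lam, theta):
    # Consul's generalized Poisson:
    # P(k) = lam * (lam + k*theta)^(k-1) * exp(-(lam + k*theta)) / k!
    return lam * (lam + k * theta) ** (k - 1) * exp(-(lam + k * theta)) / factorial(k)

def gpois_sample(lam, theta, kmax=50):
    # Inversion sampling from the truncated pmf; adequate for the small
    # clutch and litter sizes considered here.
    u, cdf = rng.random(), 0.0
    for k in range(kmax + 1):
        cdf += gpois_pmf(k, lam, theta)
        if u < cdf:
            return k
    return kmax

def annual_reproductive_success(p_nest=0.8, lam=2.5, theta=0.15,
                                p_predation=0.3, p_survive=0.6, attempts=2):
    # Staged model of ARS (all parameters hypothetical): nest completion ->
    # clutch size (generalized Poisson) -> nest predation -> per-offspring
    # survival, repeated over several nesting attempts in one season.
    total = 0
    for _ in range(attempts):
        if rng.random() > p_nest:
            continue                      # nest never completed
        clutch = gpois_sample(lam, theta)
        if rng.random() < p_predation:
            continue                      # whole nest lost to predation
        total += rng.binomial(clutch, p_survive)
    return total

ars = np.array([annual_reproductive_success() for _ in range(50_000)])
print(f"mean ARS = {ars.mean():.2f}, variance = {ars.var():.2f}")
```

    Because every stage is parametric, the demographic variance follows directly from the underlying biological rates, and a histogram of `ars` makes the skew and potential multimodality noted in the abstract easy to inspect.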

  20. Stochastic resonance in models of neuronal ensembles

    SciTech Connect

    Chialvo, D. R.; Longtin, A.; Mueller-Gerkin, J.

    1997-02-01

    Two recently suggested mechanisms for the neuronal encoding of sensory information involving the effect of stochastic resonance with aperiodic time-varying inputs are considered. It is shown, using theoretical arguments and numerical simulations, that the nonmonotonic behavior with increasing noise of the correlation measures used for the so-called aperiodic stochastic resonance (ASR) scenario does not rely on the cooperative effect typical of stochastic resonance in bistable and excitable systems. Rather, ASR with slowly varying signals is more properly interpreted as linearization by noise. Consequently, the broadening of the "resonance curve" in the multineuron "stochastic resonance without tuning" scenario can also be explained by this linearization. Computation of the input-output correlation as a function of both signal frequency and noise for the model system further reveals conditions where noise-induced firing with aperiodic inputs will benefit from stochastic resonance rather than linearization by noise. Thus, our study clarifies the tuning requirements for the optimal transduction of subthreshold aperiodic signals. It also shows that a single deterministic neuron can perform as well as a network when biased into a suprathreshold regime. Finally, we show that the inclusion of a refractory period in the spike-detection scheme produces a better correlation between instantaneous firing rate and input signal.
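
    A minimal numerical illustration of the ASR setup follows: a leaky integrate-and-fire neuron driven by a slow, subthreshold aperiodic signal plus white noise, with the input-output correlation evaluated across noise amplitudes. All parameters are hypothetical; the nonmonotonic correlation-versus-noise profile is the signature discussed above.

```python
import numpy as np

rng = np.random.default_rng(9)

dt, T = 0.1, 2000.0                 # time step and duration, ms
t = np.arange(0.0, T, dt)
tau_m, v_th, v_reset = 10.0, 1.0, 0.0

# Slow aperiodic signal: low-pass filtered Gaussian noise, scaled and
# offset so that the drive stays subthreshold on its own.
slow = np.convolve(rng.normal(size=t.size), np.ones(2000) / 2000, mode="same")
signal = 0.1 * slow / slow.std() + 0.6

def smoothed_rate(noise_amp):
    # Euler-Maruyama integration of the leaky integrate-and-fire neuron;
    # returns the instantaneous firing rate (boxcar-smoothed spike train).
    v, spikes = 0.0, np.zeros(t.size)
    for i in range(t.size):
        v += dt / tau_m * (-v + signal[i]) + noise_amp * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            spikes[i], v = 1.0, v_reset
    return np.convolve(spikes, np.ones(500) / (500 * dt), mode="same")

for noise_amp in (0.1, 0.3, 0.8, 2.0):
    c = np.corrcoef(signal, smoothed_rate(noise_amp))[0, 1]
    print(f"noise = {noise_amp:.1f}: corr(signal, rate) = {c:.3f}")
```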