Science.gov

Sample records for fuel stochastic monte

  1. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2). PMID:27415383

  2. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).
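
    The core of SAMC (records 1-2) is a flat-histogram random walk whose log density-of-states estimate is corrected by a decaying gain factor after every step. The sketch below is an editorial illustration on a toy 1D Ising chain, not the authors' code; the system, move set, gain schedule t0 and run length are all assumptions.

```python
# Editorial sketch of a SAMC flat-histogram run on a toy 1D Ising chain.
# Assumptions: single-spin-flip moves, uniform target histogram, gain
# schedule gamma_t = t0/max(t0, t); not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
L = 20
spins = rng.choice([-1, 1], size=L)
E = int(-np.sum(spins * np.roll(spins, 1)))   # periodic-chain energy

levels = np.arange(-L, L + 1, 4)              # reachable energy levels
idx = {int(Ei): i for i, Ei in enumerate(levels)}
theta = np.zeros(len(levels))                 # theta[i] ~ log g(E_i)
pi = np.full(len(levels), 1.0 / len(levels))  # desired (flat) sampling
t0 = 5000.0

for t in range(1, 200_001):
    k = rng.integers(L)                       # propose one spin flip
    dE = 2 * spins[k] * (spins[k - 1] + spins[(k + 1) % L])
    i, j = idx[E], idx[E + dE]
    # accept with min(1, g_est(E_old)/g_est(E_new)) to flatten the histogram
    if np.log(rng.random()) < theta[i] - theta[j]:
        spins[k] *= -1
        E += dE
        i = j
    gamma = t0 / max(t0, t)                   # decaying gain factor
    theta += gamma * ((np.arange(len(levels)) == i) - pi)

for Ei, lg in zip(levels, theta - theta[0]):
    print(f"E = {Ei:4d}   log g(E) (relative) = {lg:7.2f}")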

  3. Optimization of Monte Carlo transport simulations in stochastic media

    SciTech Connect

    Liang, C.; Ji, W.

    2012-07-01

    This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
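
    The first of the two speedups described above, fast generation of the packed stochastic medium, rests on a nearest-neighbor search. A minimal sketch of Random Sequential Addition accelerated with a cell list follows; the box size, sphere radius and trial budget are illustrative assumptions, and the paper's poly-sized packing and neutron-tracking optimizations are not reproduced.

```python
# Sketch of Random Sequential Addition (RSA) of equal hard spheres in a box,
# accelerated with a cell list so each trial only checks neighboring cells.
# Assumptions: mono-sized spheres, non-periodic box, toy parameters.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
box, r, target_n = 10.0, 0.5, 500
cell = 2 * r                                # cell edge >= sphere diameter
cells = defaultdict(list)                   # (i,j,k) -> centers in that cell

def key(p):
    return tuple((p // cell).astype(int))

def overlaps(p):
    i, j, k = key(p)
    for di in (-1, 0, 1):                   # only 27 neighbor cells to scan
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                for q in cells[(i + di, j + dj, k + dk)]:
                    if np.sum((p - q) ** 2) < (2 * r) ** 2:
                        return True
    return False

centers, trials = [], 0
while len(centers) < target_n and trials < 200_000:
    trials += 1
    p = rng.uniform(r, box - r, size=3)     # keep the sphere inside the box
    if not overlaps(p):
        cells[key(p)].append(p)
        centers.append(p)

phi = len(centers) * (4 / 3) * np.pi * r**3 / box**3
print(f"placed {len(centers)} spheres in {trials} trials, packing fraction {phi:.3f}")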

  4. Stabilized multilevel Monte Carlo method for stiff stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Abdulle, Assyr; Blumenthal, Adrian

    2013-10-01

    A multilevel Monte Carlo (MLMC) method for mean-square stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit methods deteriorates because the time-step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while keeping the computational algorithm fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
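
    For readers new to MLMC, the telescoping-sum estimator that the abstract builds on fits in a few lines. The sketch below uses a plain Euler-Maruyama scheme on geometric Brownian motion with hand-picked per-level sample sizes; the stabilized integrators that are the actual contribution of the paper are not shown.

```python
# Sketch of a multilevel Monte Carlo (MLMC) estimator for E[X_T] of geometric
# Brownian motion dX = mu X dt + sigma X dW via Euler-Maruyama with coupled
# coarse/fine paths. Assumptions: fixed sample sizes per level rather than
# the adaptive allocation of a production MLMC code.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, X0, T = 0.05, 0.2, 1.0, 1.0

def level_estimator(l, n_samples, M=4):
    """Mean of P_l - P_{l-1}, where P_l uses M**l time steps."""
    nf = M ** l
    dtf = T / nf
    acc = 0.0
    for _ in range(n_samples):
        dW = rng.normal(0.0, np.sqrt(dtf), nf)    # fine Brownian increments
        Xf = X0
        for i in range(nf):
            Xf += mu * Xf * dtf + sigma * Xf * dW[i]
        if l == 0:
            acc += Xf
        else:                                     # the coarse path reuses the
            dtc = T / (nf // M)                   # same Brownian increments
            Xc = X0
            for i in range(nf // M):
                dWc = dW[M * i:M * (i + 1)].sum()
                Xc += mu * Xc * dtc + sigma * Xc * dWc
            acc += Xf - Xc
    return acc / n_samples

levels = [(0, 20000), (1, 5000), (2, 1200), (3, 300)]
estimate = sum(level_estimator(l, n) for l, n in levels)  # telescoping sum
print(f"MLMC estimate {estimate:.4f}  (exact {X0 * np.exp(mu * T):.4f})")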

  5. Semi-stochastic full configuration interaction quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Holmes, Adam; Petruzielo, Frank; Khadilkar, Mihir; Changlani, Hitesh; Nightingale, M. P.; Umrigar, C. J.

    2012-02-01

    In the recently proposed full configuration interaction quantum Monte Carlo (FCIQMC) [1,2], the ground state is projected out stochastically, using a population of walkers each of which represents a basis state in the Hilbert space spanned by Slater determinants. The infamous fermion sign problem manifests itself in the fact that walkers of either sign can be spawned on a given determinant. We propose an improvement on this method in the form of a hybrid stochastic/deterministic technique, which we expect will improve the efficiency of the algorithm by ameliorating the sign problem. We test the method on atoms and molecules, e.g., carbon, the carbon dimer, the N2 molecule, and stretched N2. [1] Fermion Monte Carlo without fixed nodes: a game of life, death and annihilation in Slater determinant space. George Booth, Alex Thom, Ali Alavi. J. Chem. Phys. 131, 054106 (2009). [2] Survival of the fittest: Accelerating convergence in full configuration-interaction quantum Monte Carlo. Deidre Cleland, George Booth, and Ali Alavi. J. Chem. Phys. 132, 041103 (2010).

  6. Monte Carlo solution for uncertainty propagation in particle transport with a stochastic Galerkin method

    SciTech Connect

    Franke, B. C.; Prinja, A. K.

    2013-07-01

    The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)

  7. Stochastic Kinetic Monte Carlo algorithms for long-range Hamiltonians

    SciTech Connect

    Mason, D R; Rudd, R E; Sutton, A P

    2003-10-13

    We present a higher order kinetic Monte Carlo methodology suitable to model the evolution of systems in which the transition rates are non-trivial to calculate or in which Monte Carlo moves are likely to be non-productive flicker events. The second order residence time algorithm first introduced by Athenes et al. [1] is rederived from the n-fold way algorithm of Bortz et al. [2] as a fully stochastic algorithm. The second order algorithm can be dynamically called when necessary to eliminate unproductive flickering between a metastable state and its neighbors. An algorithm combining elements of the first order and second order methods is shown to be more efficient, in terms of the number of rate calculations, than the first order or second order methods alone while remaining statistically identical. This efficiency is of prime importance when dealing with computationally expensive rate functions such as those arising from long-range Hamiltonians. Our algorithm has been developed for use when considering simulations of vacancy diffusion under the influence of elastic stress fields. We demonstrate the improved efficiency of the method over that of the n-fold way in simulations of vacancy diffusion in alloys. Our algorithm is seen to be an order of magnitude more efficient than the n-fold way in these simulations. We show that when magnesium is added to an Al-2at.%Cu alloy, this has the effect of trapping vacancies. When trapping occurs, we see that our algorithm performs thousands of events for each rate calculation performed.
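
    The first-order building block that the paper extends is the rejection-free residence-time step of Bortz et al. [2]: choose an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. A minimal sketch with a made-up rate list (mimicking a fast "flicker" pair) follows; the second-order variant of Athenes et al. [1] is not reproduced.

```python
# Sketch of the rejection-free (n-fold way / residence-time) kinetic Monte
# Carlo step. Assumptions: a toy list of constant rates standing in for,
# e.g., vacancy-hop rates from a long-range Hamiltonian.
import numpy as np

rng = np.random.default_rng(3)

def kmc_step(rates):
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)  # rate-weighted choice
    dt = rng.exponential(1.0 / total)                # residence time
    return event, dt

# Toy system: 4 competing transitions; the first pair is a fast "flicker".
rates = np.array([50.0, 48.0, 1e-2, 5e-3])
t, counts = 0.0, np.zeros(4, dtype=int)
for _ in range(10_000):
    e, dt = kmc_step(rates)
    counts[e] += 1
    t += dt
print("time", t, "event counts", counts)   # flicker events dominate the count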

  8. Stochastic resonance phenomenon in Monte Carlo simulations of silver adsorbed on gold

    NASA Astrophysics Data System (ADS)

    Gimenez, María Cecilia

    2016-03-01

    The possibility of observing the stochastic resonance phenomenon was analyzed by means of Monte Carlo simulations of silver adsorbed on (100) gold surfaces. The coverage degree was studied as a function of the periodic variation of the chemical potential. The signal-to-noise ratio was studied as a function of the amplitude and frequency of the chemical potential and of the temperature. When the signal-to-noise ratio is plotted as a function of temperature, a maximum is found, indicating the possible presence of stochastic resonance.
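
    Stochastic resonance is usually demonstrated on a periodically forced double-well Langevin system, where the response at the drive frequency peaks at an intermediate noise strength. The sketch below illustrates that generic effect; it is a stand-in toy, not the lattice-gas adsorption model of the paper, and all parameter values are assumptions.

```python
# Sketch of stochastic resonance in an overdamped double well
# x' = x - x^3 + A cos(w t) + sqrt(2 D) xi(t): the response at the drive
# frequency should peak at an intermediate noise strength D.
import numpy as np

rng = np.random.default_rng(4)
A, w, dt, nsteps = 0.1, 0.05, 0.02, 200_000
tgrid = np.arange(nsteps) * dt

for D in (0.05, 0.15, 0.4, 1.0):
    x = np.empty(nsteps)
    x[0] = -1.0
    noise = rng.normal(0.0, np.sqrt(2 * D * dt), nsteps)
    for i in range(1, nsteps):
        drift = x[i - 1] - x[i - 1] ** 3 + A * np.cos(w * tgrid[i - 1])
        x[i] = x[i - 1] + drift * dt + noise[i]          # Euler-Maruyama
    # amplitude of the response at the drive frequency (Fourier component)
    resp = np.abs(np.sum(x * np.exp(-1j * w * tgrid)) * dt) / (nsteps * dt)
    print(f"D = {D:.2f}  response amplitude = {resp:.4f}")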

  9. Monte Carlo Hybrid Applied to Binary Stochastic Mixtures

    Energy Science and Technology Software Center (ESTSC)

    2008-08-11

    The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.
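
    The weight-window mechanism this package implements can be sketched in a few lines: windows derived from an approximate deterministic flux, splitting above the window and Russian roulette below it. The toy below uses a 1D slab, a made-up exponential flux and survival-biased streaming; it is an illustration of the technique, not the packaged code itself.

```python
# Editorial sketch of flux-derived weight windows for a global problem:
# window centers follow an approximate deterministic flux, so particles are
# split as they penetrate and the population stays spread over the domain.
# Assumptions: toy 1D slab, survival-biased streaming, made-up numbers.
import numpy as np

rng = np.random.default_rng(5)
ncell = 10
flux = np.exp(-0.5 * np.arange(ncell))    # deterministic flux estimate
w_center = flux / flux[0]                 # window centers ~ flux
w_lo, w_hi = 0.5 * w_center, 2.0 * w_center

def apply_window(cell, weight, bank):
    if weight > w_hi[cell]:               # split heavy particles
        n = int(weight / w_center[cell])
        for _ in range(n - 1):            # daughters resume from the next cell
            bank.append((cell + 1, weight / n))
        return weight / n
    if weight < w_lo[cell]:               # Russian roulette on light ones
        return w_center[cell] if rng.random() < weight / w_center[cell] else 0.0
    return weight

tally = np.zeros(ncell)
bank = [(0, 1.0) for _ in range(200)]     # source particles
while bank:
    cell, w = bank.pop()
    while 0 <= cell < ncell and w > 0.0:
        tally[cell] += w
        w *= 0.8                          # survival biasing (implicit capture)
        w = apply_window(cell, w, bank)
        cell += 1                         # stream to the next cell
print(np.round(tally / 200.0, 3))         # should track 0.8**cell smoothly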

  10. Semi-stochastic full configuration interaction quantum Monte Carlo: Developments and application

    SciTech Connect

    Blunt, N. S.; Kersten, J. A. F.; Smart, Simon D.; Spencer, J. S.; Booth, George H.; Alavi, Ali

    2015-05-14

    We expand upon the recent semi-stochastic adaptation to full configuration interaction quantum Monte Carlo (FCIQMC). We present an alternate method for generating the deterministic space without a priori knowledge of the wave function and present stochastic efficiencies for a variety of both molecular and lattice systems. The algorithmic details of an efficient semi-stochastic implementation are presented, with particular consideration given to the effect that the adaptation has on parallel performance in FCIQMC. We further demonstrate the benefit for calculation of reduced density matrices in FCIQMC through replica sampling, where the semi-stochastic adaptation seems to have even larger efficiency gains. We then combine these ideas to produce explicitly correlated corrected FCIQMC energies for the beryllium dimer, for which stochastic errors on the order of wavenumber accuracy are achievable.

  11. Semi-stochastic full configuration interaction quantum Monte Carlo: Developments and application.

    PubMed

    Blunt, N S; Smart, Simon D; Kersten, J A F; Spencer, J S; Booth, George H; Alavi, Ali

    2015-05-14

    We expand upon the recent semi-stochastic adaptation to full configuration interaction quantum Monte Carlo (FCIQMC). We present an alternate method for generating the deterministic space without a priori knowledge of the wave function and present stochastic efficiencies for a variety of both molecular and lattice systems. The algorithmic details of an efficient semi-stochastic implementation are presented, with particular consideration given to the effect that the adaptation has on parallel performance in FCIQMC. We further demonstrate the benefit for calculation of reduced density matrices in FCIQMC through replica sampling, where the semi-stochastic adaptation seems to have even larger efficiency gains. We then combine these ideas to produce explicitly correlated corrected FCIQMC energies for the beryllium dimer, for which stochastic errors on the order of wavenumber accuracy are achievable. PMID:25978883

  12. Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2013-04-01

    The stochastic volatility (SV) model is one of the volatility models that infer the latent volatility of asset returns. Bayesian inference of the SV model is performed by the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling volatility variables. We perform HMC simulations of the SV model for the returns of two liquid stocks traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. We then calculate the accuracy of the volatility measurement using the realized volatility as a proxy for the true volatility and compare the SV model with the GARCH model, another commonly used volatility model. Using the accuracy calculated with the realized volatility, we find that empirically the SV model performs better than the GARCH model.
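
    The HMC transition used here for the SV model's latent volatilities combines a leapfrog trajectory in an auxiliary momentum variable with a Metropolis accept/reject step. A minimal generic sketch on a 2D Gaussian target follows; the step size, trajectory length and target are assumptions, not the paper's SV posterior.

```python
# Sketch of one hybrid/Hamiltonian Monte Carlo (HMC) transition on a simple
# 2D Gaussian target. Assumptions: unit mass matrix, hand-picked step size
# and trajectory length; the SV-model posterior is not reproduced.
import numpy as np

rng = np.random.default_rng(6)
Cinv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))  # target precision

def U(q):       return 0.5 * q @ Cinv @ q     # potential = -log target
def grad_U(q):  return Cinv @ q

def hmc_step(q, eps=0.15, L=20):
    p = rng.normal(size=q.shape)              # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)        # leapfrog integration
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    dH = (U(q_new) - U(q)) + 0.5 * (p_new @ p_new - p @ p)
    if np.log(rng.random()) < -dH:            # Metropolis accept/reject
        return q_new, True
    return q, False

q, acc, samples = np.zeros(2), 0, []
for _ in range(5000):
    q, accepted = hmc_step(q)
    acc += accepted
    samples.append(q)
print("acceptance", acc / 5000)
print("sample covariance\n", np.cov(np.array(samples).T))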

  13. A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

    SciTech Connect

    Keady, K P; Brantley, P

    2010-03-04

    Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model).

  14. Stochastic Monte-Carlo Markov Chain Inversions on Models Regionalized Using Receiver Functions

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Maceira, M.; Kato, Y.; Bodin, T.; Calo, M.; Romanowicz, B. A.; Chai, C.; Ammon, C. J.

    2014-12-01

    There is currently a strong interest in stochastic approaches to seismic modeling - versus deterministic methods such as gradient methods - due to the ability of these methods to better deal with highly non-linear problems. Another advantage of stochastic methods is that they allow the estimation of the a posteriori probability distribution of the derived parameters, in the spirit of the Bayesian inversion envisioned by Tarantola, allowing the quantification of the solution error. The price to pay for stochastic methods is that they require testing thousands of variations of each unknown parameter and their associated weights to ensure reliable probabilistic inferences. Even with the best High-Performance Computing resources available, 3D stochastic full waveform modeling at the regional scale still remains out of reach. We are exploring regionalization as one way to reduce the dimension of the parameter space, allowing the identification of areas in the models that can be treated as one block in a subsequent stochastic inversion. Regionalization is classically performed through the identification of tectonic or structural elements. Lekic & Romanowicz (2011) proposed a new approach based instead on a cluster analysis of tomographic velocity models. Here we present the results of a clustering analysis on the P-wave receiver functions used in the subsequent inversion. Different clustering algorithms and measures of clustering quality are tested on different datasets for North America and China. Preliminary results with the k-means clustering algorithm show that an interpolated receiver-function wavefield (Chai et al., GRL, in review) improves the agreement with the geological and tectonic regions of North America compared to the traditional approach of stacked receiver functions. After regionalization, a 1D profile for each region is stochastically inferred using a parallelized code based on Monte-Carlo Markov Chains (MCMC), modeling surface-wave dispersion and receiver functions.

  15. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters have been selected by implementing the F-test evaluation and design of experiments, and then an incomplete fourth-order polynomial response surface model (RSM) has been developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational effort and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of the particle-swarm and Nelder-Mead simplex optimization methods, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
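
    The workflow described above (fit a cheap response surface, draw Monte Carlo samples of the responses, and solve a deterministic inverse problem per sample) can be illustrated on a one-DOF spring-mass stand-in. Everything in the sketch is an assumption for illustration: the toy model, the 2% scatter, and plain Nelder-Mead in place of the paper's particle-swarm/Nelder-Mead hybrid.

```python
# Sketch of RSM + Monte Carlo model updating on a one-DOF spring-mass toy:
# f = sqrt(k/m)/(2 pi) stands in for an expensive FE model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
m = 2.0                                       # known mass [kg]

def fe_model(k):                              # stand-in for the FE solver
    return np.sqrt(k / m) / (2 * np.pi)       # natural frequency [Hz]

# 1) fit a quadratic response surface f(k) from a few "expensive" runs
k_doe = np.linspace(500.0, 1500.0, 7)
rsm = np.poly1d(np.polyfit(k_doe, fe_model(k_doe), 2))

# 2) Monte Carlo samples of the measured response (mean 3.6 Hz, 2% scatter)
f_samples = rng.normal(3.6, 0.02 * 3.6, 500)

# 3) one deterministic inverse problem per sample, on the cheap surrogate
def invert(f_meas):
    obj = lambda k: (rsm(k[0]) - f_meas) ** 2
    return minimize(obj, x0=[1000.0], method="Nelder-Mead").x[0]

k_est = np.array([invert(f) for f in f_samples])
print(f"stiffness mean {k_est.mean():.1f} N/m, std {k_est.std():.1f} N/m")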

  16. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation

    PubMed Central

    Müller, Eike H.; Scheichl, Rob; Shardlow, Tony

    2015-01-01

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy.

  17. Fuel temperature reactivity coefficient calculation by Monte Carlo perturbation techniques

    SciTech Connect

    Shim, H. J.; Kim, C. H.

    2013-07-01

    We present an efficient method to estimate the fuel temperature reactivity coefficient (FTC) by the Monte Carlo adjoint-weighted correlated sampling method. In this method, a fuel temperature change is regarded as variations of the microscopic cross sections and the temperature in the free gas model which is adopted to correct the asymptotic double differential scattering kernel. The effectiveness of the new method is examined through the continuous energy MC neutronics calculations for PWR pin cell problems. The isotope-wise and reaction-type-wise contributions to the FTCs are investigated for two free gas models - the constant scattering cross section model and the exact model. It is shown that the proposed method can efficiently predict the reactivity change due to the fuel temperature variation. (authors)

  18. Stochastic sensitivity analysis of the biosphere model for Canadian nuclear fuel waste management

    SciTech Connect

    Reid, J.A.K.; Corbett, B.J. (Whiteshell Labs.)

    1993-01-01

    The biosphere model, BIOTRAC, was constructed to assess Canada's concept for nuclear fuel waste disposal in a vault deep in crystalline rock at some as yet undetermined location in the Canadian Shield. The model is therefore very general and based on the Shield as a whole. BIOTRAC is made up of four linked submodels for surface water, soil, atmosphere, and food chain and dose. The model simulates physical conditions and radionuclide flows from the discharge of a hypothetical nuclear fuel waste disposal vault through groundwater, a well, a lake, air, soil, and plants to a critical group of individuals, i.e., those who are most exposed and therefore receive the highest dose. This critical group is totally self-sufficient and is represented by the International Commission on Radiological Protection reference man for dose prediction. BIOTRAC is a dynamic model that assumes steady-state physical conditions for each simulation, and deals with variation and uncertainty through Monte Carlo simulation techniques. This paper describes SENSYV, a technique for analyzing pathway and parameter sensitivities for the BIOTRAC code run in stochastic mode. Results are presented for 129I from the disposal of used fuel, and they confirm the importance of doses via the soil/plant/man and the air/plant/man ingestion pathways. The results also indicate that the lake/well water use switch, the aquatic iodine mass loading parameter, the iodine soil evasion rate, and the iodine plant/soil concentration ratio are important parameters.

  19. A stochastic model updating method for parameter variability quantification based on response surface models and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Ren, Wei-Xin; Perera, Ricardo

    2012-11-01

    Stochastic model updating must be considered for quantifying uncertainties inherently existing in real-world engineering structures. By this means the statistical properties of structural parameters, rather than deterministic values, can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in the aspects of theoretical complexity and low computational efficiency. This study attempts to propose a simple and cost-efficient method by decomposing a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting the deterministic values of the parameters. Then the parameter means and variances can be statistically estimated from all the parameter predictions obtained by running all the samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of parameter variability. The proposed method has been demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method presents similar accuracy while its primary merits are its simple implementation and cost efficiency in response computation and inverse optimization.

  20. Chaotic versus nonchaotic stochastic dynamics in Monte Carlo simulations: a route for accurate energy differences in N-body systems.

    PubMed

    Assaraf, Roland; Caffarel, Michel; Kollias, A C

    2011-04-15

    We present a method to efficiently evaluate small energy differences of two close N-body systems by employing stochastic processes having a stability versus chaos property. By using the same random noise, energy differences are computed from close trajectories without reweighting procedures. The approach is presented for quantum systems but can be applied to classical N-body systems as well. It is exemplified with diffusion Monte Carlo simulations for long chains of hydrogen atoms and molecules for which it is shown that the long-standing problem of computing energy derivatives is solved. PMID:21568537
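
    The key trick of this paper, driving two close systems with the same random noise so that fluctuations cancel in their energy difference, is the method of common random numbers. The toy sketch below shows the variance reduction on a contrived 1D "energy"; it is an editorial illustration, not the authors' diffusion Monte Carlo implementation.

```python
# Sketch of common random numbers (correlated sampling) for a small
# difference E[f(a1)] - E[f(a2)]. Assumptions: a toy 1D "local energy"
# standing in for two close N-body systems.
import numpy as np

rng = np.random.default_rng(8)

def local_energy(a, noise):
    x = noise / a                     # same noise -> nearly identical configs
    return 0.5 * a * x**2             # toy sampled energy, mean 0.5 / a

a1, a2, n = 1.00, 1.01, 100_000
noise = rng.normal(size=n)

d_crn = local_energy(a1, noise) - local_energy(a2, noise)               # same noise
d_ind = local_energy(a1, noise) - local_energy(a2, rng.normal(size=n))  # independent

for name, d in (("common noise", d_crn), ("independent ", d_ind)):
    print(f"{name} diff = {d.mean():+.5f} +/- {d.std(ddof=1) / np.sqrt(n):.5f}")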

  1. Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations

    NASA Astrophysics Data System (ADS)

    Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.

    2014-06-01

    Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.

  2. Monte Carlo method based radiative transfer simulation of stochastic open forest generated by circle packing application

    NASA Astrophysics Data System (ADS)

    Jin, Shengye; Tamura, Masayuki

    2013-10-01

    Monte Carlo Ray Tracing (MCRT) is a versatile method for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Due to its robustness to alterations of the 3-D scene, the MCRT method is also employed for simulating the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is setting up the canopy scene. 3-D scanning can represent the canopy structure as accurately as possible, but it is time consuming. A botanical growth function can be used to model the growth of a single tree, but it cannot express the interactions among trees. The L-system is also a function-controlled tree growth simulation model, but it requires large amounts of computing memory. Additionally, it only models the current tree patterns rather than tree growth while the radiative transfer regime is simulated. Therefore, it is more practical to use regular solids such as ellipsoids, cones and cylinders to represent single canopies. Considering the allelopathy phenomenon apparent in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the number of trees (N) of the 3-D scene are declared first, similar to a random open forest image. Accordingly, we randomly generate each canopy radius (rc). We then set the circle center coordinates on the XY-plane while keeping circles separate from each other by the circle packing algorithm. To model the individual trees, we employ Ishikawa's tree growth regression model to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is

  3. Comparing three stochastic search algorithms for computational protein design: Monte Carlo, replica exchange Monte Carlo, and a multistart, steepest-descent heuristic.

    PubMed

    Mignon, David; Simonson, Thomas

    2016-07-15

    Computational protein design depends on an energy function and an algorithm to search the sequence/conformation space. We compare three stochastic search algorithms: a heuristic, Monte Carlo (MC), and a Replica Exchange Monte Carlo method (REMC). The heuristic performs a steepest-descent minimization starting from thousands of random starting points. The methods are applied to nine test proteins from three structural families, with a fixed backbone structure, a molecular mechanics energy function, and with 1, 5, 10, 20, 30, or all amino acids allowed to mutate. Results are compared to an exact, "Cost Function Network" method that identifies the global minimum energy conformation (GMEC) in favorable cases. The designed sequences accurately reproduce experimental sequences in the hydrophobic core. The heuristic and REMC agree closely and reproduce the GMEC when it is known, with a few exceptions. Plain MC performs well for most cases, occasionally departing from the GMEC by 3-4 kcal/mol. With REMC, the diversity of the sequences sampled agrees with exact enumeration where the latter is possible: up to 2 kcal/mol above the GMEC. Beyond, room temperature replicas sample sequences up to 10 kcal/mol above the GMEC, providing thermal averages and a solution to the inverse protein folding problem. © 2016 Wiley Periodicals, Inc. PMID:27197555

  4. Nano-structural analysis of effective transport paths in fuel-cell catalyst layers by using stochastic material network methods

    NASA Astrophysics Data System (ADS)

    Shin, Seungho; Kim, Ah-Reum; Um, Sukkee

    2016-02-01

    A two-dimensional material network model has been developed to visualize the nano-structures of fuel-cell catalysts and to search for effective transport paths for the optimal performance of fuel cells in randomly-disordered composite catalysts. Stochastic random modeling based on the Monte Carlo method is developed using random number generation processes over a catalyst layer domain at a 95% confidence level. After the post-determination process of the effective connectivity, particularly for mass transport, the effective catalyst utilization factors are introduced to determine the extent of catalyst utilization in the fuel cells. The results show that the superficial pore volume fractions of 600 trials approximate a normal distribution curve with a mean of 0.5. In contrast, the estimated volume fraction of effectively inter-connected void clusters ranges from 0.097 to 0.420, which is much smaller than the superficial porosity of 0.5 before the percolation process. Furthermore, the effective catalyst utilization factor is determined to be linearly proportional to the effective porosity. More importantly, this study reveals that the average catalyst utilization is less affected by the variations of the catalyst's particle size and the absolute catalyst loading at a fixed volume fraction of void spaces.

  5. Comparison of Ensemble Kalman Filter groundwater-data assimilation methods based on stochastic moment equations and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2014-04-01

    Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
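
    For reference, the MC-based EnKF analysis step that the moment-equation approach replaces looks as follows. The sketch uses toy linear dynamics and a single observed component in place of the groundwater flow model; the ensemble size, noise levels and matrices are all assumptions.

```python
# Sketch of the ensemble Kalman filter (EnKF) forecast/analysis cycle with
# perturbed observations. Assumptions: toy linear "flow model", one observed
# state component; not the paper's groundwater model.
import numpy as np

rng = np.random.default_rng(9)
n_ens = 50
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 0.9]])              # toy dynamics
H = np.array([[1.0, 0.0, 0.0]])              # observe the first component
R = np.array([[0.01]])                       # observation error variance

ens = rng.normal(1.0, 0.5, (n_ens, 3))       # initial ensemble
truth = np.array([1.2, 0.8, 1.0])

for step in range(10):
    truth = A @ truth
    ens = ens @ A.T + rng.normal(0, 0.05, ens.shape)   # forecast + model noise
    y = H @ truth + rng.normal(0, np.sqrt(R[0, 0]))    # noisy observation
    X = ens - ens.mean(axis=0)                         # ensemble anomalies
    P = X.T @ X / (n_ens - 1)                          # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    perturbed = y + rng.normal(0, np.sqrt(R[0, 0]), (n_ens, 1))
    ens = ens + (perturbed - ens @ H.T) @ K.T          # analysis update

print("truth   ", np.round(truth, 3))
print("analysis", np.round(ens.mean(axis=0), 3))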

  6. Stochastic Inversion of Electrical Resistivity Changes Using a Markov Chain, Monte Carlo Approach

    SciTech Connect

    Ramirez, A; Nitao, J; Hanley, W; Aines, R; Glaser, R; Sengupta, S; Dyer, K; Hickling, T; Daily, W

    2004-09-21

    We describe a stochastic inversion method for mapping subsurface regions where the electrical resistivity is changing. The technique combines prior information, electrical resistance data and forward models to produce subsurface resistivity models that are most consistent with all available data. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. Attractive features include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate and, (2) allow alternative model estimates to be identified, compared and ranked. Methods that monitor convergence and summarize important trends of the posterior distribution are introduced. Results from a physical model test and a field experiment were used to assess performance. The stochastic inversions presented provide useful estimates of the most probable location, shape, and volume of the changing region, and the most likely resistivity change. The proposed method is computationally expensive, requiring the use of extensive computational resources to make its application practical.
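
    At the core of this kind of stochastic inversion is a Metropolis sampler: perturb the model, run the forward simulation, and accept or reject so that the chain samples the posterior. The sketch below uses a two-parameter exponential toy in place of the electrical-resistivity forward model; the prior bounds, noise level and proposal width are assumptions.

```python
# Sketch of Metropolis MCMC for a toy inverse problem. Assumptions: forward
# model d(x) = m0 * exp(-m1 * x), Gaussian noise, uniform prior bounds.
import numpy as np

rng = np.random.default_rng(10)
xgrid = np.linspace(0, 1, 20)

def forward(m):                       # stand-in for the resistivity simulator
    return m[0] * np.exp(-m[1] * xgrid)

m_true = np.array([2.0, 3.0])
data = forward(m_true) + rng.normal(0, 0.05, xgrid.size)

def log_post(m):
    if np.any(m <= 0) or np.any(m > 10):          # uniform prior bounds
        return -np.inf
    r = data - forward(m)
    return -0.5 * np.sum(r**2) / 0.05**2          # Gaussian likelihood

m, lp = np.array([1.0, 1.0]), log_post(np.array([1.0, 1.0]))
chain = []
for _ in range(20_000):
    prop = m + rng.normal(0, 0.05, 2)             # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis acceptance
        m, lp = prop, lp_prop
    chain.append(m)
chain = np.array(chain[5000:])                    # discard burn-in
print("posterior mean", chain.mean(axis=0), "std", chain.std(axis=0))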

  7. Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations

    SciTech Connect

    Van Siclen, Clinton D

    2007-02-01

    A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.

  8. Stochastic theory of interfacial enzyme kinetics: A kinetic Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Das, Biswajit; Gangopadhyay, Gautam

    2012-01-01

    In the spirit of Gillespie's stochastic approach we have formulated a theory to explore the advancement of interfacial enzyme kinetics at the single-enzyme level, which is ultimately utilized to obtain the ensemble-averaged macroscopic feature, lag-burst kinetics. We provide a theory of the transition from the lag-phase to the burst-phase kinetics by considering the gradual development of electrostatic interaction between the positively charged enzyme and the negatively charged product molecules deposited on the phospholipid surface. It is shown that the different diffusion time scales of the enzyme over the fluid and product regions are responsible for the memory effect in the correlation of successive turnover events of the hopping mode in the single-trajectory analysis, which in turn is reflected in the non-Gaussian distribution of turnover times of the macroscopic kinetics in the lag phase, unlike the burst-phase kinetics.

  9. Stochastic modeling of polarized light scattering using a Monte Carlo based stencil method.

    PubMed

    Sormaz, Milos; Stamm, Tobias; Jenny, Patrick

    2010-05-01

    This paper deals with an efficient and accurate simulation algorithm to solve the vector Boltzmann equation for polarized light transport in scattering media. The approach is based on a stencil method, which was previously developed for unpolarized light scattering and proved to be much more efficient (speedup factors of up to 10 were reported) than the classical Monte Carlo while being equally accurate. To validate what we believe to be a new stencil method, a substrate composed of spherical non-absorbing particles embedded in a non-absorbing medium was considered. The corresponding single scattering Mueller matrix, which is required to model scattering of polarized light, was determined based on the Lorenz-Mie theory. From simulations of a reflected polarized laser beam, the Mueller matrix of the substrate was computed and compared with an established reference. The agreement is excellent, and it could be demonstrated that a significant speedup of the simulations is achieved due to the stencil approach compared with the classical Monte Carlo. PMID:20448777

  10. A Monte Carlo based spent fuel analysis safeguards strategy assessment

    SciTech Connect

    Fensin, Michael L; Tobin, Stephen J; Swinhoe, Martyn T; Menlove, Howard O; Sandoval, Nathan P

    2009-01-01

    Safeguarding nuclear material involves the detection of diversions of significant quantities of nuclear materials, and the deterrence of such diversions by the risk of early detection. There are a variety of motivations for quantifying plutonium in spent fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the International Atomic Energy Agency's ability to safeguard nuclear facilities, shipper/receiver difference, input accountability at reprocessing facilities, and burnup credit at repositories. Many NDA techniques exist for measuring signatures from spent fuel; however, no single NDA technique can, in isolation, quantify elemental plutonium and other actinides of interest in spent fuel. A study has been undertaken to determine the best integrated combination of cost-effective techniques for quantifying plutonium mass in spent fuel for nuclear safeguards. A standardized assessment process was developed to compare the effective merits and faults of 12 different detection techniques in order to integrate a few techniques and to down-select among the techniques in preparation for experiments. The process involves generating a basis burnup/enrichment/cooling-time-dependent spent fuel assembly library, creating diversion scenarios, developing detector models and quantifying the capability of each NDA technique. Because hundreds of input and output files must be managed in coupling data between the different facets of the assessment process, a graphical user interface (GUI) was developed that automates the process. This GUI allows users to visually create diversion scenarios with varied replacement materials, and to generate an MCNPX fixed-source detector assessment input file. The end result of the assembly library assessment is to select a set of common source terms and diversion scenarios for quantifying the capability of each of the 12 NDA techniques. We present here the generalized

  11. a Monte Carlo Assisted Simulation of Stochastic Molecular Dynamics for Folding of the Protein Crambin in a Viscous Environment

    NASA Astrophysics Data System (ADS)

    Taneri, Sencer

    We investigate the folding dynamics of the plant-seed protein Crambin in a liquid environment, which is usually water with a certain viscosity. Taking the viscosity into account necessitates a stochastic approach. This can be summarized by a 2D Langevin equation, even though the simulation is still carried out in 3D. Solution of the Langevin equation is the basic task in order to proceed with a molecular dynamics simulation, which is accompanied by a delicate Monte Carlo technique. The potential wells, used to engineer the energy space assuming the interaction of the monomers constituting the protein chain, are modeled by a combination of two parabolas. This combination approximates the real physical interactions, which are given by the well-known Lennard-Jones potential. Contributions to the total potential from torsion, bending and distance-dependent potentials are good to the fourth-nearest neighbor. The final image is in very good geometric agreement with the real shape of the protein chain, which can be obtained from the protein data bank. The quantitative measure of this agreement is the similarity parameter with the native structure, which is found to be 0.91 < 1 for the best sample. The folding time can be determined from the Debye relaxation process. We apply two regimes and calculate the folding time corresponding to the elastic domain mode, which yields 5.2 ps for the same sample.

  12. Monte Carlo Simulation of the TRIGA Mark II Benchmark Experiment with Burned Fuel

    SciTech Connect

    Jeraj, Robert; Zagar, Tomaz; Ravnik, Matjaz

    2002-03-15

    Monte Carlo calculations of a criticality experiment with burned fuel on the TRIGA Mark II research reactor are presented. The main objective was to incorporate the burned fuel composition calculated with the WIMSD4 deterministic code into the MCNP4B Monte Carlo code and compare the calculated keff with the measurements. The criticality experiment was performed in 1998 at the "Jozef Stefan" Institute TRIGA Mark II reactor in Ljubljana, Slovenia, with the same fuel elements and loading pattern as in the TRIGA criticality benchmark experiment with fresh fuel performed in 1991. The only difference was that in 1998, the fuel elements had an average burnup of ~3%, corresponding to 1.3 MWd of energy produced in the core in the period between 1991 and 1998. The fuel element burnup accumulated during 1991-1998 was calculated with the TRIGLAV in-house-developed fuel management two-dimensional multigroup diffusion code. The burned fuel isotopic composition was calculated with the WIMSD4 code and compared to the ORIGEN2 calculations. Extensive comparison of burned fuel material composition was performed for both codes for burnups up to 20% burned 235U, and the differences were evaluated in terms of reactivity. The WIMSD4 and ORIGEN2 results agreed well for all isotopes important in reactivity calculations, giving increased confidence in the WIMSD4 calculation of the burned fuel material composition. The keff calculated with the combined WIMSD4 and MCNP4B calculations showed good agreement with the experimental values. This shows that linking of WIMSD4 with MCNP4B for criticality calculations with burned fuel is feasible and gives reliable results.

  13. A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling

    NASA Astrophysics Data System (ADS)

    Aslam, Kamran

    This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and on the resulting rankings is included, along with a discussion of ranking methods currently used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single-elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the varying importance of points in a match, and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability, along with a realistic, fair and mathematically sound platform for ranking them.
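
    The basic object of the dissertation, a match outcome built up from a single point-win probability p, is easy to reproduce for one game. The sketch below simulates iid points and checks the result against the closed-form game-winning probability from the Newton-Keller theory cited above; the values of p and the sample size are arbitrary assumptions.

```python
# Sketch of a Monte Carlo tennis game from a point-win probability p (iid
# points), checked against the standard closed-form game-winning probability.
import numpy as np

rng = np.random.default_rng(11)

def play_game(p):
    a = b = 0
    while True:
        if rng.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:
            return 1                 # server wins the game
        if b >= 4 and b - a >= 2:
            return 0

p, n = 0.55, 200_000
mc = sum(play_game(p) for _ in range(n)) / n
q = 1 - p
# win at love, 15, 30, or reach deuce and win from there with prob p^2/(1-2pq)
exact = p**4 * (1 + 4 * q + 10 * q**2) + 20 * p**3 * q**3 * p**2 / (1 - 2 * p * q)
print(f"Monte Carlo {mc:.4f} vs closed form {exact:.4f}")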

  14. Solution of deterministic-stochastic epidemic models by dynamical Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Aièllo, O. E.; Haas, V. J.; daSilva, M. A. A.; Caliri, A.

    2000-07-01

    This work is concerned with the dynamical Monte Carlo (MC) method and its application to models originally formulated in a continuous-deterministic approach. Specifically, a susceptible-infected-removed-susceptible (SIRS) model is used in order to analyze aspects of the dynamical MC algorithm and demonstrate its application in epidemic contexts. We first examine two known approaches to the dynamical interpretation of the MC method and follow with the application of one of them to the SIRS model. The working method chosen is based on the Poisson process, where a hierarchy of events, properly calculated waiting times between events, and independence of the simulated events are the basic requirements. To verify the consistency of the method, some preliminary MC results are compared against exact steady-state solutions and other general numerical results (provided by the Runge-Kutta method): good agreement is found. Finally, a space-dependent extension of the SIRS model is introduced and treated by MC. The results are interpreted under and in accordance with aspects of the herd-immunity concept.
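
    A Poisson-process dynamical MC of the kind described here, applied to the SIRS model, is essentially the Gillespie algorithm: draw the waiting time from the total event rate, then pick which reaction fires. A minimal sketch with assumed rate constants follows.

```python
# Sketch of a Gillespie-type dynamical Monte Carlo simulation of the SIRS
# model. Assumptions: well-mixed population, mass-action rates beta (contact),
# gamma (recovery) and xi (loss of immunity).
import numpy as np

rng = np.random.default_rng(12)
N, beta, gamma, xi = 1000, 0.3, 0.1, 0.05
S, I, R, t = N - 10, 10, 0, 0.0

while t < 200 and I > 0:
    rates = np.array([beta * S * I / N,       # infection     S -> I
                      gamma * I,              # recovery      I -> R
                      xi * R])                # immunity loss R -> S
    total = rates.sum()
    t += rng.exponential(1.0 / total)         # Poisson waiting time
    event = rng.choice(3, p=rates / total)    # which reaction fires
    if event == 0:
        S, I = S - 1, I + 1
    elif event == 1:
        I, R = I - 1, R + 1
    else:
        R, S = R - 1, S + 1

print(f"t = {t:.1f}: S = {S}, I = {I}, R = {R}")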

  15. Stochastic geometrical model and Monte Carlo optimization methods for building reconstruction from InSAR data

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan

    2015-10-01

    Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas with high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most methods proposed before, we adopt a generative model and utilize the reconstruction process by maximizing a posteriori estimation (MAP) through Monte Carlo methods. The reason for this strategy refers to the fact that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed to feature regions such as layover, corner line, and shadow. Then, the statistical properties of intensity, interferometric phase and coherence of each region are explored respectively, and are included as region terms. Roofs are not directly considered as they are mixed with wall into layover area in most cases. When estimating the similarity between the building hypothesis and the real data, the prior, the region term, together with the edge term related to the contours of layover and corner line, are taken into consideration. In the optimization step, in order to achieve convergent reconstruction outputs and get rid of local extrema, special transition kernels are designed. The proposed framework is evaluated on the TanDEM-X dataset and performs well for buildings reconstruction.

  16. Developments in Stochastic Fuel Efficient Cruise Control and Constrained Control with Applications to Aircraft

    NASA Astrophysics Data System (ADS)

    McDonough, Kevin K.

    The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler implementation strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed, including the use of the Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first, and computational procedures for such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures. Examples of

  17. Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations

    SciTech Connect

    Tippayakul, C.; Ivanov, K.; Misu, S.

    2006-07-01

    This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between the Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, the online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version instead of using the burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents the result comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of kinf, fission rate distributions and isotopic contents. (authors)

  18. Properties of Solar Thermal Fuels by Accurate Quantum Monte Carlo Calculations

    NASA Astrophysics Data System (ADS)

    Saritas, Kayahan; Ataca, Can; Grossman, Jeffrey C.

    2014-03-01

    Efficient utilization of the sun as a renewable and clean energy source is one of the major goals of this century due to increasing energy demand and environmental impact. Solar thermal fuels are materials that capture and store the sun's energy in the form of chemical bonds, which can then be released as heat on demand and charged again. Previous work on solar thermal fuels faced challenges related to the cyclability of the fuel over time, as well as the need for higher energy densities. Recently, it was shown that by templating photoswitches onto carbon nanostructures, both high energy density as well as high stability can be achieved. In this work, we explore alternative molecules to azobenzene in such a nano-templated system. We employ the highly accurate quantum Monte Carlo (QMC) method to predict the energy storage potential for each molecule. Our calculations show that in many cases the level of accuracy provided by density functional theory (DFT) is sufficient. However, in some cases, such as dihydroazulene, the drastic change in conjugation upon light absorption causes the DFT predictions to be inconsistent and incorrect. For this case, we compare our QMC results for the geometric structure, band gap and reaction enthalpy with different DFT functionals.

  19. Monte carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles

    SciTech Connect

    Paul P.H. Wilson

    2005-07-30

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods
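
    The analog starting point described above, following each atom with its true event probabilities, can be illustrated on a two-step decay chain and checked against the Bateman solution. The sketch below is an editorial toy with assumed constant rates; it contains none of the report's flowing-stream or non-analog machinery.

```python
# Sketch of analog Monte Carlo isotopic inventory for a chain A -> B -> C:
# each atom is followed with true event probabilities, then compared with
# the Bateman solution. Assumptions: constant rates, no material flow.
import numpy as np

rng = np.random.default_rng(14)
lam_a, lam_b, T, n_atoms = 0.8, 0.3, 2.0, 50_000

counts = {"A": 0, "B": 0, "C": 0}
for _ in range(n_atoms):
    t = rng.exponential(1.0 / lam_a)          # time of the A -> B transition
    if t > T:
        counts["A"] += 1
        continue
    t += rng.exponential(1.0 / lam_b)         # time of the B -> C transition
    counts["B" if t > T else "C"] += 1

# Bateman solution for the expected fractions at time T
NA = np.exp(-lam_a * T)
NB = lam_a / (lam_b - lam_a) * (np.exp(-lam_a * T) - np.exp(-lam_b * T))
print("Monte Carlo:", {k: v / n_atoms for k, v in counts.items()})
print("Bateman:    ", round(NA, 4), round(NB, 4), round(1 - NA - NB, 4))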

  20. A Stochastic Method for Estimating the Effect of Isotopic Uncertainties in Spent Nuclear Fuel

    SciTech Connect

    DeHart, M.D.

    2001-08-24

    This report describes a novel approach developed at the Oak Ridge National Laboratory (ORNL) for the estimation of the uncertainty in the prediction of the neutron multiplication factor for spent nuclear fuel. This technique focuses on burnup credit, where credit is taken in criticality safety analysis for the reduced reactivity of fuel irradiated in and discharged from a reactor. Validation methods for burnup credit have attempted to separate the uncertainty associated with isotopic prediction methods from that of criticality eigenvalue calculations. Biases and uncertainties obtained in each step are combined additively. This approach, while conservative, can be excessive because of the physical assumptions employed. This report describes a statistical approach based on Monte Carlo sampling to directly estimate the total uncertainty in eigenvalue calculations resulting from uncertainties in isotopic predictions. The results can also be used to demonstrate the relative conservatism and statistical confidence associated with the method of additively combining uncertainties. This report does not make definitive conclusions on the magnitude of biases and uncertainties associated with isotopic predictions in a burnup credit analysis. These terms will vary depending on system design and the set of isotopic measurements used as a basis for estimating isotopic variances. Instead, the report describes a method that can be applied with a given design and set of isotopic data for estimating design-specific biases and uncertainties.

  1. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: case study.

    PubMed

    Bieda, Bogusław

    2014-05-15

    The purpose of the paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data for the Mittal Steel Poland (MSP) complex in Kraków, Poland. To assess the uncertainty, the CrystalBall® (CB) software, used with a Microsoft® Excel spreadsheet model, is employed. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected from MSP for 2005, was analyzed and used for MC simulation of the LCI model. To describe the random nature of all main products used in this study, a normal distribution was applied. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and that it can be applied throughout the steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management. PMID:24290145
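
    A toy version of this Monte Carlo LCI workflow in Python: each inventory input is sampled from a normal distribution and propagated through a placeholder indicator over 10,000 trials, mimicking the Crystal Ball frequency-chart output. The production figures and emission factors below are illustrative assumptions, not the MSP 2005 data.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 10_000   # number of trials, matching the study's simulation size

        # Hypothetical annual production figures (mean, std) in kilotonnes.
        inputs = {
            "steel":  (5400.0, 270.0),
            "coke":   (1200.0, 60.0),
            "sinter": (4800.0, 240.0),
        }

        # Sample every input from a normal distribution, as in the study, and
        # propagate through a placeholder indicator (here: a CO2 burden).
        co2_factor = {"steel": 1.8, "coke": 0.4, "sinter": 0.2}  # t CO2/t, illustrative
        trials = sum(co2_factor[k] * rng.normal(mu, sd, N)
                     for k, (mu, sd) in inputs.items())

        print(f"CO2 burden: mean={trials.mean():.0f} kt, "
              f"95% interval=({np.percentile(trials, 2.5):.0f}, "
              f"{np.percentile(trials, 97.5):.0f}) kt")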

  2. A Monte Carlo simulation method for assessing biotransformation effects on groundwater fuel hydrocarbon plume lengths

    NASA Astrophysics Data System (ADS)

    McNab, Walt W.

    2001-02-01

    Biotransformation of dissolved groundwater hydrocarbon plumes emanating from leaking underground fuel tanks should, in principle, result in plume length stabilization over relatively short distances, thus diminishing the environmental risk. However, because the behavior of hydrocarbon plumes at most leaking underground fuel tank sites is poorly constrained in terms of release history, groundwater velocity, dispersion, and biotransformation rate, demonstrating such a limitation in plume length is problematic. Biotransformation signatures in the aquifer geochemistry, most notably elevated bicarbonate, may offer a means of constraining the relationship between plume length and the mean biotransformation rate. In this study, modeled plume lengths and spatial bicarbonate differences among a population of synthetic hydrocarbon plumes, generated through Monte Carlo simulation of an analytical solute transport model, are compared to field observations from six underground storage tank (UST) sites at military bases in California. Simulation results indicate that the relationship between plume length and the distribution of bicarbonate is best explained by biotransformation rates that are consistent with ranges commonly reported in the literature. This finding suggests that bicarbonate can indeed provide an independent means for evaluating limitations in hydrocarbon plume length resulting from biotransformation.
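
    The following sketch shows the kind of Monte Carlo exercise described: transport parameters are sampled and pushed through a steady-state 1-D advection-dispersion-decay solution (Bear's solution, a simpler stand-in for the analytical model used in the study) to build a distribution of plume lengths. All parameter ranges are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 5000

        # Sampled transport parameters (lognormal spreads are assumed, not taken
        # from the paper): groundwater velocity v [m/yr], longitudinal
        # dispersivity a [m], first-order biotransformation rate lam [1/yr].
        v   = rng.lognormal(mean=np.log(30.0), sigma=0.7, size=N)
        a   = rng.lognormal(mean=np.log(5.0),  sigma=0.5, size=N)
        lam = rng.lognormal(mean=np.log(1.0),  sigma=1.0, size=N)

        # Steady-state 1-D advection-dispersion-decay solution (Bear):
        #   C(x)/C0 = exp[(x / 2a) * (1 - sqrt(1 + 4*lam*a/v))]
        # Plume length = distance where C falls to a threshold fraction of C0.
        threshold = 1e-3
        s = np.sqrt(1.0 + 4.0 * lam * a / v)
        length = 2.0 * a * np.log(1.0 / threshold) / (s - 1.0)

        print(f"median plume length: {np.median(length):.0f} m")
        print(f"90th percentile:     {np.percentile(length, 90):.0f} m")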

  3. Gamma-ray spectrometry analysis of pebble bed reactor fuel using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Chen, Jianwei; Hawari, Ayman I.; Zhao, Zhongxiang; Su, Bingjing

    2003-06-01

    Monte Carlo simulations were used to study the gamma-ray spectra of pebble bed reactor fuel at various levels of burnup. A fuel depletion calculation was performed using the ORIGEN2.1 code, which yielded the gamma-ray source term that was introduced into the input of an MCNP4C simulation. The simulation assumed the use of a 100% efficient high-purity coaxial germanium (HPGe) detector and a pebble placed at a distance of 100 cm from the detector, and accounted for Gaussian broadening of the gamma-ray peaks. Previously, it was shown that 137Cs, 60Co (introduced as a dopant), and 134Cs are the relevant burnup indicators. The results show that the 662 keV line of 137Cs lies in close proximity to the intense 658 keV line of 97Nb, which results in spectral interference between the lines. However, the 1333 keV line of 60Co and selected 134Cs lines (e.g., at 605 keV) are free from spectral interference, which enhances the possibility of their utilization as relative burnup indicators.

  4. An extended stochastic reconstruction method for catalyst layers in proton exchange membrane fuel cells

    NASA Astrophysics Data System (ADS)

    Kang, Jinfen; Moriyama, Koji; Kim, Seung Hyun

    2016-09-01

    This paper presents an extended, stochastic reconstruction method for catalyst layers (CLs) of Proton Exchange Membrane Fuel Cells (PEMFCs). The focus is placed on the reconstruction of customized, low platinum (Pt) loading CLs, where the microstructure can substantially influence performance. The sphere-based simulated annealing (SSA) method is extended to generate CL microstructures with specified and controllable structural properties for agglomerates, ionomer, and Pt catalysts. In the present method, the agglomerate structures are controlled by employing a trial two-point correlation function in the simulated annealing process. An offset method is proposed to generate more realistic ionomer structures. Variations of the ionomer structure under different humidity conditions are considered to mimic swelling effects. A method to control Pt loading, distribution, and utilization is presented, as is an extension of the method to account for heterogeneity in structural properties, which can be found in manufactured CL samples. Various reconstructed CLs are generated to demonstrate the capability of the proposed method. Proton transport properties of the reconstructed CLs are calculated and validated with experimental data.
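
    A drastically simplified, voxel-based analogue of the simulated-annealing reconstruction idea, assuming a 2-D binary structure and an exponentially decaying target two-point correlation in place of the paper's sphere-based, multi-phase procedure:

        import numpy as np

        rng = np.random.default_rng(3)
        n, phi = 64, 0.4          # grid size and solid volume fraction (assumed)

        def s2(img, rmax=10):
            """Two-point correlation S2(r) sampled along x (periodic, simplified)."""
            return np.array([(img * np.roll(img, r, axis=0)).mean() for r in range(rmax)])

        # Assumed target: exponentially decaying correlation with length scale 3,
        # standing in for the trial function that controls agglomerate size.
        r = np.arange(10)
        target = phi**2 + (phi - phi**2) * np.exp(-r / 3.0)

        img = rng.random((n, n)) < phi
        energy = ((s2(img) - target) ** 2).sum()
        T = 1e-5
        for step in range(20000):
            # Swap a random solid voxel with a random pore voxel (volume preserved);
            # index lists are recomputed each step for clarity, not speed.
            solid, pore = np.argwhere(img), np.argwhere(~img)
            i = tuple(solid[rng.integers(len(solid))])
            j = tuple(pore[rng.integers(len(pore))])
            img[i], img[j] = img[j], img[i]
            new = ((s2(img) - target) ** 2).sum()
            if new < energy or rng.random() < np.exp((energy - new) / T):
                energy = new                      # accept the move
            else:
                img[i], img[j] = img[j], img[i]   # reject: undo the swap
            T *= 0.9999                           # cooling schedule

        print(f"final correlation mismatch: {energy:.2e}")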

  5. SENSITIVITY STUDIES FOR AN IN-SITU PARTIAL DEFECT DETECTOR (PDET) IN SPENT FUEL USING MONTE CARLO TECHNIQUES

    SciTech Connect

    Sitaraman, S; Ham, Y S

    2008-04-28

    This study presents results from Monte Carlo radiation transport calculations aimed at characterizing a novel methodology being developed to detect partial defects in Pressurized Water Reactor (PWR) spent fuel assemblies (SFAs). The methodology uses a combination of measured neutron and gamma fields inside a spent fuel assembly in an in-situ condition where no movement of the fuel assembly is required. Previous studies performed on single isolated assemblies yielded a unique base signature that changes when some of the fuel in the assembly is replaced with dummy fuel. The present studies indicate that this signature remains valid in the in-situ condition, enhancing the prospect of building a practical tool, the Partial Defect Detector (PDET), which can be used in the field for partial defect detection.

  6. Effects of fuel cetane number on the structure of diesel spray combustion: An accelerated Eulerian stochastic fields method

    NASA Astrophysics Data System (ADS)

    Jangi, Mehdi; Lucchini, Tommaso; Gong, Cheng; Bai, Xue-Song

    2015-09-01

    An Eulerian stochastic fields (ESF) method accelerated with the chemistry coordinate mapping (CCM) approach for modelling spray combustion is formulated and applied to model diesel combustion in a constant volume vessel. In ESF-CCM, the thermodynamic states of the discretised stochastic fields are mapped into a low-dimensional phase space. Integration of the stiff chemical ODEs is performed in the phase space and the results are mapped back to the physical domain. After validating the ESF-CCM, the method is used to investigate the effects of fuel cetane number on the structure of diesel spray combustion. It is shown that, depending on the fuel cetane number, the liftoff length varies, which can lead to a change in combustion mode from classical diesel spray combustion to fuel-lean premixed combustion. Spray combustion with a shorter liftoff length exhibits the characteristics of the classical conceptual diesel combustion model proposed by Dec in 1997 (http://dx.doi.org/10.4271/970873), whereas in a case with a lower cetane number the liftoff length is much larger and the spray combustion probably occurs in a fuel-lean premixed mode. Nevertheless, the transport budget at the liftoff location shows that stabilisation at all cetane numbers is governed primarily by the auto-ignition process.

  7. Predicting fissile content of spent nuclear fuel assemblies with the passive neutron Albedo reactivity technique and Monte Carlo code emulation

    SciTech Connect

    Conlin, Jeremy Lloyd; Tobin, Stephen J

    2010-10-13

    There is a great need in the safeguards community to be able to nondestructively quantify the plutonium mass of a spent nuclear fuel assembly. As part of the Next Generation Safeguards Initiative, we are investigating several techniques, or detector systems, which, when integrated, will be capable of quantifying the plutonium mass of a spent fuel assembly without dismantling it. This paper reports on the simulation of one of these techniques, the Passive Neutron Albedo Reactivity with Fission Chambers (PNAR-FC) system. The response of this system over a wide range of spent fuel assemblies with different burnup, initial enrichment, and cooling time characteristics is shown. A Monte Carlo method of using these modeled results to estimate the fissile content of a spent fuel assembly has been developed. Several numerical simulations using this method are shown. Finally, additional developments that are still needed and being worked on are discussed.

  8. Sampling errors for satellite-derived tropical rainfall: Monte Carlo study using a space-time stochastic model

    SciTech Connect

    Bell, T.L.; Abdullah, A.; Martin, R.L.; North, G.R.

    1990-02-28

    Estimates of monthly average rainfall based on satellite observations from a low Earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The authors estimate the size of this error for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). They first examine in detail the statistical description of rainfall on scales from 1 to 10³ km, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10% of the mean for rainfall averaged over a 500 × 500 km² area.

  9. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
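
    A compact surrogate for this experiment in Python, with an AR(1) time series standing in for the GATE-tuned space-time rainfall model and a 12-hourly visit schedule standing in for the satellite orbit; all parameters are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)

        # AR(1) surrogate for the area-averaged rain rate (hourly time step).
        hours, rho = 24 * 30, 0.95
        true_means, est_means = [], []
        for _ in range(2000):                      # many simulated months
            z = np.empty(hours)
            z[0] = rng.normal()
            for t in range(1, hours):
                z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.normal()
            rain = np.maximum(z, 0.0)              # clipped to be skewed, rain-like
            true_means.append(rain.mean())         # "truth": continuous monitoring
            est_means.append(rain[::12].mean())    # satellite overpass every 12 h

        err = np.array(est_means) - np.array(true_means)
        # As in the study, the error distribution comes out close to normal
        # even though the rain-rate distribution itself is far from normal.
        print(f"bias {err.mean():+.4f}, sampling std {err.std():.4f} "
              f"({err.std() / np.mean(true_means):.1%} of the mean)")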

  10. Stochastic Convection Parameterizations

    NASA Technical Reports Server (NTRS)

    Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios

    2012-01-01

    computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection, stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts

  11. Development of a practical fuel management system for PSBR based on advanced three-dimensional Monte Carlo coupled depletion methodology

    NASA Astrophysics Data System (ADS)

    Tippayakul, Chanatip

    The main objective of this research is to develop a practical fuel management system for the Pennsylvania State University Breazeale research reactor (PSBR) based on several advanced Monte Carlo coupled depletion methodologies. Primarily, this research involved two major activities: model and method development, and analysis and validation of the developed models and methods. The starting point was the use of the previously developed fuel management tool, TRIGSIM, to create a Monte Carlo model of core loading 51 (the end of the core loading). Comparing the normalized power results of the Monte Carlo model with those of the current fuel management system (HELIOS/ADMARC-H) showed reasonably good agreement (within 2%-3% difference on average). Moreover, the reactivity of some fuel elements was calculated with the Monte Carlo model and compared with measured data; the fuel element reactivity results were in good agreement with the measurements. However, the subsequent analysis of the conversion from core loading 51 to core loading 52 using TRIGSIM showed quite significant differences in individual control rod worths between the Monte Carlo model and the current methodology model. The differences were mainly caused by inconsistent absorber atomic number densities between the two models. Hence, the model of the first operating core (core loading 2) was revised in light of new information about the absorber atomic densities in order to validate the Monte Carlo model against measured data. With the revised Monte Carlo model, the results agreed better with the measured data. Although TRIGSIM showed good modeling capabilities, its accuracy could be further improved by adopting more advanced algorithms, and TRIGSIM was therefore planned to be upgraded. The first upgrade task involved improving the temperature modeling capability. The new TRIGSIM was

  12. A Comparison of Three Stochastic Approaches for Parameter Estimation and Prediction of Steady-State Groundwater Flow: Nonlocal Moment Equations and Monte Carlo Method Coupled with Ensemble Kalman Filter and Geostatistical Stochastic Inversion.

    NASA Astrophysics Data System (ADS)

    Morales-Casique, E.; Briseño-Ruiz, J. V.; Hernández, A. F.; Herrera, G. S.; Escolero-Fuentes, O.

    2014-12-01

    We present a comparison of three stochastic approaches for estimating log hydraulic conductivity (Y) and predicting steady-state groundwater flow. Two of the approaches are based on the data assimilation technique known as the ensemble Kalman filter (EnKF) and differ in the way the prior statistical moment estimates (PSME), required to build the Kalman gain matrix, are obtained. In the first approach, the Monte Carlo method is employed to compute the PSME of the variables and parameters; we denote this approach EnKFMC. In the second approach, the PSME are computed through the direct solution of approximate nonlocal (integrodifferential) equations that govern the spatial conditional ensemble means (statistical expectations) and covariances of hydraulic head (h) and fluxes; we denote this approach EnKFME. The third approach consists of geostatistical stochastic inversion of the same nonlocal moment equations; we denote this approach IME. In addition to testing the EnKFMC and EnKFME methods in the traditional manner of estimating Y over the entire grid, we propose novel corresponding algorithms that estimate Y at a few selected locations and then interpolate over all grid elements via kriging, as done in the IME method. We tested these methods by estimating Y and h for steady-state groundwater flow in a synthetic two-dimensional domain with a well pumping at a constant rate at the center of the domain. In addition, to evaluate the performance of the estimation methods, we generated four different unconditional realizations that served as "true" fields. The results of our numerical experiments indicate that the three methods were effective in estimating h, reaching at least 80% predictive coverage, although both EnKF variants were superior to the IME method. With respect to estimating Y, the three methods reached similar accuracy in terms of the mean absolute error. Coupling the EnKF methods with kriging to estimate Y reduces to one fourth the CPU time required for data
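
    The EnKF analysis step at the heart of the two ensemble approaches can be sketched as follows, with a random linear map standing in for the groundwater flow solver and Monte Carlo ensemble moments providing the PSME (the EnKFMC variant); dimensions and noise levels are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(11)
        ne, ny, nobs = 200, 50, 5   # ensemble size, Y cells, head observations

        # Prior ensemble of log-conductivity Y and a stand-in linear forward model
        # (a real study would run a groundwater flow solver per member).
        Y = rng.normal(0.0, 1.0, size=(ne, ny))
        H = rng.normal(0.0, 0.3, size=(ny, nobs))
        heads = Y @ H                                    # simulated heads, (ne, nobs)

        truth = rng.normal(0.0, 1.0, ny)
        obs = truth @ H + rng.normal(0.0, 0.01, nobs)    # synthetic observed heads
        R = 0.01 ** 2 * np.eye(nobs)                     # observation error covariance

        # Prior statistical moment estimates from the Monte Carlo ensemble:
        # cross-covariance C_yh and head covariance C_hh build the Kalman gain.
        Ym = Y - Y.mean(0)
        Hm = heads - heads.mean(0)
        C_yh = Ym.T @ Hm / (ne - 1)
        C_hh = Hm.T @ Hm / (ne - 1)
        K = C_yh @ np.linalg.inv(C_hh + R)

        # Update every ensemble member with perturbed observations.
        perturbed = obs + rng.normal(0.0, 0.01, size=(ne, nobs))
        Y_post = Y + (perturbed - heads) @ K.T

        print("prior RMSE:", np.sqrt(((Y.mean(0) - truth) ** 2).mean()))
        print("post RMSE :", np.sqrt(((Y_post.mean(0) - truth) ** 2).mean()))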

  13. Stochastic simulation of fission product activity in primary coolant due to fuel rod failures in typical PWRs under power transients

    NASA Astrophysics Data System (ADS)

    Javed Iqbal, M.; Mirza, Nasir M.; Mirza, Sikander M.

    2008-01-01

    During normal operation of PWRs, routine fuel rod failures result in the release of radioactive fission products (RFPs) into the primary coolant. In this work, a stochastic model has been developed to simulate failure time sequences and release rates for estimating fission product activity in the primary coolant of a typical PWR under power perturbations. In the first part, a stochastic approach is developed based on generating fuel failure event sequences by sampling time-dependent intensity functions. Then the three-stage deterministic methodology of the FPCART code has been extended to include failure sequences and random release rates in a computer code, FPCART-ST, which uses the state-of-the-art LEOPARD and ODMUG codes as subroutines. The 131I activity in the primary coolant predicted by the FPCART-ST code has been found to be in good agreement with corresponding values measured at the ANGRA-1 nuclear power plant. The predictions of the FPCART-ST code with the constant-release option have also been found to agree well with corresponding experimental values for time-dependent 135I, 135Xe and 89Kr concentrations in the primary coolant measured during the EDITHMOX-1 experiments.
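
    The failure-sequence generation step described above can be illustrated with Lewis-Shedler thinning of a non-homogeneous Poisson process; the intensity function and rate values below are invented for the example and are not the paper's data.

        import random

        random.seed(5)

        def failure_times(intensity, lam_max, t_end):
            """Sample fuel-rod failure times from a time-dependent intensity
            function by Poisson-process thinning (Lewis-Shedler)."""
            t, events = 0.0, []
            while True:
                t += random.expovariate(lam_max)      # candidate from majorant rate
                if t > t_end:
                    return events
                if random.random() < intensity(t) / lam_max:
                    events.append(t)                  # accept with ratio lambda(t)/max

        # Illustrative intensity (per day): failures become more likely during
        # a power ramp between days 100 and 200.
        ramp = lambda t: 0.002 + (0.01 if 100.0 < t < 200.0 else 0.0)
        seq = failure_times(ramp, lam_max=0.012, t_end=365.0)
        print(f"{len(seq)} failures, first at day {seq[0]:.1f}" if seq else "no failures")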

  14. Monte Carlo characterization of PWR spent fuel assemblies to determine the detectability of pin diversion

    NASA Astrophysics Data System (ADS)

    Burdo, James S.

    This research is based on the concept that the diversion of nuclear fuel pins from Light Water Reactor (LWR) spent fuel assemblies can be detected by a careful comparison of spontaneous fission neutron and gamma levels at the guide tube locations of the fuel assemblies. The goal is to be able to determine whether some of the assembly fuel pins are missing or have been replaced with dummy or fresh fuel pins. It is known that for typical commercial power spent fuel assemblies, the dominant spontaneous neutron emissions come from Cm-242 and Cm-244. Because of the shorter half-life of Cm-242 (0.45 yr) relative to that of Cm-244 (18.1 yr), Cm-244 is practically the only neutron source contributing to the neutron source term once the spent fuel assemblies are more than two years old. Initially, this research focused upon developing MCNP5 models of PWR fuel assemblies, modeling their depletion using the MONTEBURNS code, and carrying out a preliminary depletion of a ¼ model 17x17 assembly from the TAKAHAMA-3 PWR. Later, the depletion and a more accurate isotopic distribution in the pins at discharge were modeled using the TRITON depletion module of the SCALE computer code. Benchmarking comparisons were performed between the MONTEBURNS and TRITON results. Subsequently, the neutron flux in each of the guide tubes of the TAKAHAMA-3 PWR assembly at two years after discharge, as calculated by the MCNP5 computer code, was determined for various scenarios. Cases were considered with all spent fuel pins present and with replacement of a single pin at a position near the center of the assembly (10,9) and at the corner (17,1). Some scenarios were duplicated with a gamma flux calculation for the high energies associated with Cm-244. For each case, the difference between the flux (neutron or gamma) with all spent fuel pins present and with a pin removed or replaced is calculated for each guide tube. Different detection criteria were established. The first was whether the relative error of the

  15. Decadal climatic variability and regional weather simulation: stochastic nature of forest fuel moisture and climatic forcing

    NASA Astrophysics Data System (ADS)

    Tsinko, Y.; Johnson, E. A.; Martin, Y. E.

    2014-12-01

    The natural range of variability of forest fire frequency is of great interest given the currently changing climate and a seeming increase in the number of fires. The variability of the annual area burned in Canada was not stable over the 20th century. Recently, these changes have been linked to large-scale climate cycles, such as Pacific Decadal Oscillation (PDO) phases and the El Niño-Southern Oscillation (ENSO). The positive phase of the PDO was associated with an increased probability of hot dry spells leading to drier fuels and increased area burned. However, so far only one historical timeline has been used to assess correlations between the natural climate oscillations and forest fire frequency. To address this limitation, weather generators are extensively used in hydrological and agricultural modeling to extend short instrumental records and to synthesize long sequences of daily weather parameters that are different from, but statistically similar to, historical weather. In the current study, synthetic weather models were used to assess the effects of alternative weather timelines on fuel moisture in Canada, via the Canadian Forest Fire Weather Index moisture codes, and on potential fire frequency. The variability of the fuel moisture codes was found to increase with the length of the simulated series, indicating that the natural range of variability of forest fire frequency may be larger than that calculated from the available short records. This may be viewed as a manifestation of the Hurst effect. Since PDO phases are thought to be caused by diverse mechanisms, including overturning oceanic circulation, some of the lower-frequency signals may be attributed to the long-term memory of the oceanic system. Thus, care must be taken when assessing the natural variability of climate-dependent processes without accounting for potential long-term mechanisms.

  16. A stochastic model and Monte Carlo algorithm for fluctuation-induced H2 formation on the surface of interstellar dust grains

    NASA Astrophysics Data System (ADS)

    Sabelfeld, K. K.

    2015-09-01

    A stochastic algorithm for simulation of fluctuation-induced kinetics of H2 formation on grain surfaces is suggested as a generalization of the technique developed in our recent studies [1], where the method was used to describe the annihilation of spatially separate electrons and holes in a disordered semiconductor. The stochastic model is based on the spatially inhomogeneous, nonlinear integro-differential Smoluchowski equations with a random source term. In this paper we derive the general system of Smoluchowski-type equations for the formation of H2 from two hydrogen atoms on the surface of interstellar dust grains with physisorption and chemisorption sites. We focus in this study on the spatial distribution, and numerically investigate the segregation in the case of a source with continuous generation in time and random distribution in space. The stochastic particle method presented is based on a probabilistic interpretation of the underlying process as a stochastic Markov process of an interacting particle system at discrete but randomly progressing time instances. The segregation is analyzed through a correlation analysis of the vector random field of concentrations, which appears to be isotropic in space and stationary in time.

  17. Preliminary TRIGA fuel burn-up evaluation by means of Monte Carlo code and computation based on total energy released during reactor operation

    SciTech Connect

    Borio Di Tigliole, A.; Bruni, J.; Panza, F.; Alloni, D.; Cagnazzo, M.; Magrotti, G.; Manera, S.; Prata, M.; Salvini, A.; Chiesa, D.; Clemenza, M.; Pattavina, L.; Previtali, E.; Sisti, M.; Cammi, A.

    2012-07-01

    The aim of this work was to perform a rough preliminary evaluation of the fuel burn-up of the TRIGA Mark II research reactor at the Applied Nuclear Energy Laboratory (LENA) of the University of Pavia. To achieve this goal, the neutron flux density in each fuel element was computed by means of the Monte Carlo code MCNP (Version 4C). The results of the simulations were used to calculate the effective cross sections (fission and capture) in the fuel and, finally, to evaluate the burn-up and uranium consumption in each fuel element. The evaluation showed fair agreement with a fuel burn-up computation based on the total energy released during reactor operation. (authors)

  18. Neutron analysis of spent fuel storage installation using parallel computing and advance discrete ordinates and Monte Carlo techniques.

    PubMed

    Shedlock, Daniel; Haghighat, Alireza

    2005-01-01

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were designed for only ~10 y of spent fuel pool storage, more than 35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinates PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous-energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (within 5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous-energy cross sections, and the fact that the CASK library is based on the old ENDF

  19. Stochastic Optimization of Complex Systems

    SciTech Connect

    Birge, John R.

    2014-03-20

    This project focused on methodologies for the solution of stochastic optimization problems based on relaxation and penalty methods, Monte Carlo simulation, parallel processing, and inverse optimization. The main results of the project were the development of a convergent method for the solution of models that include expectation constraints as in equilibrium models, improvement of Monte Carlo convergence through the use of a new method of sample batch optimization, the development of new parallel processing methods for stochastic unit commitment models, and the development of improved methods in combination with parallel processing for incorporating automatic differentiation methods into optimization.

  20. Estimation of water distribution and degradation mechanisms in polymer electrolyte membrane fuel cell gas diffusion layers using a 3D Monte Carlo model

    NASA Astrophysics Data System (ADS)

    Seidenberger, K.; Wilhelm, F.; Schmitt, T.; Lehnert, W.; Scholta, J.

    Understanding of water management in PEM fuel cells, of the degradation mechanisms of the gas diffusion layer (GDL), and of their mutual impact is still incomplete. Different modelling approaches contribute to gaining deeper insight into the processes occurring during fuel cell operation. For the GDL, models can help to obtain information about the distribution of liquid water within the material. In particular, flooded regions can be identified, and the water distribution can be linked to the system geometry. Employed for material development, this information can help to increase the lifetime of the GDL as a fuel cell component and of the fuel cell as a whole. The Monte Carlo (MC) model presented here simulates and analyses the water budget in PEM fuel cell GDLs. The model comprises a three-dimensional, voxel-based representation of the GDL substrate, a section of the flowfield channel and the corresponding rib. Information on the water distribution within the substrate part of the GDL can be estimated.

  1. Stochastic microstructural modeling of fuel cell gas diffusion layers and numerical determination of transport properties in different liquid water saturation levels

    NASA Astrophysics Data System (ADS)

    Tayarani-Yoosefabadi, Z.; Harvey, D.; Bellerive, J.; Kjeang, E.

    2016-01-01

    Gas diffusion layer (GDL) materials in polymer electrolyte membrane fuel cells (PEMFCs) are commonly made hydrophobic to enhance water management by avoiding liquid water blockage of the pores and facilitating reactant gas transport to the adjacent catalyst layer. In this work, a stochastic microstructural modeling approach is developed to simulate the transport properties of a commercial carbon-paper-based GDL over a range of PTFE loadings and liquid water saturation levels. The proposed stochastic method mimics the steps of the GDL manufacturing process and resolves all relevant phases, including fiber, binder, PTFE, liquid water, and gas. After thorough validation of the general microstructure against literature and in-house data, a comprehensive set of anisotropic transport properties is simulated for the reconstructed GDL at different PTFE loadings and liquid water saturation levels and validated through comparison with in-house ex situ experimental data and empirical formulations. In general, the results show good agreement between simulated and measured data. Decreasing trends in porosity, gas diffusivity, and permeability are obtained with increasing PTFE loading and liquid water content, while the thermal conductivity is found to increase with liquid water saturation. Using the validated model, new correlations for saturation-dependent GDL properties are proposed.

  2. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang

    2015-01-01

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance-rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
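
    The single-loop acceptance-rejection idea can be sketched as follows, using the analytically tractable sum kernel and a majorant built from the current maximum particle volume; this omits the differential weighting, multi-cell parallelism, and GPU aspects of the paper, and all values are illustrative.

        import numpy as np

        rng = np.random.default_rng(9)

        # Toy coagulation kernel K(v_i, v_j) = v_i + v_j (sum kernel); the
        # majorant K_hat = 2 * v_max bounds it using a single-particle maximum,
        # so candidate pairs can be drawn without a double loop over pairs.
        v = rng.uniform(0.5, 1.5, size=10_000)       # particle volumes

        def coagulation_step(v):
            vmax = v.max()
            n = len(v)
            rate_hat = 0.5 * n * (n - 1) * 2.0 * vmax   # majorant total rate
            dt = rng.exponential(1.0 / rate_hat)        # waiting time from majorant
            while True:
                i, j = rng.integers(n, size=2)
                if i == j:
                    continue
                if rng.random() < (v[i] + v[j]) / (2.0 * vmax):
                    break                               # accept pair with K / K_hat
            v[i] += v[j]                                # merge particle j into i
            return np.delete(v, j), dt

        t = 0.0
        for _ in range(2000):
            v, dt = coagulation_step(v)
            t += dt
        print(f"t={t:.3e}, particles left={len(v)}, mean volume={v.mean():.3f}")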

  3. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    SciTech Connect

    Xu, Zuwei; Zhao, Haibo Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are

  4. A multilevel stochastic collocation method for SPDEs

    SciTech Connect

    Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton

    2015-03-10

    We present a multilevel stochastic collocation method that, like multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity is used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages over standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.
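
    For a compact illustration of the multilevel idea, telescoping corrections across a hierarchy of spatial approximations, the sketch below uses Monte Carlo sampling on each level rather than the paper's sparse-grid interpolants; the "solver" is a stand-in with an assumed O(h²) discretization bias.

        import numpy as np

        rng = np.random.default_rng(13)

        def solve(xi, h):
            """Stand-in for a PDE solve with random input xi on mesh size h:
            returns a quantity of interest with O(h^2) bias plus randomness."""
            return np.sin(xi) + h**2 * (1.0 + 0.1 * xi)

        levels = [0.2 / 2**l for l in range(4)]    # mesh hierarchy h_0 > h_1 > ...
        samples = [400, 100, 25, 6]                # fewer samples on finer levels

        est = 0.0
        for l, (h, n) in enumerate(zip(levels, samples)):
            xi = rng.normal(size=n)
            if l == 0:
                est += solve(xi, h).mean()         # coarse-level expectation
            else:
                # Correction term: difference between consecutive levels uses
                # the SAME samples, so its variance (and cost) stays small.
                est += (solve(xi, h) - solve(xi, levels[l - 1])).mean()

        print(f"multilevel estimate: {est:.4f}  (exact E[sin(xi)] = 0)")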

  5. Monte Carlo simulations of differential die-away instrument for determination of fissile content in spent fuel assemblies

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Hoon; Menlove, Howard O.; Swinhoe, Martyn T.; Tobin, Stephen J.

    2011-10-01

    The differential die-away (DDA) technique has been simulated using the MCNPX code to quantify its capability of measuring the fissile content in spent fuel assemblies. For 64 different spent fuel cases of various initial enrichment, burnup and cooling time, the count rates and signal-to-background ratios of the DDA system were obtained, where the neutron background comes mainly from the 244Cm in the spent fuel. To quantify the total fissile mass of spent fuel, the concept of an effective 239Pu mass was introduced by weighting the relative contributions of 235U and 241Pu to the signal against that of 239Pu, and calibration curves of DDA count rate vs. 239Pu_eff were obtained with the MCNPX code. With a deuterium-tritium (DT) neutron generator of 10⁹ n/s strength, signal-to-background ratios of sufficient magnitude are acquired for a DDA system with the spent fuel assembly in water.

  6. Stochastic Feedforward Control Technique

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1990-01-01

    A class of commanded trajectories is modeled as a stochastic process. The Advanced Transport Operating Systems (ATOPS) research and development program conducted by NASA Langley Research Center is aimed at developing capabilities for increased airport capacity; safe and accurate flight in adverse weather conditions, including wind shear and wake vortex avoidance; and reduced fuel consumption. The program draws on advances in modern control design techniques and the increased capabilities of digital flight computers, coupled with accurate guidance information from the Microwave Landing System (MLS). The stochastic feedforward control technique was developed within the context of the ATOPS program.

  7. Stochastic games

    PubMed Central

    Solan, Eilon; Vieille, Nicolas

    2015-01-01

    In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which were the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution. PMID:26556883

  8. Monte Carlo Modeling of Fast Sub-critical Assembly with MOX Fuel for Research of Accelerator-Driven Systems

    NASA Astrophysics Data System (ADS)

    Polanski, A.; Barashenkov, V.; Puzynin, I.; Rakhno, I.; Sissakian, A.

    A sub-critical assembly driven by the existing 660 MeV JINR proton accelerator is considered. The assembly consists of a central cylindrical lead target surrounded by mixed-oxide (MOX) fuel (PuO2 + UO2) and a beryllium reflector. The dependence of the energy gain on the proton energy, the neutron multiplication coefficient, and the neutron energy spectra have been calculated. It is shown that for a subcritical assembly with mixed-oxide (MOX) BN-600 fuel (28% PuO2 + 72% UO2) and an effective fuel material density of 9 g/cm³, the multiplication coefficient k_eff equals 0.945, the energy gain equals 27, and the neutron flux density is 10¹² cm⁻² s⁻¹ for protons with an energy of 660 MeV and an accelerator beam current of 1 μA.

  9. Stochastic models: theory and simulation.

    SciTech Connect

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
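
    As a small example of generating samples of such a model, the sketch below draws paths of a zero-mean stationary Gaussian process with exponential covariance via Cholesky factorization of the covariance matrix; this is one standard algorithm, not necessarily the one used in the report.

        import numpy as np

        rng = np.random.default_rng(17)

        # Sample paths of a Gaussian process with covariance
        # C(t, s) = exp(-|t - s| / ell) on a discrete time grid.
        t = np.linspace(0.0, 10.0, 200)
        ell = 1.5
        C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)
        L = np.linalg.cholesky(C + 1e-10 * np.eye(t.size))   # jitter for stability

        # Each column of L @ z is an independent sample path of the process.
        paths = L @ rng.normal(size=(t.size, 1000))
        print("empirical variance at mid-domain:", paths[100].var())   # ~1 by construction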

  10. Experiments and Theoretical Data for Studying the Impact of Fission Yield Uncertainties on the Nuclear Fuel Cycle with TALYS/GEF and the Total Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Pomp, S.; Al-Adili, A.; Alhassan, E.; Gustavsson, C.; Helgesson, P.; Hellesen, C.; Koning, A. J.; Lantz, M.; Österlund, M.; Rochman, D.; Simutkin, V.; Sjöstrand, H.; Solders, A.

    2015-01-01

    We describe the research program of the nuclear reactions research group at Uppsala University concerning experimental and theoretical efforts to quantify and reduce nuclear data uncertainties relevant for the nuclear fuel cycle. We briefly describe the Total Monte Carlo (TMC) methodology and how it can be used to study fuel cycle and accident scenarios, and summarize our relevant experimental activities. Input from the latter is to be used to guide the nuclear models and constrain parameter space for TMC. The TMC method relies on the availability of good nuclear models. For this we use the TALYS code which is currently being extended to include the GEF model for the fission channel. We present results from TALYS-1.6 using different versions of GEF with both default and randomized input parameters and compare calculations with experimental data for 234U(n,f) in the fast energy range. These preliminary studies reveal some systematic differences between experimental data and calculations but give overall good and promising results.

  11. Experiments and Theoretical Data for Studying the Impact of Fission Yield Uncertainties on the Nuclear Fuel Cycle with TALYS/GEF and the Total Monte Carlo Method

    SciTech Connect

    Pomp, S.; Al-Adili, A.; Alhassan, E.; Gustavsson, C.; Helgesson, P.; Hellesen, C.; Koning, A.J.; Lantz, M.; Österlund, M.; Rochman, D.; Simutkin, V.; Sjöstrand, H.; Solders, A.

    2015-01-15

    We describe the research program of the nuclear reactions research group at Uppsala University concerning experimental and theoretical efforts to quantify and reduce nuclear data uncertainties relevant for the nuclear fuel cycle. We briefly describe the Total Monte Carlo (TMC) methodology and how it can be used to study fuel cycle and accident scenarios, and summarize our relevant experimental activities. Input from the latter is to be used to guide the nuclear models and constrain parameter space for TMC. The TMC method relies on the availability of good nuclear models. For this we use the TALYS code which is currently being extended to include the GEF model for the fission channel. We present results from TALYS-1.6 using different versions of GEF with both default and randomized input parameters and compare calculations with experimental data for 234U(n,f) in the fast energy range. These preliminary studies reveal some systematic differences between experimental data and calculations but give overall good and promising results.

  12. QB1 - Stochastic Gene Regulation

    SciTech Connect

    Munsky, Brian

    2012-07-23

    Summaries of this presentation are: (1) Stochastic fluctuations, or 'noise', are present in the cell - random motion and competition between reactants, low copy numbers and quantization of reactants, upstream processes; (2) These fluctuations may be very important - cell-to-cell variability, cell fate decisions (switches), signal amplification or damping, stochastic resonances; and (3) Some tools are available to model them - kinetic Monte Carlo simulations (SSA and variants), moment approximation methods, Finite State Projection. We will see how modeling these reactions can tell us more about the underlying processes of gene regulation.
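
    A minimal Gillespie SSA (kinetic Monte Carlo) example for a birth-death gene expression model, one of the tools named above; the rate parameters are illustrative and not from the presentation.

        import random

        random.seed(23)

        # Gillespie SSA for a birth-death gene expression toy model:
        #   gene -> gene + mRNA   (rate k_tx)
        #   mRNA -> 0             (rate k_deg * m)
        k_tx, k_deg = 10.0, 1.0
        t, t_end, m = 0.0, 100.0, 0
        trajectory = []
        while t < t_end:
            rates = [k_tx, k_deg * m]
            total = sum(rates)
            t += random.expovariate(total)       # exponential waiting time
            if random.random() < rates[0] / total:
                m += 1                           # transcription event
            else:
                m -= 1                           # degradation event
            trajectory.append((t, m))

        mean = k_tx / k_deg
        print(f"final copy number {m}; stationary mean {mean:.0f} (Poisson, Fano = 1)")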

  13. Stochastic kinetic mean field model

    NASA Astrophysics Data System (ADS)

    Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.

    2016-07-01

    This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open source program code is provided on

  14. The Effect of Stochastic Perturbation of Fuel Distribution on the Criticality of a One Speed Reactor and the Development of Multi-Material Multinomial Line Statistics

    NASA Technical Reports Server (NTRS)

    Jahshan, S. N.; Singleterry, R. C.

    2001-01-01

    The effect of random fuel redistribution on the eigenvalue of a one-speed reactor is investigated. An ensemble of such reactors, identical to a homogeneous reference critical reactor except for the fissile isotope density distribution, is constructed such that it meets a set of well-posed redistribution requirements. The average eigenvalue is evaluated when the total fissile loading per ensemble element, or realization, is conserved. The perturbation is proven to increase the reactor criticality on average when it is uniformly distributed. The various causes of the change in reactivity, and their relative effects, are identified and ranked. From this, a path towards identifying the causes and relative effects of reactivity fluctuations for the energy-dependent problem is indicated. The perturbation method of using multinomial distributions to represent the perturbed reactor is developed. This method has some advantages that can be of use in other stochastic problems. Finally, some features of this perturbation problem are related to other techniques that have been used to address similar problems.

  15. A refined model of the thyrotropin-releasing hormone (TRH) receptor binding pocket. Novel mixed mode Monte Carlo/stochastic dynamics simulations of the complex between TRH and TRH receptor.

    PubMed

    Laakkonen, L J; Guarnieri, F; Perlman, J H; Gershengorn, M C; Osman, R

    1996-06-18

    Previous mutational and computational studies of the thyrotropin-releasing hormone (TRH) receptor identified several residues in its binding pocket [see the accompanying paper, Perlman et al. (1996) Biochemistry 35, 7643-7650]. On the basis of the initial model, constructed with standard energy minimization techniques, we conducted 15 mixed-mode Monte Carlo/stochastic dynamics (MC-SD) simulations to allow extended sampling of the conformational states of the ligand and the receptor in the complex. A simulated annealing protocol was adopted in which the complex was cooled from 600 to 310 K in segments of 30 ps of MC-SD simulation for each change of 100 K. Analysis of the simulation results demonstrated that the mixed-mode MC-SD protocol maintained the desired temperature in the constant-temperature simulation segments. The elevated temperature and the repeated simulations allowed adequate sampling of the torsional space of the complex while successfully conserving the general structure and good helicity of the receptor. For the analysis of the interaction between TRH and the binding pocket, TRH was divided into four groups consisting of pyroGlu, His, ProNH2, and the backbone. The pairwise interaction energies of the four separate portions of TRH with the corresponding residues in the receptor provide a physicochemical basis for understanding ligand-receptor complexes. The interaction of pyroGlu with Tyr106 shows a bimodal distribution representing two populations: one with a H-bond and another without it. Asp195 was shown to compete with pyroGlu for the H-bond to Tyr106. Simulations in which Asp195 was interacting with Arg283, thus removing it from the vicinity of Tyr106, resulted in a stable H-bond to pyroGlu. In all simulations His showed a van der Waals attraction to Tyr282 and a weak electrostatic repulsion from Arg306. The ProNH2 had a strong and frequent H-bonding interaction with Arg306. The backbone carbonyls show a frequent H

  16. Monte Carlo techniques for real-time quantum dynamics

    SciTech Connect

    Dowling, Mark R. . E-mail: dowling@physics.uq.edu.au; Davis, Matthew J.; Drummond, Peter D.; Corney, Joel F.

    2007-01-10

    The stochastic-gauge representation is a method of mapping the equation of motion for the quantum mechanical density operator onto a set of equivalent stochastic differential equations. One of the stochastic variables is termed the 'weight', and its magnitude is related to the importance of the stochastic trajectory. We investigate the use of Monte Carlo algorithms to improve the sampling of the weighted trajectories and thus reduce the sampling error in a simulation of quantum dynamics. The method can be applied to calculations in real time, as well as in imaginary time, for which Monte Carlo algorithms are more commonly used. The Monte Carlo algorithms are applicable when the weight is guaranteed to be real, and we demonstrate how to ensure this is the case. Examples are given for the anharmonic oscillator, where large improvements over stochastic sampling are observed.

  17. Stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response

  18. Stochastic cooling

    SciTech Connect

    Bisognano, J.; Leemann, C.

    1982-03-01

    Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker, so that an individual particle receives the amplified version of the signal it produced at the pickup. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large, number of particles, there remains a residue of the single-particle damping which is of practical use in accumulating low phase space density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work led directly to the design and construction of the Antiproton Accumulator at CERN and the beginnings of proton-antiproton colliding beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for an antiproton accumulator for the Tevatron.

  19. An advanced deterministic method for spent fuel criticality safety analysis

    SciTech Connect

    DeHart, M.D.

    1998-01-01

    Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited to eigenvalue calculations, they lack the localized detail necessary to assess the uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite-differencing schemes made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of the extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper demonstrates the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.

  20. Radiation Transport Computation in Stochastic Media: Method and Application

    NASA Astrophysics Data System (ADS)

    Liang, Chao

    Stochastic media, characterized by the stochastic distribution of inclusions in a background medium, are typical radiation transport media encountered in natural or engineering systems. In the community of radiation transport computation, there is a constant demand for accurate and efficient methods that can account for the nature of the stochastic distribution. In this dissertation, we focus on methodology development for radiation transport computation applied to neutronic analyses of nuclear reactor designs characterized by the stochastic distribution of particle fuel. Reactor concepts employing a fuel design that consists of a random heterogeneous mixture of fissile material and non-fissile moderator are regularly proposed. Key physical quantities such as core criticality and power distribution, reactivity control design parameters, depletion and fuel burn-up need to be carefully evaluated. In order to meet these practical requirements, we first need to develop accurate and fast computational methods that can effectively account for the stochastic nature of the double heterogeneity configuration. A Monte Carlo based method called the Chord Length Sampling (CLS) method is considered to be a promising method for analyzing such TRISO-type fueled reactors. Although the CLS method was proposed more than two decades ago and much research has been conducted to enhance its applicability, further efforts are still needed to address some key research gaps. (1) There is a general lack of thorough investigation of the factors that give rise to the inaccuracy of the CLS method observed by many researchers. The accuracy of the CLS method depends on the optical and geometric properties of the system, and in some specific scenarios considerable inaccuracies have been reported. However, no research has provided a clear interpretation of the reasons responsible for the inaccuracy in the reported scenarios. Furthermore, no ...
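
    To make the core idea concrete, here is a minimal, hypothetical sketch of chord length sampling on a one-dimensional binary Markovian slab: instead of storing an explicit realization of the fuel geometry, the distance to the next material interface is sampled on the fly from an exponential chord-length distribution. All cross sections and mean chord lengths below are invented for illustration; actual CLS implementations for TRISO-fueled systems are considerably more involved.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical 1D binary stochastic slab (Markovian mixture)
        L = 10.0                               # slab thickness (cm)
        lam = {"matrix": 2.0, "fuel": 0.5}     # mean chord lengths (cm)
        sig_t = {"matrix": 0.1, "fuel": 1.5}   # total cross sections (1/cm)

        def transmitted():
            # One history: alternate material chords sampled on the fly (the
            # CLS idea: no explicit geometry is ever stored) and test whether
            # the particle crosses the slab without colliding.
            pos, mat = 0.0, "matrix"
            while pos < L:
                chord = rng.exponential(lam[mat])        # to next interface
                fly = rng.exponential(1.0 / sig_t[mat])  # to next collision
                if fly < min(chord, L - pos):
                    return False                         # collided in the slab
                pos += chord
                mat = "fuel" if mat == "matrix" else "matrix"
            return True

        n = 100_000
        print("uncollided transmission ~", sum(transmitted() for _ in range(n)) / n)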

  1. Stochastic-field cavitation model

    SciTech Connect

    Dumond, J.; Magagnato, F.; Class, A.

    2013-07-15

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method, which solves pdf transport based on Eulerian fields, has been proposed; it eliminates the need to mix Eulerian and Lagrangian techniques or to invoke prescribed pdf assumptions. In the present work, the stochastic-field method is applied for the first time to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  2. Stochastic solution to quantum dynamics

    NASA Technical Reports Server (NTRS)

    John, Sarah; Wilson, John W.

    1994-01-01

    The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends earlier one-dimensional work. Also, by developing a new algorithm, the application to bound state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.

  3. Stochastic Cooling

    SciTech Connect

    Blaskiewicz, M.

    2011-01-01

    Stochastic cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and in ICE (the Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed and applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently, a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.

  4. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect

    Infanger, G. (Dept. of Operations Research; Technische Univ., Vienna, Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming but, until recently, seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multicomputer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
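
    The decomposition-plus-importance-sampling machinery of this record is beyond a short example, but the basic idea of combining Monte Carlo sampling with linear programming can be sketched with a sample-average (deterministic-equivalent) version of a tiny two-stage recourse problem. The cost data and demand distribution below are invented, and scipy's linprog stands in for a production LP solver; this is not Infanger's algorithm, only the flavor of it.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)

        # Two-stage newsvendor-style LP: order x now at unit cost c; after the
        # demand d_k is revealed, buy any shortfall y_k at the higher price q.
        # Minimize c*x + (1/K) * sum_k q*y_k over the K sampled scenarios.
        c, q, K = 1.0, 3.0, 500
        d = rng.gamma(shape=4.0, scale=25.0, size=K)   # sampled demands

        # Variables z = [x, y_1, ..., y_K]; y_k >= d_k - x  <=>  -x - y_k <= -d_k
        A = np.zeros((K, K + 1))
        A[:, 0] = -1.0
        A[:, 1:] = -np.eye(K)
        cost = np.r_[c, np.full(K, q / K)]
        res = linprog(cost, A_ub=A, b_ub=-d, bounds=[(0, None)] * (K + 1))
        print("sampled optimal order quantity:", res.x[0])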

  5. Binomial moment equations for stochastic reaction systems.

    PubMed

    Barzel, Baruch; Biham, Ofer

    2011-04-15

    A highly efficient formulation of moment equations for stochastic reaction networks is introduced. It is based on a set of binomial moments that capture the combinatorics of the reaction processes. The resulting set of equations can be easily truncated to include moments up to any desired order. The number of equations is dramatically reduced compared to the master equation. This formulation enables the simulation of complex reaction networks, involving a large number of reactive species, much beyond the feasibility limit of any existing method. It provides an equation-based paradigm for the analysis of stochastic networks, complementing the commonly used Monte Carlo simulations. PMID:21568538

  6. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  7. Shell model Monte Carlo methods

    SciTech Connect

    Koonin, S.E.; Dean, D.J.

    1996-10-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such shell model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of gamma-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  8. An advanced deterministic method for spent-fuel criticality safety analysis

    SciTech Connect

    DeHart, M.D.

    1998-09-01

    Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, nonorthogonal configurations of fissile materials, typical of real-world problems. In the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for nonorthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitation of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built on the ESC formalism, is being developed as part of the SCALE code system. This paper demonstrates the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.

  9. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  10. Efficiency of Health Care Production in Low-Resource Settings: A Monte-Carlo Simulation to Compare the Performance of Data Envelopment Analysis, Stochastic Distance Functions, and an Ensemble Model

    PubMed Central

    Giorgio, Laura Di; Flaxman, Abraham D.; Moses, Mark W.; Fullman, Nancy; Hanlon, Michael; Conner, Ruben O.; Wollum, Alexandra; Murray, Christopher J. L.

    2016-01-01

    Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings. PMID:26812685

  11. Monte Carlo Methods in the Physical Sciences

    SciTech Connect

    Kalos, M H

    2007-06-06

    I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.

  12. Primal and Dual Integrated Force Methods Used for Stochastic Analysis

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.

    2005-01-01

    At the NASA Glenn Research Center, the primal and dual integrated force methods are being extended for the stochastic analysis of structures. The stochastic simulation can be used to quantify the consequence of scatter in stress and displacement response because of a specified variation in input parameters such as loads (mechanical, thermal, and support settling loads), material properties (strength, modulus, density, etc.), and sizing design variables (depth, thickness, etc.). All the parameters are modeled as random variables with given probability distributions, means, and covariances. The stochastic response is formulated through a quadratic perturbation theory, and it is verified through Monte Carlo simulation.
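
    The flavor of verifying a perturbation-based variance estimate against Monte Carlo can be shown with a deliberately simple stand-in response (not the integrated force method equations): propagate the input scatter through a first-order Taylor expansion and compare with brute-force sampling. All means and standard deviations below are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative response: cantilever tip deflection delta = P*L^3/(3*E*I),
        # with random load P and modulus E (all numbers invented).
        Lb, Ib = 2.0, 8.0e-6                       # beam length (m), inertia (m^4)
        muP, sdP = 1.0e3, 1.0e2                    # load mean / std (N)
        muE, sdE = 2.0e11, 1.0e10                  # modulus mean / std (Pa)

        delta = lambda P, E: P * Lb**3 / (3.0 * E * Ib)

        # First-order perturbation: var(delta) ~ sum_i (d delta/d p_i)^2 var(p_i)
        dP = Lb**3 / (3.0 * muE * Ib)              # analytic d(delta)/dP
        dE = -muP * Lb**3 / (3.0 * muE**2 * Ib)    # analytic d(delta)/dE
        sd_pert = np.hypot(dP * sdP, dE * sdE)

        # Monte Carlo verification with independent Gaussian inputs
        P = rng.normal(muP, sdP, 200_000)
        E = rng.normal(muE, sdE, 200_000)
        print(sd_pert, delta(P, E).std())          # the two should nearly agree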

  13. Collisionally induced stochastic dynamics of fast ions in solids

    SciTech Connect

    Burgdoerfer, J.

    1989-01-01

    Recent developments in the theory of excited state formation in collisions of fast highly charged ions with solids are reviewed. We discuss a classical transport theory employing Monte Carlo sampling of solutions of a microscopic Langevin equation. Dynamical screening by the dielectric medium as well as multiple collisions are incorporated through the drift and stochastic forces in the Langevin equation. The close relationship between the extrinsically stochastic dynamics described by the Langevin equation and the intrinsic stochasticity in chaotic nonlinear dynamical systems is stressed. Comparison with experimental data and possible modifications by quantum corrections are discussed. 49 refs., 11 figs.

  14. A heterogeneous stochastic FEM framework for elliptic PDEs

    SciTech Connect

    Hou, Thomas Y. Liu, Pengfei

    2015-01-15

    We introduce a new concept of sparsity for the stochastic elliptic operator −div(a(x,ω)∇(⋅)), which reflects the compactness of its inverse operator in the stochastic direction and allows for spatially heterogeneous stochastic structure. This new concept of sparsity motivates a heterogeneous stochastic finite element method (HSFEM) framework for linear elliptic equations, which discretizes the equations using the heterogeneous coupling of spatial basis with local stochastic basis to exploit the local stochastic structure of the solution space. We also provide a sampling method to construct the local stochastic basis for this framework using the randomized range finding techniques. The resulting HSFEM involves two stages and suits the multi-query setting: in the offline stage, the local stochastic structure of the solution space is identified; in the online stage, the equation can be efficiently solved for multiple forcing functions. An online error estimation and correction procedure through Monte Carlo sampling is given. Numerical results for several problems with high dimensional stochastic input are presented to demonstrate the efficiency of the HSFEM in the online stage.

  15. The theory of hybrid stochastic algorithms

    SciTech Connect

    Kennedy, A.D. (Supercomputer Computations Research Inst.)

    1989-11-21

    These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs.
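
    A minimal sketch of the Hybrid (Hamiltonian) Monte Carlo method with leapfrog integration, applied to a single-variable Gaussian "free field" target, illustrates the structure the lectures describe: momentum refreshment, a reversible leapfrog trajectory, and a Metropolis accept/reject step on the energy change. The step size and trajectory length below are arbitrary choices, not values from the lectures.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy target: one site of a free field, action S(x) = x^2/2 (unit Gaussian)
        S = lambda x: 0.5 * x**2
        force = lambda x: -x                       # -dS/dx

        def hmc_step(x, n_steps=10, dt=0.3):
            p = rng.normal()                       # momentum refreshment
            xn, pn = x, p + 0.5 * dt * force(x)    # leapfrog: initial half kick
            for _ in range(n_steps):
                xn = xn + dt * pn                  # full drift
                pn = pn + dt * force(xn)           # full kick
            pn -= 0.5 * dt * force(xn)             # trim last kick to a half step
            dH = (S(xn) + 0.5 * pn**2) - (S(x) + 0.5 * p**2)
            return xn if np.log(rng.uniform()) < -dH else x   # Metropolis test

        x, chain = 0.0, []
        for _ in range(20_000):
            x = hmc_step(x)
            chain.append(x)
        print(np.mean(chain), np.var(chain))       # should approach 0 and 1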

  16. Stochastic cooling in RHIC

    SciTech Connect

    Brennan, J.M.; Blaskiewicz, M. M.; Severino, F.

    2009-05-04

    After the success of longitudinal stochastic cooling of bunched heavy ion beams in RHIC, transverse stochastic cooling in the vertical plane of the Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.

  17. Stochastic volatility models and Kelvin waves

    NASA Astrophysics Data System (ADS)

    Lipton, Alex; Sepp, Artur

    2008-08-01

    We use stochastic volatility models to describe the evolution of an asset price, its instantaneous volatility, and its realized volatility. In particular, we concentrate on the Stein and Stein model (SSM) (1991) for the stochastic asset volatility and the Heston model (HM) (1993) for the stochastic asset variance. By construction, the volatility is not sign definite in SSM and is non-negative in HM. It is well known that both models produce closed-form expressions for the prices of vanilla options via the Lewis-Lipton formula. However, the numerical pricing of exotic options by means of finite difference and Monte Carlo methods is much more complex for HM than for SSM. Until now, this complexity was considered an acceptable price to pay for ensuring that the asset volatility is non-negative. We argue that having negative stochastic volatility is a psychological rather than a financial or mathematical problem, and advocate using SSM rather than HM in most applications. We extend SSM by adding volatility jumps and obtain a closed-form expression for the density of the asset price and its realized volatility. We also show that the current method of choice for solving pricing problems with stochastic volatility (via the affine ansatz for the Fourier-transformed density function) can be traced back to the Kelvin method designed in the 19th century for studying wave motion problems arising in fluid dynamics.
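
    A hedged sketch of the Monte Carlo route the authors contrast with closed-form pricing: Euler time stepping of Stein-Stein-type dynamics (Ornstein-Uhlenbeck volatility, correlated Brownian drivers) and a plain average for a vanilla call. All parameter values below are invented; note that the simulated volatility may change sign, which, as the abstract argues, is unproblematic in SSM.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stein-Stein-type dynamics (illustrative parameters only):
        #   dS/S = r dt + sigma dW1,  d(sigma) = kappa (theta - sigma) dt + alpha dW2
        S0, strike, r, T = 100.0, 100.0, 0.02, 1.0
        kappa, theta, alpha, sig0, rho = 4.0, 0.2, 0.1, 0.2, -0.5

        n_paths, n_steps = 100_000, 250
        dt = T / n_steps
        S = np.full(n_paths, S0)
        sig = np.full(n_paths, sig0)
        for _ in range(n_steps):
            z1 = rng.normal(size=n_paths)
            z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.normal(size=n_paths)
            S *= np.exp((r - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z1)  # log-Euler
            sig += kappa * (theta - sig) * dt + alpha * np.sqrt(dt) * z2   # OU step
            # sigma may go negative here; harmless in SSM, as the abstract argues

        price = np.exp(-r * T) * np.maximum(S - strike, 0.0).mean()
        print("Monte Carlo call price ~", price)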

  18. The Analysis of the Patterns of Radiation-Induced DNA Damage Foci by a Stochastic Monte Carlo Model of DNA Double Strand Breaks Induction by Heavy Ions and Image Segmentation Software

    NASA Technical Reports Server (NTRS)

    Ponomarev, Artem; Cucinotta, F.

    2011-01-01

    Purpose: To create a generalized mechanistic model of DNA damage in human cells that will generate analytical and image data corresponding to experimentally observed DNA damage foci and will help to improve the experimental foci yields by simulating spatial foci patterns and resolving problems with quantitative image analysis. Material and Methods: The analysis of patterns of RIFs (radiation-induced foci) produced by low- and high-LET (linear energy transfer) radiation was conducted by using a Monte Carlo model that combines the heavy ion track structure with characteristics of the human genome on the level of chromosomes. The foci patterns were also simulated in the maximum projection plane for flat nuclei. Some data analysis was done with the help of image segmentation software that identifies individual classes of RIFs and colocalized RIFs, which is of importance to some experimental assays that assign DNA damage a dual phosphorescent signal. Results: The model predicts the spatial and genomic distributions of DNA DSBs (double strand breaks) and associated RIFs in a human cell nucleus for a particular dose of either low- or high-LET radiation. We used the model to perform analyses for different irradiation scenarios. In the beam-parallel-to-the-disk-of-a-flattened-nucleus scenario we found that the foci appeared to be merged due to their high density, while in the perpendicular-beam scenario the foci appeared as one bright spot per hit. The statistics and spatial distribution of regions of densely arranged foci, termed DNA foci chains, were predicted numerically using this model. Another analysis was done to evaluate the number of ion hits per nucleus, which were visible from streaks of closely located foci. In another analysis, our image segmentation software determined foci yields directly from images with single-class or colocalized foci. Conclusions: We showed that DSB clustering needs to be taken into account to determine the true DNA damage foci yield, which helps to

  19. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  20. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  1. Algebraic, geometric, and stochastic aspects of genetic operators

    NASA Technical Reports Server (NTRS)

    Foo, N. Y.; Bosworth, J. L.

    1972-01-01

    Genetic algorithms for function optimization employ genetic operators patterned after those observed in search strategies employed in natural adaptation. Two of these operators, crossover and inversion, are interpreted in terms of their algebraic and geometric properties. Stochastic models of the operators are developed which are employed in Monte Carlo simulations of their behavior.

  2. Stochastic Collocation Method for Three-dimensional Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Shi, L.; Zhang, D.

    2008-12-01

    The stochastic collocation method (SCM) has recently gained extensive attention in several disciplines. The numerical implementation of SCM only requires repetitive runs of an existing deterministic solver or code, as in Monte Carlo simulation, but it is generally much more efficient than the Monte Carlo method. In this paper, the stochastic collocation method is used to efficiently quantify the uncertainty of three-dimensional groundwater flow. We introduce the basic principles of common collocation methods, i.e., the tensor product collocation method (TPCM), Smolyak collocation method (SmCM), Stroud-2 collocation method (StCM), and probabilistic collocation method (PCM). Their accuracy, computational cost, and limitations are discussed. Illustrative examples reveal that the seamless combination of collocation techniques and existing simulators makes it possible for the new framework to handle complex stochastic problems efficiently.
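
    For a single random input, the collocation idea reduces to a Gaussian quadrature rule: run the deterministic model at a handful of nodes and weight the outputs, rather than at millions of random samples. The sketch below uses an invented one-dimensional model output as a stand-in for a groundwater flow solver and compares a seven-node probabilists' Gauss-Hermite rule against brute-force Monte Carlo.

        import numpy as np

        rng = np.random.default_rng(0)

        # Model output with one standard-normal input (stand-in for a solver)
        g = lambda xi: np.exp(0.3 * xi) + 0.1 * xi**2

        # Probabilists' Gauss-Hermite rule integrates against exp(-x^2/2)
        nodes, weights = np.polynomial.hermite_e.hermegauss(7)
        w = weights / np.sqrt(2.0 * np.pi)   # normalize to a probability measure
        mean_c = np.sum(w * g(nodes))
        var_c = np.sum(w * g(nodes) ** 2) - mean_c**2

        xi = rng.normal(size=1_000_000)      # Monte Carlo reference
        print(mean_c, g(xi).mean())          # 7 model runs vs 10^6 model runs
        print(var_c, g(xi).var())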

  3. Digital simulation and modeling of nonlinear stochastic systems

    SciTech Connect

    Richardson, J M; Rowland, J R

    1981-04-01

    Digitally generated solutions of nonlinear stochastic systems are not unique but depend critically on the numerical integration algorithm used. Some theoretical and practical implications of this dependence are examined. The Ito-Stratonovich controversy concerning the solution of nonlinear stochastic systems is shown to be more than a theoretical debate on maintaining Markov properties as opposed to utilizing the computational rules of ordinary calculus. The theoretical arguments give rise to practical considerations in the formation and solution of discrete models from continuous stochastic systems. Well-known numerical integration algorithms are shown not only to provide different solutions for the same stochastic system but also to correspond to different stochastic integral definitions. These correspondences are proved by considering first and second moments of solutions that result from different integration algorithms and then comparing the moments to those arising from various stochastic integral definitions. This algorithm-dependence of solutions is in sharp contrast to the deterministic and linear stochastic cases in which unique solutions are determined by any convergent numerical algorithm. Consequences of the relationship between stochastic system solutions and simulation procedures are presented for a nonlinear filtering example. Monte Carlo simulations and statistical tests are applied to the example to illustrate the determining role which computational procedures play in generating solutions.
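
    The algorithm-dependence the authors describe is easy to reproduce: for multiplicative noise, the Euler-Maruyama scheme converges to the Ito solution while the stochastic Heun (predictor-corrector) scheme converges to the Stratonovich one, so two reasonable algorithms give different first moments for the same formal equation. A minimal sketch with invented coefficients:

        import numpy as np

        rng = np.random.default_rng(0)

        # dX = b X dW with X(0) = 1.  Ito solution: E[X_T] = 1.
        # Stratonovich solution: X_T = exp(b W_T), so E[X_T] = exp(b^2 T / 2).
        b, T, n_steps, n_paths = 1.0, 1.0, 1000, 50_000
        dt = T / n_steps
        x_em = np.ones(n_paths)     # Euler-Maruyama (Ito)
        x_heun = np.ones(n_paths)   # Heun (Stratonovich)
        for _ in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt), n_paths)
            x_em += b * x_em * dw
            pred = x_heun + b * x_heun * dw              # predictor
            x_heun += 0.5 * b * (x_heun + pred) * dw     # trapezoidal corrector
        # same equation, different algorithms: ~1.0 vs ~exp(1/2) ~ 1.65
        print(x_em.mean(), x_heun.mean())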

  4. Sensitivity Analysis and Stochastic Simulations of Non-equilibrium Plasma Flow

    SciTech Connect

    Lin, Guang; Karniadakis, George E.

    2009-11-05

    We study parametric uncertainties involved in plasma flows and apply stochastic sensitivity analysis to rank the importance of all inputs to guide large-scale stochastic simulations. Specifically, we employ different gradient-based sensitivity methods, namely Morris, the multi-element probabilistic collocation method (ME-PCM) on sparse grids, Quasi-Monte Carlo, and Monte Carlo methods. These approaches go beyond the standard "One-At-a-Time" sensitivity analysis and provide a measure of the nonlinear interaction effects for the uncertain inputs. The objective is to perform systematic stochastic simulations of plasma flows treating as stochastic processes only the inputs with the highest sensitivity index, hence reducing substantially the computational cost. Two plasma flow examples are presented to demonstrate the capability and efficiency of the stochastic sensitivity analysis. The first one is a two-fluid model in a shock tube while the second one is a one-fluid/two-temperature model in flow past a cylinder.

  5. Fluctuations as stochastic deformation.

    PubMed

    Kazinski, P O

    2008-04-01

    A notion of stochastic deformation is introduced and the corresponding algebraic deformation procedure is developed. This procedure is analogous to the deformation of an algebra of observables like deformation quantization, but for an imaginary deformation parameter (the Planck constant). This method is demonstrated on diverse relativistic and nonrelativistic models with finite and infinite degrees of freedom. It is shown that under stochastic deformation the model of a nonrelativistic particle interacting with the electromagnetic field on a curved background passes into the stochastic model described by the Fokker-Planck equation with the diffusion tensor being the inverse metric tensor. The first stochastic correction to the Newton equations for this system is found. The Klein-Kramers equation is also derived as the stochastic deformation of a certain classical model. Relativistic generalizations of the Fokker-Planck and Klein-Kramers equations are obtained by applying the procedure of stochastic deformation to appropriate relativistic classical models. The analog of the Fokker-Planck equation associated with the stochastic Lorentz-Dirac equation is derived too. The stochastic deformation of the models of a free scalar field and an electromagnetic field is investigated. It turns out that in the latter case the obtained stochastic model describes a fluctuating electromagnetic field in a transparent medium. PMID:18517590

  6. Fluctuations as stochastic deformation

    NASA Astrophysics Data System (ADS)

    Kazinski, P. O.

    2008-04-01

    A notion of stochastic deformation is introduced and the corresponding algebraic deformation procedure is developed. This procedure is analogous to the deformation of an algebra of observables like deformation quantization, but for an imaginary deformation parameter (the Planck constant). This method is demonstrated on diverse relativistic and nonrelativistic models with finite and infinite degrees of freedom. It is shown that under stochastic deformation the model of a nonrelativistic particle interacting with the electromagnetic field on a curved background passes into the stochastic model described by the Fokker-Planck equation with the diffusion tensor being the inverse metric tensor. The first stochastic correction to the Newton equations for this system is found. The Klein-Kramers equation is also derived as the stochastic deformation of a certain classical model. Relativistic generalizations of the Fokker-Planck and Klein-Kramers equations are obtained by applying the procedure of stochastic deformation to appropriate relativistic classical models. The analog of the Fokker-Planck equation associated with the stochastic Lorentz-Dirac equation is derived too. The stochastic deformation of the models of a free scalar field and an electromagnetic field is investigated. It turns out that in the latter case the obtained stochastic model describes a fluctuating electromagnetic field in a transparent medium.

  7. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-01-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
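
    The univariate case mentioned at the end of the abstract can be illustrated with the standard Wright-Fisher mutation diffusion, whose invariant density is a beta distribution; this is an analogous textbook process, not necessarily the authors' exact coefficients. A crude Euler-Maruyama ensemble (with clipping at the boundaries) relaxes toward the Beta(a, b) moments:

        import numpy as np

        rng = np.random.default_rng(0)

        # Wright-Fisher mutation diffusion (textbook form):
        #   dY = 0.5*(a*(1 - Y) - b*Y) dt + sqrt(Y*(1 - Y)) dW
        # Its invariant density is Beta(a, b).
        a, b = 3.0, 2.0
        dt, n_steps, n_samples = 2e-3, 5_000, 20_000

        y = rng.uniform(0.0, 1.0, n_samples)   # arbitrary initial ensemble
        for _ in range(n_steps):
            drift = 0.5 * (a * (1.0 - y) - b * y)
            y += drift * dt + np.sqrt(y * (1.0 - y) * dt) * rng.normal(size=n_samples)
            y = np.clip(y, 0.0, 1.0)           # crude boundary handling for Euler
        print(y.mean(), a / (a + b))                        # Beta mean: 0.6
        print(y.var(), a * b / ((a + b)**2 * (a + b + 1)))  # Beta variance: 0.04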

  8. Stochastic Simulation Tool for Aerospace Structural Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.; Moore, David F.

    2006-01-01

    Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly being able to identify design input variables whose variability causes the most influence in response output parameters.

  9. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be made a factor of 50 more efficient by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  10. Monte Carlo Example Programs

    Energy Science and Technology Software Center (ESTSC)

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo Mathematical technique for calculating the ground state energy of the hydrogen atom.
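
    In the spirit of VARHATOM (whose source is not reproduced here), a variational Monte Carlo estimate of the hydrogen ground-state energy fits in a few lines: Metropolis sampling of |psi|^2 for the trial wavefunction psi = exp(-alpha*r) and averaging of the local energy, which is exactly -0.5 hartree at alpha = 1. This is an independent sketch, not the ESTSC code.

        import numpy as np

        rng = np.random.default_rng(0)

        # Variational Monte Carlo for hydrogen (atomic units).
        # Local energy for psi = exp(-alpha*r): E_L = -alpha^2/2 + (alpha - 1)/r
        def vmc_energy(alpha, n_steps=100_000, step=0.5):
            pos = np.array([1.0, 0.0, 0.0])
            r = np.linalg.norm(pos)
            e_sum = 0.0
            for _ in range(n_steps):
                trial = pos + rng.uniform(-step, step, 3)
                r_trial = np.linalg.norm(trial)
                # Metropolis test on |psi|^2 = exp(-2*alpha*r)
                if rng.uniform() < np.exp(-2.0 * alpha * (r_trial - r)):
                    pos, r = trial, r_trial
                e_sum += -0.5 * alpha**2 + (alpha - 1.0) / r
            return e_sum / n_steps

        for alpha in (0.8, 1.0, 1.2):
            print(alpha, vmc_energy(alpha))   # minimum (-0.5 hartree) at alpha = 1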

  11. Stochastic robustness of linear control systems

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.; Ryan, Laura E.

    1990-01-01

    A simple numerical procedure for estimating the stochastic robustness of a linear, time-invariant system is described. Monte Carlo evaluation of the system's eigenvalues allows the probability of instability and the related stochastic root locus to be estimated. This definition of robustness is an alternative to existing deterministic definitions that address both structured and unstructured parameter variations directly. The analysis approach treats not only Gaussian parameter uncertainties but also non-Gaussian cases, including uncertain-but-bounded variations. Trivial extensions of the procedure admit alternate discriminants to be considered. Thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions also can be estimated. Results are particularly amenable to graphical presentation.
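
    A minimal sketch of the procedure: sample the uncertain parameters (Gaussian or uncertain-but-bounded), compute the closed-loop eigenvalues, and count the fraction of realizations with a right-half-plane root. The second-order system and scatter laws below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        # Invented second-order system xddot + c*xdot + k*x = 0 with
        # state matrix A = [[0, 1], [-k, -c]] and scattered k and c.
        def a_matrix(k, c):
            return np.array([[0.0, 1.0], [-k, -c]])

        n_trials = 50_000
        unstable = 0
        for _ in range(n_trials):
            k = rng.normal(4.0, 1.5)      # Gaussian parameter scatter
            c = rng.uniform(-0.1, 0.5)    # uncertain-but-bounded scatter
            if np.linalg.eigvals(a_matrix(k, c)).real.max() > 0.0:
                unstable += 1
        print("estimated probability of instability:", unstable / n_trials)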

  12. A Stochastic Employment Problem

    ERIC Educational Resources Information Center

    Wu, Teng

    2013-01-01

    The Stochastic Employment Problem (SEP) is a variation of the Stochastic Assignment Problem which analyzes the scenario in which one assigns balls to boxes. Balls arrive sequentially, each with a binary vector X = (X_1, X_2, ..., X_n) attached, with the interpretation being that if X_i = 1 the ball…

  13. The isolation limits of stochastic vibration

    NASA Technical Reports Server (NTRS)

    Knopse, C. R.; Allaire, P. E.

    1993-01-01

    The vibration isolation problem is formulated as a 1D kinematic problem. The geometry of the stochastic wall trajectories arising from the stroke constraint is defined in terms of their significant extrema. An optimal control solution for the minimum acceleration return path determines a lower bound on platform mean square acceleration. This bound is expressed in terms of the probability density function on the significant maxima and the conditional fourth moment of the first passage time inverse. The first of these is found analytically while the second is found using a Monte Carlo simulation. The rms acceleration lower bound as a function of available space is then determined through numerical quadrature.

  14. Scattering of light by stochastically rough particles

    NASA Technical Reports Server (NTRS)

    Peltoniemi, Jouni I.; Lumme, Kari; Muinonen, Karri; Irvine, William M.

    1989-01-01

    The single particle phase function and the linear polarization for large stochastically deformed spheres have been calculated by Monte Carlo simulation using the geometrical optics approximation. The radius vector of a particle is assumed to obey a bivariate lognormal distribution with three free parameters: mean radius, its standard deviation and the coherence length of the autocorrelation function. All reflections/refractions which include sufficient energy have been included. Real and imaginary parts of the refractive index can be varied without any restrictions. Results and comparisons with some earlier less general theories are presented. Applications of this theory to the photometric properties of atmosphereless bodies and interplanetary dust are discussed.

  15. A Monte Carlo approach to water management

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2012-04-01

    Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that the obtained results may be irrelevant to real-world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization. The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs

  16. Solution of the stochastic control problem in unbounded domains.

    NASA Technical Reports Server (NTRS)

    Robinson, P.; Moore, J.

    1973-01-01

    Bellman's dynamic programming equation for the optimal index and control law for stochastic control problems is a parabolic or elliptic partial differential equation frequently defined in an unbounded domain. Existing methods of solution require bounded domain approximations, the application of singular perturbation techniques or Monte Carlo simulation procedures. In this paper, using the fact that Poisson impulse noise tends to a Gaussian process under certain limiting conditions, a method which achieves an arbitrarily good approximate solution to the stochastic control problem is given. The method uses the two iterative techniques of successive approximation and quasi-linearization and is inherently more efficient than existing methods of solution.

  17. Interaction picture density matrix quantum Monte Carlo.

    PubMed

    Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible. PMID:26233116

  18. Stochastic Processes in Electrochemistry.

    PubMed

    Singh, Pradyumna S; Lemay, Serge G

    2016-05-17

    Stochastic behavior becomes an increasingly dominant characteristic of electrochemical systems as we probe them on the smallest scales. Advances in the tools and techniques of nanoelectrochemistry dictate that stochastic phenomena will become more widely manifest in the future. In this Perspective, we outline the conceptual tools that are required to analyze and understand this behavior. We draw on examples from several specific electrochemical systems where important information is encoded in, and can be derived from, apparently random signals. This Perspective attempts to serve as an accessible introduction to understanding stochastic phenomena in electrochemical systems and outlines why they cannot be understood with conventional macroscopic descriptions. PMID:27120701

  19. Quantum Stochastic Processes

    SciTech Connect

    Spring, William Joseph

    2009-04-13

    We consider quantum analogues of n-parameter stochastic processes, associated integrals and martingale properties extending classical results obtained in [1, 2, 3], and quantum results in [4, 5, 6, 7, 8, 9, 10].

  20. Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming

    SciTech Connect

    Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-05-23

    This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule, with lower expected energy bills given fuel cell outages and potential savings exceeding 6%.

  1. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

  2. Dynamics of Double Stochastic Operators

    NASA Astrophysics Data System (ADS)

    Saburov, Mansoor

    2016-03-01

    A double stochastic operator is a generalization of a double stochastic matrix. In this paper, we study the dynamics of double stochastic operators. We give a criterion for the regularity of a double stochastic operator in terms of the absence of periodic points. We provide examples to show that, in general, a trajectory of a double stochastic operator may converge to any interior point of the simplex.

  3. A Stochastic Cratering Model for Asteroid Surfaces

    NASA Technical Reports Server (NTRS)

    Richardson, J. E.; Melosh, H. J.; Greenberg, R. J.

    2005-01-01

    The observed cratering records on asteroid surfaces (four so far: Gaspra, Ida, Mathilde, and Eros [1-4]) provide us with important clues to their past bombardment histories. Previous efforts toward interpreting these records have led to two basic modeling styles for reproducing the statistics of the observed crater populations. The first, and most direct, method is to use Monte Carlo techniques [5] to stochastically populate a matrix-model test surface with craters as a function of time [6,7]. The second method is to use a more general, parameterized approach to duplicate the statistics of the observed crater population [8,9]. In both methods, several factors must be included beyond the simple superposing of circular features: (1) crater erosion by subsequent impacts, (2) infilling of craters by impact ejecta, and (3) crater degradation and erasure due to the seismic effects of subsequent impacts. Here we present an updated Monte Carlo (stochastic) modeling approach, designed specifically with small- to medium-sized asteroids in mind.

  4. Applying the stochastic Galerkin method to epidemic models with uncertainty in the parameters.

    PubMed

    Harman, David B; Johnston, Peter R

    2016-07-01

    Parameters in modelling are not always known with absolute certainty. In epidemic modelling, this is true of many of the parameters. It is important for this uncertainty to be included in any model. This paper looks at using the stochastic Galerkin method to solve an SIR model with uncertainty in the parameters. The results obtained from the stochastic Galerkin method are then compared with results obtained through Monte Carlo sampling. The computational cost of each method is also compared. It is shown that the stochastic Galerkin method produces good results, even at low order expansions, that are much less computationally expensive than Monte Carlo sampling. It is also shown that the stochastic Galerkin method does not always converge and this non-convergence is explored. PMID:27091743

  5. Renormalization of stochastic lattice models: basic formulation.

    PubMed

    Haselwandter, Christoph A; Vvedensky, Dimitri D

    2007-10-01

    We describe a general method for the multiscale analysis of stochastic lattice models. Beginning with a lattice Langevin formulation of site fluctuations, we derive stochastic partial differential equations by regularizing the transition rules of the model. Subsequent coarse graining is accomplished by calculating renormalization-group (RG) trajectories from initial conditions determined by the regularized atomistic models. The RG trajectories correspond to hierarchies of continuum equations describing lattice models over expanding length and time scales. These continuum equations retain a quantitative connection over different scales, as well as to the underlying atomistic dynamics. This provides a systematic method for the derivation of continuum equations from the transition rules of lattice models for any length and time scales. As an illustration we consider the one-dimensional (1D) Wolf-Villain (WV) model [Europhys. Lett. 13, 389 (1990)]. The RG analysis of this model, which we develop in detail, is generic and can be applied to a wide range of conservative lattice models. The RG trajectory of the 1D WV model shows a complex crossover sequence of linear and nonlinear stochastic differential equations, which is in excellent agreement with kinetic Monte Carlo simulations of this model. We conclude by discussing possible applications of the multiscale method described here to other nonequilibrium systems. PMID:17994944

  6. Monte Carlo Form-Finding Method for Tensegrity Structures

    NASA Astrophysics Data System (ADS)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.

  7. An adaptive high-dimensional stochastic model representation technique for the solution of stochastic partial differential equations

    SciTech Connect

    Ma Xiang; Zabaras, Nicholas

    2010-05-20

    A computational methodology is developed to address the solution of high-dimensional stochastic problems. It utilizes the high-dimensional model representation (HDMR) technique in the stochastic space to represent the model output as a finite hierarchical correlated function expansion in terms of the stochastic inputs, starting from lower-order to higher-order component functions. HDMR is efficient at capturing the high-dimensional input-output relationship such that the behavior of many physical systems can be modeled to good accuracy by only the first few lower-order terms. An adaptive version of HDMR is also developed to automatically detect the important dimensions and construct higher-order terms using only the important dimensions. The newly developed adaptive sparse grid collocation (ASGC) method is incorporated into HDMR to solve the resulting sub-problems. By integrating HDMR and ASGC, it is computationally possible to construct a low-dimensional stochastic reduced-order model of the high-dimensional stochastic problem and easily perform various statistical analyses on the output. Several numerical examples involving elementary mathematical functions and fluid mechanics problems are considered to illustrate the proposed method. The cases examined show that the method provides accurate results for stochastic dimensionality as high as 500, even with large input variability. The efficiency of the proposed method is examined by comparing with Monte Carlo (MC) simulation.

  8. A probabilistic lower bound for two-stage stochastic programs

    SciTech Connect

    Dantzig, G.B.; Infanger, G.

    1995-11-01

    In the framework of Benders decomposition for two-stage stochastic linear programs, the authors estimate the coefficients and right-hand sides of the cutting planes using Monte Carlo sampling. The authors present a new theory for estimating a lower bound for the optimal objective value and they compare (using various test problems whose true optimal value is known) the predicted versus the observed rate of coverage of the optimal objective by the lower bound confidence interval.

  9. MonteCUBES

    SciTech Connect

    Blennow, Mattias

    2010-03-30

    We introduce the software package MonteCUBES, which is designed to easily and effectively perform Markov Chain Monte Carlo simulations for analyzing neutrino oscillation experiments. We discuss the methods used in the software as well as why we believe that it is particularly useful for simulating new physics effects.

  10. Stochastic modeling of driver behavior by Langevin equations

    NASA Astrophysics Data System (ADS)

    Langner, Michael; Peinke, Joachim

    2015-06-01

    A procedure based on stochastic Langevin equations is presented, showing how a stochastic model of driver behavior can be estimated directly from given data. The Langevin analysis allows the separation of a given data set into a stochastic diffusion field and a deterministic drift field. From the drift field a potential can be derived. In particular, the method is applied here to driving data from a simulator. We overcome typical problems like varying sampling rates, low noise levels, low data amounts, inefficient coordinate systems, and non-stationary situations. From the drift and diffusion vector fields estimated from the data, we show different ways to set up Monte Carlo simulations of the driver behavior.
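
    The core of such a Langevin analysis is the estimation of drift and diffusion from conditional moments of the increments (the first two Kramers-Moyal coefficients). The sketch below applies this to a synthetic Ornstein-Uhlenbeck signal, where the true fields are known; real driving data would add the complications (varying sampling rates, non-stationarity) listed in the abstract.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic signal with known answer: Ornstein-Uhlenbeck process
        #   dx = -k*x dt + sqrt(2*D) dW
        k, D, dt, n = 2.0, 0.5, 1e-3, 1_000_000
        x = np.empty(n)
        x[0] = 0.0
        kicks = rng.normal(0.0, np.sqrt(2.0 * D * dt), n - 1)
        for i in range(n - 1):
            x[i + 1] = x[i] - k * x[i] * dt + kicks[i]

        # Drift D1(x) = <dx | x>/dt and diffusion D2(x) = <dx^2 | x>/(2 dt)
        bins = np.linspace(-1.0, 1.0, 21)
        idx = np.digitize(x[:-1], bins)
        dx = np.diff(x)
        for j in range(1, len(bins)):
            sel = idx == j
            if sel.sum() > 1000:
                xc = 0.5 * (bins[j - 1] + bins[j])
                d1 = dx[sel].mean() / dt
                d2 = (dx[sel] ** 2).mean() / (2.0 * dt)
                print(f"x={xc:+.2f}  D1={d1:+.2f} (true {-k * xc:+.2f})"
                      f"  D2={d2:.2f} (true {D:.2f})")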

  11. Clustering of extreme events in typical stochastic models

    NASA Astrophysics Data System (ADS)

    Mystegniotis, Antonios; Vasilaki, Vasileia; Pappa, Ioanna; Curceac, Stelian; Saltouridou, Despina; Efthimiou, Nikos; Papatsoutsos, Giannis; Papalexiou, Simon Michael; Koutsoyiannis, Demetris

    2013-04-01

    We study the clustering properties of extreme events as produced by typical stochastic models and compare the results with those of observed data. Specifically, the stochastic models that we use are the AR(1), AR(2), ARMA(1,1), as well as the Hurst-Kolmogorov model. In terms of data, we use instrumental and proxy hydroclimatic time series. To quantify clustering we study the multi-scale properties of each process, in particular the variation of the standard deviation with time scale, as well as the frequencies of similar events (e.g. those exceeding a certain threshold) with time scale. To calculate these properties we use analytical methods where possible, or Monte Carlo simulation. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.

  12. Spatial Correlations in Monte Carlo Criticality Simulations

    NASA Astrophysics Data System (ADS)

    Dumonteil, E.; Malvagi, F.; Zoia, A.; Mazzolo, A.; Artusio, D.; Dieudonné, C.; De Mulatier, C.

    2014-06-01

    Temporal correlations arising in Monte Carlo criticality codes have focused the attention of both developers and practitioners for a long time. Those correlations affect the evaluation of tallies of loosely coupled systems, where the system's typical size is very large compared to the diffusion/absorption length scale of the neutrons. These time correlations are closely related to spatial correlations, both variables being linked by the transport equation. This paper therefore addresses the question of diagnosing spatial correlations in Monte Carlo criticality simulations. To that end, we propose a spatial correlation function well suited to Monte Carlo simulations and demonstrate its use in the simulation of a fuel pin-cell. The results are discussed, modeled, and interpreted using the tools of branching processes from statistical mechanics. A mechanism called "neutron clustering", which affects such simulations, is discussed in this framework.
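
    A toy critical branching random walk, not the paper's transport model, already reproduces the clustering signature qualitatively; everything below (population size, diffusion step, population control) is an illustrative assumption.

    import numpy as np

    def branching_walk(n0=1000, steps=200, sigma=0.01, rng=None):
        """Critical branching Brownian motion on the unit interval with
        periodic boundaries: each step, every particle diffuses, then
        dies or splits in two with equal probability 1/2; the population
        is resampled back to n0, as in a power iteration."""
        rng = rng or np.random.default_rng()
        x = rng.uniform(0, 1, n0)
        for _ in range(steps):
            x = (x + sigma * rng.standard_normal(x.size)) % 1.0
            kids = rng.integers(0, 2, x.size) * 2    # 0 or 2 offspring
            x = np.repeat(x, kids)
            if x.size == 0:
                break
            x = rng.choice(x, n0)                    # population control
        return x

    # The pair-distance histogram of the surviving population shows an
    # excess at short separations, the 'neutron clustering' signature.
    x = branching_walk()
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, 1 - d)[np.triu_indices(x.size, 1)]
    hist, _ = np.histogram(d, bins=25, range=(0, 0.5))
    print(hist[:5], hist[-5:])   # small separations dominate vs uniform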

  13. Stochastic Thermal Convection

    NASA Astrophysics Data System (ADS)

    Venturi, Daniele

    2005-11-01

    Stochastic bifurcations and stability of natural convective flows in 2d and 3d enclosures are investigated by the multi-element generalized polynomial chaos (ME-gPC) method (Xiu and Karniadakis, SISC, vol. 24, 2002). The Boussinesq approximation for the variation of physical properties is assumed. The stability analysis is first carried out in a deterministic sense, to determine steady state solutions and primary and secondary bifurcations. Stochastic simulations are then conducted around discontinuities and transitional regimes. It is found that these highly non-linear phenomena can be efficiently captured by the ME-gPC method. Finally, the main findings of the stochastic analysis and their implications for heat transfer will be discussed.

  14. Stochastic Gauss equations

    NASA Astrophysics Data System (ADS)

    Pierret, Frédéric

    2016-02-01

    We derived the equations of Celestial Mechanics governing the variation of the orbital elements under a stochastic perturbation, thereby generalizing the classical Gauss equations. Explicit formulas are given for the semimajor axis, the eccentricity, the inclination, the longitude of the ascending node, the pericenter angle, and the mean anomaly, which are expressed in terms of the angular momentum vector H per unit of mass and the energy E per unit of mass. Together, these formulas are called the stochastic Gauss equations, and they are illustrated numerically on an example from satellite dynamics.

  15. Stochastic modeling of rainfall

    SciTech Connect

    Guttorp, P.

    1996-12-31

    We review several approaches in the literature for stochastic modeling of rainfall, and discuss some of their advantages and disadvantages. While stochastic precipitation models have been around at least since the 1850s, the last two decades have seen an increased development of models based (more or less) on the physical processes involved in precipitation. There are interesting questions of scale and measurement that pertain to these modeling efforts. Recent modeling efforts aim at including meteorological variables, and may be useful for regional down-scaling of general circulation models.

  16. STOCHASTIC COOLING FOR BUNCHED BEAMS.

    SciTech Connect

    BLASKIEWICZ, M.

    2005-05-16

    Problems associated with bunched beam stochastic cooling are reviewed. A longitudinal stochastic cooling system for RHIC is under construction and has been partially commissioned. The state of the system and future plans are discussed.

  17. MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D

    2013-01-01

    Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single-level approximation. The numerical results will highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.

  18. Stochastic entrainment of a stochastic oscillator.

    PubMed

    Wang, Guanyu; Peskin, Charles S

    2015-11-01

    In this work, we consider a stochastic oscillator described by a discrete-state continuous-time Markov chain, in which the states are arranged in a circle, and there is a constant probability per unit time of jumping from one state to the next in a specified direction around the circle. At each of a sequence of equally spaced times, the oscillator has a specified probability of being reset to a particular state. The focus of this work is the entrainment of the oscillator by this periodic but stochastic stimulus. We consider a distinguished limit, in which (i) the number of states of the oscillator approaches infinity, as does the probability per unit time of jumping from one state to the next, so that the natural mean period of the oscillator remains constant, (ii) the resetting probability approaches zero, and (iii) the period of the resetting signal approaches a multiple, by a ratio of small integers, of the natural mean period of the oscillator. In this distinguished limit, we use analytic and numerical methods to study the extent to which entrainment occurs. PMID:26651734
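
    The model is straightforward to simulate directly. The sketch below, with illustrative parameters rather than the paper's distinguished limit, records the oscillator phase at each reset opportunity; under entrainment the phase histogram concentrates near the reset state.

    import numpy as np

    def ring_oscillator(n_states=100, rate=100.0, p_reset=0.05,
                        t_reset=1.0, reset_state=0, t_end=2000.0, rng=None):
        """Discrete-state oscillator on a ring: jumps s -> s+1 mod n at a
        constant rate, so the natural mean period is n_states/rate.  At
        times t_reset, 2*t_reset, ... it is reset to reset_state with
        probability p_reset.  Returns the state observed just before
        each reset opportunity."""
        rng = rng or np.random.default_rng()
        s, t, observed = 0, 0.0, []
        for k in range(1, int(t_end / t_reset) + 1):
            t_next = k * t_reset
            while True:                    # evolve the Markov chain to t_next
                dt = rng.exponential(1.0 / rate)
                if t + dt > t_next:
                    t = t_next
                    break
                t += dt
                s = (s + 1) % n_states
            observed.append(s)
            if rng.random() < p_reset:     # stochastic resetting stimulus
                s = reset_state
        return np.array(observed)

    # With t_reset equal to the natural mean period (here 1.0), the phase
    # histogram peaks near the reset state; detune t_reset to degrade it.
    phases = ring_oscillator()
    print(np.histogram(phases, bins=10, range=(0, 100))[0])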

  19. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  20. Application of tabu search to deterministic and stochastic optimization problems

    NASA Astrophysics Data System (ADS)

    Gurtuna, Ozgur

    During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well established and significant progress has been made on the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites in Earth orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is

  1. Stochastic Models of Human Growth.

    ERIC Educational Resources Information Center

    Goodrich, Robert L.

    Stochastic difference equations of the Box-Jenkins form provide an adequate family of models on which to base the stochastic theory of human growth processes, but conventional time series identification methods do not apply to available data sets. A method to identify structure and parameters of stochastic difference equation models of human…

  2. Elementary stochastic cooling

    SciTech Connect

    Tollestrup, A.V.; Dugan, G

    1983-12-01

    Major headings in this review include: proton sources; antiproton production; antiproton sources and Liouville, the role of the Debuncher; transverse stochastic cooling, time domain; the accumulator; frequency domain; pickups and kickers; Fokker-Planck equation; calculation of constants in the Fokker-Planck equation; and beam feedback. (GHT)

  3. Focus on stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Van den Broeck, Christian; Sasa, Shin-ichi; Seifert, Udo

    2016-02-01

    We introduce the thirty papers collected in this ‘focus on’ issue. The contributions explore conceptual issues within and around stochastic thermodynamics, use this framework for the theoretical modeling and experimental investigation of specific systems, and provide further perspectives on and for this active field.

  4. Stochastic finite-difference time-domain

    NASA Astrophysics Data System (ADS)

    Smith, Steven Michael

    2011-12-01

    This dissertation presents the derivation of an approximate method to determine the mean and the variance of electromagnetic fields in the body using the Finite-Difference Time-Domain (FDTD) method. Unlike Monte Carlo analysis, which requires repeated FDTD simulations, this method directly computes the variance of the fields at every point in space at every sample of time in the simulation. This Stochastic FDTD simulation (S-FDTD) has at its root a new wave called the Variance wave, which is computed in the time domain along with the mean properties of the model space in the FDTD simulation. The Variance wave depends on the electromagnetic fields, the reflections and transmissions through the different dielectrics, and the variances of the electrical properties of the surrounding materials. Like the electromagnetic fields, the Variance wave begins at zero (there is no variance before the source is turned on) and is computed in the time domain until all fields reach steady state. This process is performed in a fraction of the time of a Monte Carlo simulation and yields the first two statistical parameters (mean and variance). The mean of the field is computed using the traditional FDTD equations. The variance is computed by approximating the correlation coefficients between the constitutive properties and then using the S-FDTD equations. The impetus for this work was the simulation time required to perform 3D Specific Absorption Rate (SAR) FDTD analysis of power absorption in a human head model due to the proximity of a cell phone in use. In many instances, Monte Carlo analysis is not performed because of the lengthy simulation times required. With the development of S-FDTD, these statistical analyses can be performed, providing valuable statistical information in a small fraction of the time it would take to perform a Monte Carlo analysis.

  5. Monte Carlo variance reduction

    NASA Technical Reports Server (NTRS)

    Byrn, N. R.

    1980-01-01

    Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.

  6. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution time of 500 realizations is reduced to 3% of that of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. PMID:22823593
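
    The batch-processing pattern can be sketched with Python's standard library standing in for the Java Parallel Processing Framework; run_realization is a hypothetical placeholder for one MODFLOW forward run.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def run_realization(seed):
        """Placeholder for one forward model run (e.g., one stochastic
        conductivity field fed to a groundwater solver).  Here it just
        returns a synthetic summary statistic for the given seed."""
        rng = np.random.default_rng(seed)
        logk = rng.normal(-5.0, 1.0, size=(64, 64))   # random log-conductivity
        return float(np.exp(logk).mean())             # stand-in model output

    if __name__ == "__main__":
        seeds = range(500)                            # 500 realizations
        with ProcessPoolExecutor() as pool:           # one task per realization
            results = list(pool.map(run_realization, seeds))
        print(np.mean(results), np.percentile(results, [5, 95]))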

  7. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.

    PubMed

    Neftci, Emre O; Pedroni, Bruno U; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert

    2016-01-01

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware. PMID:27445650

  8. Stochastic analysis of transport in tubes with rough walls

    SciTech Connect

    Tartakovsky, Daniel M. (E-mail: dmt@lanl.gov); Xiu, Dongbin (E-mail: dxiu@math.purdue.edu)

    2006-09-01

    Flow and transport in tubes with rough surfaces play an important role in a variety of applications. Often the topology of such surfaces cannot be accurately described in all of its relevant details due to either insufficient data or measurement errors or both. In such cases, this topological uncertainty can be efficiently handled by treating rough boundaries as random fields, so that an underlying physical phenomenon is described by deterministic or stochastic differential equations in random domains. To deal with this class of problems, we use a computational framework, which is based on stochastic mappings to transform the original deterministic/stochastic problem in a random domain into a stochastic problem in a deterministic domain. The latter problem has been studied more extensively and existing analytical/numerical techniques can be readily applied. In this paper, we employ both a generalized polynomial chaos and Monte Carlo simulations to solve the transformed stochastic problem. We use our approach to describe transport of a passive scalar in Stokes' flow and to quantify the corresponding predictive uncertainty.

  9. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

    PubMed Central

    Neftci, Emre O.; Pedroni, Bruno U.; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert

    2016-01-01

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware. PMID:27445650

  10. SCALE Monte Carlo Eigenvalue Methods and New Advancements

    SciTech Connect

    Goluoglu, Sedat; Leppanen, Jaakko; Petrie Jr, Lester M; Dunn, Michael E

    2010-01-01

    The SCALE code system is developed and maintained by Oak Ridge National Laboratory to perform criticality safety, reactor analysis, radiation shielding, and spent fuel characterization for nuclear facilities and transportation/storage package designs. SCALE is a modular code system that includes several codes which use either Monte Carlo or discrete ordinates solution methodologies for solving relevant neutral particle transport equations. This paper describes some of the key capabilities of the Monte Carlo criticality safety codes within the SCALE code system.

  11. Adaptive stochastic cellular automata: Applications

    NASA Astrophysics Data System (ADS)

    Qian, S.; Lee, Y. C.; Jones, R. D.; Barnes, C. W.; Flake, G. W.; O'Rourke, M. K.; Lee, K.; Chen, H. H.; Sun, G. Z.; Zhang, Y. Q.; Chen, D.; Giles, C. L.

    1990-09-01

    The stochastic learning cellular automata model has been applied to the problem of controlling unstable systems. Two example unstable systems are studied and controlled by an adaptive stochastic cellular automata algorithm with an adaptive critic. The reinforcement learning algorithm and the architecture of the stochastic CA controller are presented. Learning to balance a single pole is discussed in detail. Balancing an inverted double pendulum highlights the power of the stochastic CA approach. The stochastic CA model is compared to conventional adaptive control and artificial neural network approaches.

  12. Stochastic computing with biomolecular automata

    NASA Astrophysics Data System (ADS)

    Adar, Rivka; Benenson, Yaakov; Linshiz, Gregory; Rosner, Amit; Tishby, Naftali; Shapiro, Ehud

    2004-07-01

    Stochastic computing has a broad range of applications, yet electronic computers realize its basic step, stochastic choice between alternative computation paths, in a cumbersome way. Biomolecular computers use a different computational paradigm and hence afford novel designs. We constructed a stochastic molecular automaton in which stochastic choice is realized by means of competition between alternative biochemical pathways, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. Programmable and autonomous stochastic molecular automata have been shown to perform direct analysis of disease-related molecular indicators in vitro and may have the potential to provide in situ medical diagnosis and cure.

  13. Stochastic Inversion of 2D Magnetotelluric Data

    Energy Science and Technology Software Center (ESTSC)

    2010-07-01

    The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we consider the locations and resistivities of the regions formed by the interfaces as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of 2D conductivity structure. Those unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation; Software requirements: C and Fortran; Operating systems/versions: Linux/Unix or Windows

  14. Stochastic Inversion of 2D Magnetotelluric Data

    SciTech Connect

    Chen, Jinsong

    2010-07-01

    The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we consider the locations and resistivities of the regions formed by the interfaces as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of 2D conductivity structure. Those unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation; Software requirements: C and Fortran; Operating systems/versions: Linux/Unix or Windows
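
    A minimal sketch of the MCMC ingredient, assuming a random-walk Metropolis sampler and a hypothetical two-parameter forward model in place of the adaptive finite-element MT solver:

    import numpy as np

    def log_post(theta, data, noise=0.1):
        """Toy log-posterior for a two-layer model: theta = (interface
        depth, log-resistivity).  The linear 'forward model' below is a
        hypothetical stand-in for the finite-element MT solver."""
        depth, logr = theta
        if not (0.0 < depth < 10.0 and -2.0 < logr < 4.0):
            return -np.inf                            # uniform prior bounds
        pred = depth * 0.3 + logr * 0.7               # stand-in response
        return -0.5 * np.sum((data - pred) ** 2) / noise ** 2

    def metropolis(log_post, theta0, data, n=20000, step=0.2, rng=None):
        rng = rng or np.random.default_rng()
        theta = np.asarray(theta0, float)
        lp = log_post(theta, data)
        chain = np.empty((n, theta.size))
        for i in range(n):
            prop = theta + step * rng.standard_normal(theta.size)
            lpp = log_post(prop, data)
            if np.log(rng.random()) < lpp - lp:       # accept/reject
                theta, lp = prop, lpp
            chain[i] = theta
        return chain

    chain = metropolis(log_post, theta0=(5.0, 1.0), data=np.array([3.0]))
    burn = chain[5000:]
    print(burn.mean(axis=0), burn.std(axis=0))        # posterior summaries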

  15. Rarefied gas dynamics using stochastic rotation dynamics

    NASA Astrophysics Data System (ADS)

    Tuzel, Erkan; Ihle, Thomas; Kroll, Daniel M.

    2003-03-01

    In the past two decades, Direct Simulation Monte Carlo (DSMC) has been the dominant predictive tool for rarefied gas dynamics. In the non-hydrodynamic regime, where continuum models fail, particle based methods have been used to model systems ranging from shuttle re-entry problems to mesoscopic flow in MEMS devices. A new method, namely stochastic rotation dynamics (SRD), which utilizes effective multiparticle collisions, will be described. It will be shown that it is possible to get the correct transport coefficients for Argon gas by tuning the collision parameters, namely the collision angle and collision probability. Simulation results comparing DSMC and SRD will be shown for equilibrium relaxation rates and Poiseuille flow. One important feature of SRD is that it coarse-grains the time scale, so that simulations in the transition regime are typically five to twenty times faster than for DSMC. Benchmarks as a function of Knudsen number will be given, and directions for further research will be discussed.

  16. Edgeworth expansions of stochastic trading time

    NASA Astrophysics Data System (ADS)

    Decamps, Marc; De Schepper, Ann

    2010-08-01

    Under most local and stochastic volatility models, the underlying forward is assumed to be a positive function of a time-changed Brownian motion. This nicely relates the implied volatility smile to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in path integrals to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.

  17. Stochastic-dynamic Modelling of Morphodynamics

    NASA Astrophysics Data System (ADS)

    Eppel, D. P.; Kapitza, H.

    The numerical prediction of coastal sediment motion over time spans of years and decades is hampered by the sediment's ability, when stirred by waves and currents, to often react not uniquely to the external forcing but rather to show some kind of internal dynamics whose characteristics are not directly linked to the external forcing. Analytical stability analyses of the sediment-water system indicate that instabilities of tidally forced sediment layers in shallow seas can occur on spatial scales smaller than, and not related to, the scales of the tidal components. The finite growth of these unstable amplitudes can be described in terms of Ginzburg-Landau equations. Examples are the formation of ripples, sand waves and sand dunes, or the formation of shoreface-connected ridges. Among others, analyses of time series of coastal profiles from Duck, North Carolina, extending over several decades gave evidence for self-organized behaviour, suggesting that some important sediment-water systems can be perceived as dissipative dynamical structures. The consequences of such behaviour for predicting morphodynamics have been pointed out: one would expect that there exist time horizons beyond which predictions in the traditional deterministic sense are not possible. One would have to look for statistical quantities containing information of some relevance, such as phase-space densities of solutions, attractor sets and the like. This contribution is part of an effort to address the prediction problem of morphodynamics through process-oriented models containing stochastic parameterizations for bottom shear stresses, critical shear stresses, etc.; process-based models are used because they are directly related to the physical processes, but in a stochastic form because it is known that the physical processes contain strong stochastic components. The final outcome of such a program would be the generation of an ensemble of solutions by Monte Carlo integrations of the stochastic model.

  18. Stochastic Simulations and Sensitivity Analysis of Plasma Flow

    SciTech Connect

    Lin, Guang; Karniadakis, George E.

    2008-08-01

    For complex physical systems with a large number of random inputs, it is very expensive to perform stochastic simulations for all of the random inputs. Stochastic sensitivity analysis is introduced in this paper to rank the significance of random inputs, providing information on which random inputs have more influence on the system outputs and on the coupling or interaction effects among different random inputs. There are two types of numerical methods in stochastic sensitivity analysis: local and global methods. The local approach, which relies on a partial derivative of output with respect to parameters, is used to measure the sensitivity around a local operating point. When the system has strong nonlinearities and parameters fluctuate within a wide range from their nominal values, local sensitivity does not provide full information to the system operators. On the other hand, the global approach examines the sensitivity over the entire range of the parameter variations. Global screening methods, based on One-At-a-Time (OAT) perturbation of parameters, rank the significant parameters and identify their interactions among a large number of parameters. Several screening methods have been proposed in the literature, e.g., the Morris method, Cotter's method, factorial experimentation, and iterated fractional factorial design. In this paper, the Morris method, the Monte Carlo sampling method, the quasi-Monte Carlo method, and a collocation method based on sparse grids are studied. Additionally, two MHD examples are presented to demonstrate the capability and efficiency of the stochastic sensitivity analysis, which can be used as a pre-screening technique for reducing the dimensionality and hence the cost of stochastic simulations.
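
    A compact rendering of the Morris screening idea described above; the elementary-effect bookkeeping follows the standard recipe, and the test function and settings are illustrative.

    import numpy as np

    def morris_screening(f, d, r=20, levels=8, rng=None):
        """One-at-a-time Morris screening on [0,1]^d: r random
        trajectories, each perturbing the d inputs once in random order.
        Returns mu* (mean |elementary effect|) and sigma per input."""
        rng = rng or np.random.default_rng()
        delta = levels / (2.0 * (levels - 1))         # standard Morris step
        ee = np.zeros((r, d))
        for t in range(r):
            x = rng.integers(0, levels - 1, d) / (levels - 1)  # base point
            y = f(x)
            for i in rng.permutation(d):              # perturb each input once
                x2 = x.copy()
                x2[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
                y2 = f(x2)
                ee[t, i] = (y2 - y) / (x2[i] - x[i])  # elementary effect
                x, y = x2, y2
        return np.abs(ee).mean(axis=0), ee.std(axis=0)  # mu*, sigma

    # Usage: input 0 matters strongly and nonlinearly, input 2 is inert.
    f = lambda x: np.sin(3 * x[0]) + 0.5 * x[1] + 0.0 * x[2]
    mu_star, sigma = morris_screening(f, d=3)
    print(mu_star, sigma)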

  19. Stochastic ice stream dynamics

    NASA Astrophysics Data System (ADS)

    Mantelli, Elisa; Bertagni, Matteo Bernard; Ridolfi, Luca

    2016-08-01

    Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution.

  20. VAWT stochastic wind simulator

    SciTech Connect

    Strickland, J.H.

    1987-04-01

    A stochastic wind simulation for VAWTs (VSTOC) has been developed which yields turbulent wind-velocity fluctuations for rotationally sampled points. This allows three-component wind-velocity fluctuations to be simulated at specified nodal points on the wind-turbine rotor. A first-order convection scheme is used which accounts for the decrease in streamwise velocity as the flow passes through the wind-turbine rotor. The VSTOC simulation is independent of the particular analytical technique used to predict the aerodynamic and performance characteristics of the turbine. The VSTOC subroutine may be used simply as a subroutine in a particular VAWT prediction code or it may be used as a subroutine in an independent processor. The independent processor is used to interact with a version of the VAWT prediction code which is segmented into deterministic and stochastic modules. Using VSTOC in this fashion is very efficient with regard to decreasing computer time for the overall calculation process.

  1. STOCHASTIC COOLING FOR RHIC.

    SciTech Connect

    BLASKIEWICZ, M.; BRENNAN, J.M.; CAMERON, P.; WEI, J.

    2003-05-12

    Emittance growth due to Intra-Beam Scattering significantly reduces the heavy ion luminosity lifetime in RHIC. Stochastic cooling of the stored beam could improve things considerably by counteracting IBS and preventing particles from escaping the rf bucket [1]. High frequency bunched-beam stochastic cooling is especially challenging but observations of Schottky signals in the 4-8 GHz band indicate that conditions are favorable in RHIC [2]. We report here on measurements of the longitudinal beam transfer function carried out with a pickup kicker pair on loan from FNAL TEVATRON. Results imply that for ions a coasting beam description is applicable and we outline some general features of a viable momentum cooling system for RHIC.

  2. Stochastic speculative price.

    PubMed

    Samuelson, P A

    1971-02-01

    Because a commodity like wheat can be carried forward from one period to the next, speculative arbitrage serves to link its prices at different points of time. Since, however, the size of the harvest depends on complicated probability processes impossible to forecast with certainty, the minimal model for understanding market behavior must involve stochastic processes. The present study, on the basis of the axiom that it is the expected rather than the known-for-certain prices which enter into all arbitrage relations and carryover decisions, determines the behavior of price as the solution to a stochastic-dynamic-programming problem. The resulting stationary time series possesses an ergodic state and normative properties like those often observed for real-world bourses. PMID:16591903

  3. Stochastic ice stream dynamics.

    PubMed

    Mantelli, Elisa; Bertagni, Matteo Bernard; Ridolfi, Luca

    2016-08-01

    Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution. PMID:27457960

  4. Patchwork sampling of stochastic differential equations.

    PubMed

    Kürsten, Rüdiger; Behn, Ulrich

    2016-03-01

    We propose a method to sample stationary properties of solutions of stochastic differential equations, which is accurate and efficient if there are rarely visited regions or rare transitions between distinct regions of the state space. The method is based on a complete, nonoverlapping partition of the state space into patches on which the stochastic process is ergodic. On each of these patches we run simulations of the process strictly truncated to the corresponding patch, which allows effective simulations also in rarely visited regions. The correct weight for each patch is obtained by counting the attempted transitions between all different patches. The results are patchworked to cover the whole state space. We extend the concept of truncated Markov chains which is originally formulated for processes which obey detailed balance to processes not fulfilling detailed balance. The method is illustrated by three examples, describing the one-dimensional diffusion of an overdamped particle in a double-well potential, a system of many globally coupled overdamped particles in double-well potentials subject to additive Gaussian white noise, and the overdamped motion of a particle on the circle in a periodic potential subject to a deterministic drift and additive noise. In an appendix we explain how other well-known Markov chain Monte Carlo algorithms can be related to truncated Markov chains. PMID:27078484
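
    A minimal sketch of the patchwork idea for the first example (an overdamped particle in a double-well potential), assuming detailed balance so that the patch weights follow from balancing the attempted-transition counts; the potential, time step, and run length are illustrative.

    import numpy as np

    def truncated_patch_run(x0, lo, hi, n, dt=1e-3, rng=None):
        """Euler-Maruyama for dx = -V'(x) dt + dW with V = (x^2-1)^2/4,
        strictly truncated to the patch [lo, hi]: proposed moves leaving
        the patch are rejected but counted as attempted transitions."""
        rng = rng or np.random.default_rng()
        x, xs, attempts = x0, np.empty(n), 0
        for i in range(n):
            prop = x - x * (x * x - 1.0) * dt + np.sqrt(dt) * rng.standard_normal()
            if lo <= prop <= hi:
                x = prop
            else:
                attempts += 1            # attempted exit; stay inside the patch
            xs[i] = x
        return xs, attempts

    # Two patches of the double well, split at the barrier top x = 0.
    n = 100_000
    left, a_lr = truncated_patch_run(-1.0, -np.inf, 0.0, n)
    right, a_rl = truncated_patch_run(+1.0, 0.0, np.inf, n)
    # Stationary weights from flux balance: w_left * a_lr = w_right * a_rl.
    w_left = a_rl / (a_lr + a_rl)
    w_right = a_lr / (a_lr + a_rl)
    mean = w_left * left.mean() + w_right * right.mean()
    print(w_left, w_right, mean)         # ~0.5, ~0.5, ~0 by symmetry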

  5. Stochastic Event-Driven Molecular Dynamics

    SciTech Connect

    Donev, Aleksandar; Garcia, Alejandro L.; Alder, Berni J.

    2008-02-01

    A novel Stochastic Event-Driven Molecular Dynamics (SEDMD) algorithm is developed for the simulation of polymer chains suspended in a solvent. SEDMD combines event-driven molecular dynamics (EDMD) with the Direct Simulation Monte Carlo (DSMC) method. The polymers are represented as chains of hard-spheres tethered by square wells and interact with the solvent particles with hard-core potentials. The algorithm uses EDMD for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in EDMD, rather, the momentum and energy exchange in the solvent is determined stochastically using DSMC. The coupling between the solvent and the solute is consistently represented at the particle level retaining hydrodynamic interactions and thermodynamic fluctuations. However, unlike full MD simulations of both the solvent and the solute, in SEDMD the spatial structure of the solvent is ignored. The SEDMD algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard-wall subjected to uniform shear. SEDMD closely reproduces results obtained using traditional EDMD simulations with two orders of magnitude greater efficiency. Results question the existence of periodic (cycling) motion of the polymer chain.

  6. Patchwork sampling of stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Kürsten, Rüdiger; Behn, Ulrich

    2016-03-01

    We propose a method to sample stationary properties of solutions of stochastic differential equations, which is accurate and efficient if there are rarely visited regions or rare transitions between distinct regions of the state space. The method is based on a complete, nonoverlapping partition of the state space into patches on which the stochastic process is ergodic. On each of these patches we run simulations of the process strictly truncated to the corresponding patch, which allows effective simulations also in rarely visited regions. The correct weight for each patch is obtained by counting the attempted transitions between all different patches. The results are patchworked to cover the whole state space. We extend the concept of truncated Markov chains which is originally formulated for processes which obey detailed balance to processes not fulfilling detailed balance. The method is illustrated by three examples, describing the one-dimensional diffusion of an overdamped particle in a double-well potential, a system of many globally coupled overdamped particles in double-well potentials subject to additive Gaussian white noise, and the overdamped motion of a particle on the circle in a periodic potential subject to a deterministic drift and additive noise. In an appendix we explain how other well-known Markov chain Monte Carlo algorithms can be related to truncated Markov chains.

  7. Simulating stochastic dynamics using large time steps.

    PubMed

    Corradini, O; Faccioli, P; Orland, H

    2009-12-01

    We present an approach to investigate the long-time stochastic dynamics of multidimensional classical systems in contact with a heat bath. When the potential energy landscape is rugged, the kinetics displays a decoupling of short- and long-time scales, and both molecular dynamics and Monte Carlo (MC) simulations are generally inefficient. Using a field theoretic approach, we perform analytically the average over the short-time stochastic fluctuations. This way, we obtain an effective theory, which generates the same long-time dynamics as the original theory, but has a lower time-resolution power. Such an approach is used to develop an improved version of the MC algorithm, which is particularly suitable to investigate the dynamics of rare conformational transitions. In the specific case of molecular systems at room temperature, we show that the elementary integration time steps used to simulate the effective theory can be chosen a factor of approximately 100 larger than those used in the original theory. Our results are illustrated and tested on a simple system characterized by a rugged energy landscape. PMID:20365123

  8. Phylogenetic Stochastic Mapping Without Matrix Exponentiation

    PubMed Central

    Irvahn, Jan; Minin, Vladimir N.

    2014-01-01

    Phylogenetic stochastic mapping is a method for reconstructing the history of trait changes on a phylogenetic tree relating the species/organisms carrying the trait. State-of-the-art methods assume that the trait evolves according to a continuous-time Markov chain (CTMC) and work well for small state spaces. The computations slow down considerably for larger state spaces (e.g., the space of codons), because current methodology relies on exponentiating CTMC infinitesimal rate matrices, an operation whose computational complexity grows as the size of the CTMC state space cubed. In this work, we introduce a new approach, based on a CTMC technique called uniformization, which does not use matrix exponentiation for phylogenetic stochastic mapping. Our method is based on a new Markov chain Monte Carlo (MCMC) algorithm that targets the distribution of trait histories conditional on the trait data observed at the tips of the tree. The computational complexity of our MCMC method grows as the size of the CTMC state space squared. Moreover, in contrast to competing matrix exponentiation methods, if the rate matrix is sparse, we can leverage this sparsity and increase the computational efficiency of our algorithm further. Using simulated data, we illustrate advantages of our MCMC algorithm and investigate how large the state space needs to be for our method to outperform matrix exponentiation approaches. We show that even on the moderately large state space of codons our MCMC method can be significantly faster than currently used matrix exponentiation methods. PMID:24918812
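
    The uniformization construction at the heart of the method can be sketched for a single branch, without the tree or the conditioning on tip data; the rate matrix below is illustrative.

    import numpy as np

    def sample_ctmc_uniformized(Q, s0, T, rng=None):
        """Sample a CTMC path on [0, T] from rate matrix Q (rows sum to 0)
        starting in state s0, via uniformization: jump times form a
        Poisson process of rate lam >= max_i |Q_ii|, and states follow
        the DTMC with transition matrix P = I + Q/lam (self-jumps
        allowed, then discarded)."""
        rng = rng or np.random.default_rng()
        lam = np.max(-np.diag(Q))
        P = np.eye(Q.shape[0]) + Q / lam
        n_jumps = rng.poisson(lam * T)
        times = np.sort(rng.uniform(0, T, n_jumps))  # jump times given count
        path, s = [(0.0, s0)], s0
        for t in times:
            s = rng.choice(Q.shape[0], p=P[s])
            if s != path[-1][1]:                     # drop virtual self-jumps
                path.append((t, s))
        return path

    # Usage: a 3-state chain; the same construction, applied branch by
    # branch, avoids matrix exponentiation in stochastic mapping.
    Q = np.array([[-1.0,  0.7,  0.3],
                  [ 0.5, -1.2,  0.7],
                  [ 0.2,  0.8, -1.0]])
    print(sample_ctmc_uniformized(Q, s0=0, T=5.0))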

  9. Entropy of stochastic flows

    SciTech Connect

    Dorogovtsev, Andrei A

    2010-06-29

    For sets in a Hilbert space the concept of quadratic entropy is introduced. It is shown that this entropy is finite for the range of a stochastic flow of Brownian particles on R. This implies, in particular, the fact that the total time of the free travel in the Arratia flow of all particles that started from a bounded interval is finite. Bibliography: 10 titles.

  10. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach, and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of the stochastic optimization compared to the classical approach and the simulated pit is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.

  11. Kinetic Monte Carlo models for the study of chemical reactions in the Earth's upper atmosphere

    NASA Astrophysics Data System (ADS)

    Turchak, L. I.; Shematovich, V. I.

    2016-06-01

    A stochastic approach to studying the non-equilibrium chemistry of the Earth's upper atmosphere, developed over a number of years, is presented. Kinetic Monte Carlo models based on this approach are an effective tool for investigating the role of suprathermal particles both in local variations of the atmospheric chemical composition and in the formation of the hot planetary corona.

  12. Stochastic multiscale modeling of polycrystalline materials

    NASA Astrophysics Data System (ADS)

    Wen, Bin

    Mechanical properties of engineering materials are sensitive to the underlying random microstructure. Quantification of mechanical property variability induced by microstructure variation is essential for the prediction of extreme properties and microstructure-sensitive design of materials. Recent advances in high throughput characterization of polycrystalline microstructures have resulted in huge data sets of microstructural descriptors and image snapshots. To utilize these large scale experimental data for computing the resulting variability of macroscopic properties, appropriate mathematical representation of microstructures is needed. By exploring the space containing all admissible microstructures that are statistically similar to the available data, one can estimate the distribution/envelope of possible properties by employing efficient stochastic simulation methodologies along with robust physics-based deterministic simulators. The focus of this thesis is on the construction of low-dimensional representations of random microstructures and the development of efficient physics-based simulators for polycrystalline materials. By adopting appropriate stochastic methods, such as Monte Carlo and Adaptive Sparse Grid Collocation methods, the variability of microstructure-sensitive properties of polycrystalline materials is investigated. The primary outcomes of this thesis include: (1) Development of data-driven reduced-order representations of microstructure variations to construct the admissible space of random polycrystalline microstructures. (2) Development of accurate and efficient physics-based simulators for the estimation of material properties based on mesoscale microstructures. (3) Investigating property variability of polycrystalline materials using efficient stochastic simulation methods in combination with the above two developments. The uncertainty quantification framework developed in this work integrates information science and materials science, and

  13. Quantum Spontaneous Stochasticity

    NASA Astrophysics Data System (ADS)

    Drivas, Theodore; Eyink, Gregory

    Classical Newtonian dynamics is expected to be deterministic, but recent fluid turbulence theory predicts that a particle advected at high Reynolds numbers by "nearly rough" flows moves nondeterministically. Small stochastic perturbations to the flow velocity or to the initial data lead to persistent randomness, even in the limit where the perturbations vanish! Such "spontaneous stochasticity" has profound consequences for astrophysics, geophysics, and our daily lives. We show that a similar effect occurs with a quantum particle in a "nearly rough" force, in the semi-classical (large-mass) limit, where spreading of the wave-packet is usually expected to be negligible and the dynamics to be deterministic and Newtonian. Instead, there are non-zero probabilities to observe multiple, non-unique solutions of the classical equations. Although the quantum wave-function remains split, rapid phase oscillations prevent any coherent superposition of the branches. Classical spontaneous stochasticity has not yet been seen in controlled laboratory experiments of fluid turbulence, but the corresponding quantum effects may be observable by current techniques. We suggest possible experiments with neutral atomic-molecular systems in repulsive electric dipole potentials.

  14. Noncovalent Interactions by Quantum Monte Carlo.

    PubMed

    Dubecký, Matúš; Mitas, Lubos; Jurečka, Petr

    2016-05-11

    Quantum Monte Carlo (QMC) is a family of stochastic methods for solving quantum many-body problems such as the stationary Schrödinger equation. The review introduces basic notions of electronic structure QMC based on random walks in real space as well as its advances and adaptations to systems with noncovalent interactions. Specific issues such as fixed-node error cancellation, construction of trial wave functions, and efficiency considerations that allow for benchmark-quality QMC energy differences are described in detail. A comprehensive overview of articles covers QMC applications to systems with noncovalent interactions over the last three decades. The current status of QMC with regard to efficiency, applicability, and usability by nonexperts, together with further considerations about QMC developments, limitations, and unsolved challenges, is discussed as well. PMID:27081724

  15. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet energy splitting of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures.
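
    A bare-bones diffusion QMC sketch for a textbook problem (the 1D harmonic oscillator), omitting the importance sampling a production code would use; all parameters are illustrative.

    import numpy as np

    def dmc_harmonic(n_walkers=2000, n_steps=4000, dt=0.01, rng=None):
        """Plain diffusion Monte Carlo for V(x) = x^2/2: walkers diffuse,
        then branch with weight exp(-(V - E_ref) dt).  E_ref is steered
        to keep the population stable and estimates the ground-state
        energy (exact value 0.5 in these units)."""
        rng = rng or np.random.default_rng()
        x = rng.standard_normal(n_walkers)
        e_ref, e_trace = 0.5, []
        for step in range(n_steps):
            x = x + np.sqrt(dt) * rng.standard_normal(x.size)  # diffusion
            w = np.exp(-(0.5 * x**2 - e_ref) * dt)             # branching weight
            copies = (w + rng.random(x.size)).astype(int)      # stochastic rounding
            x = np.repeat(x, copies)
            # Simple population control: nudge E_ref toward the value that
            # keeps the walker count near its target.
            e_ref += 0.1 * np.log(n_walkers / max(x.size, 1))
            if step > n_steps // 2:
                e_trace.append(e_ref)
        return np.mean(e_trace)

    print(dmc_harmonic())   # ~0.5 up to time-step and statistical error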

  16. Stochastic lag time in nucleated linear self-assembly.

    PubMed

    Tiwari, Nitin S; van der Schoot, Paul

    2016-06-21

    Protein aggregation is of great importance in biology, e.g., in amyloid fibrillation. The aggregation processes that occur at the cellular scale must be highly stochastic in nature because of the statistical number fluctuations that arise on account of the small system size at the cellular scale. We study the nucleated reversible self-assembly of monomeric building blocks into polymer-like aggregates using the method of kinetic Monte Carlo. Kinetic Monte Carlo, being inherently stochastic, allows us to study the impact of fluctuations on the polymerization reactions. One of the most important characteristic features in this kind of problem is the existence of a lag phase before self-assembly takes off, which is what we focus attention on. We study the associated lag time as a function of system size and kinetic pathway. We find that the leading order stochastic contribution to the lag time before polymerization commences is inversely proportional to the system volume for large-enough system size for all nine reaction pathways tested. Finite-size corrections to this do depend on the kinetic pathway. PMID:27334194
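
    The kinetic Monte Carlo engine here is the standard Gillespie algorithm; the sketch below applies it to a deliberately crude irreversible nucleation-elongation pathway (one of many possible pathways, with hypothetical rate constants) and records a lag time.

    import numpy as np

    def gillespie_polymerization(m0=500, n_crit=3, k_nuc=1e-10, k_el=1e-2,
                                 t_max=1e5, rng=None):
        """Gillespie simulation of nucleated, irreversible linear
        assembly in a well-mixed volume: nucleation consumes n_crit
        monomers at a slow rate, elongation adds one monomer per fibril.
        Returns the lag time, taken here as the time at which 10% of the
        monomers have polymerized (definitions vary)."""
        rng = rng or np.random.default_rng()
        m, fibrils, t = m0, 0, 0.0
        while t < t_max:
            a_nuc = k_nuc * m ** n_crit        # nucleation propensity
            a_el = k_el * m * fibrils          # elongation propensity
            a_tot = a_nuc + a_el
            if a_tot == 0.0:
                break
            t += rng.exponential(1.0 / a_tot)  # time to next event
            if rng.random() < a_nuc / a_tot:
                m -= n_crit; fibrils += 1      # nucleation event
            else:
                m -= 1                         # elongation event
            if m <= 0.9 * m0:
                return t
        return np.inf

    # Lag-time fluctuations across realizations shrink with system size;
    # rerun with larger m0 (and rescaled rates) to see the 1/volume trend.
    lags = [gillespie_polymerization() for _ in range(50)]
    print(np.mean(lags), np.std(lags))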

  17. Stochastic lag time in nucleated linear self-assembly

    NASA Astrophysics Data System (ADS)

    Tiwari, Nitin S.; van der Schoot, Paul

    2016-06-01

    Protein aggregation is of great importance in biology, e.g., in amyloid fibrillation. The aggregation processes that occur at the cellular scale must be highly stochastic in nature because of the statistical number fluctuations that arise on account of the small system size at the cellular scale. We study the nucleated reversible self-assembly of monomeric building blocks into polymer-like aggregates using the method of kinetic Monte Carlo. Kinetic Monte Carlo, being inherently stochastic, allows us to study the impact of fluctuations on the polymerization reactions. One of the most important characteristic features in this kind of problem is the existence of a lag phase before self-assembly takes off, which is what we focus attention on. We study the associated lag time as a function of system size and kinetic pathway. We find that the leading order stochastic contribution to the lag time before polymerization commences is inversely proportional to the system volume for large-enough system size for all nine reaction pathways tested. Finite-size corrections to this do depend on the kinetic pathway.

  18. Linear-scaling and parallelisable algorithms for stochastic quantum chemistry

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Smart, Simon D.; Alavi, Ali

    2014-07-01

    For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.

  19. A Stochastic Multi-Attribute Assessment of Energy Options for Fairbanks, Alaska

    NASA Astrophysics Data System (ADS)

    Read, L.; Madani, K.; Mokhtari, S.; Hanks, C. L.; Sheets, B.

    2012-12-01

    Many competing projects have been proposed to address Interior Alaska's high cost of energy—both for electricity production and for heating. Public and private stakeholders are considering the costs associated with these competing projects which vary in fuel source, subsidy requirements, proximity, and other factors. As a result, the current projects under consideration involve a complex cost structure of potential subsidies and reliance on present and future market prices, introducing a significant amount of uncertainty associated with each selection. Multi-criteria multi-decision making (MCMDM) problems of this nature can benefit from game theory and systems engineering methods, which account for behavior and preferences of stakeholders in the analysis to produce feasible and relevant solutions. This work uses a stochastic MCMDM framework to evaluate the trade-offs of each proposed project based on a complete cost analysis, environmental impact, and long-term sustainability. Uncertainty in the model is quantified via a Monte Carlo analysis, which helps characterize the sensitivity and risk associated with each project. Based on performance measures and criteria outlined by the stakeholders, a decision matrix will inform policy on selecting a project that is both efficient and preferred by the constituents.

  20. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G. Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
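
    The retrodictive algorithm itself runs the master equation backwards; as a simpler stand-in, the sketch below illustrates the same inference task with plain rejection sampling (approximate Bayesian computation) over the ordinary predictive SSA, for a hypothetical two-state mutation model with assumed rates.

        import numpy as np

        rng = np.random.default_rng(3)

        # Two-state mutation model 0 <-> 1 with asymmetric rates (assumed numbers).
        k01, k10, T = 0.3, 1.0, 2.0

        def forward(state, t_end):
            # Predictive SSA: evolve one site forward in time.
            t = 0.0
            while True:
                rate = k01 if state == 0 else k10
                t += rng.exponential(1.0 / rate)
                if t > t_end:
                    return state
                state = 1 - state

        # Rejection inference: sample initial states from a uniform prior, keep
        # those whose simulated final state matches the observed final state.
        observed_final, kept = 1, []
        for _ in range(20000):
            x0 = rng.integers(2)
            if forward(x0, T) == observed_final:
                kept.append(x0)

        kept = np.array(kept)
        print("P(initial=0 | final=1) ~", np.mean(kept == 0))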

  1. Multiple-time-stepping generalized hybrid Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improve the stability of MTS and allow for larger step sizes in the simulation of complex systems.
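
    For orientation, the sketch below implements plain hybrid Monte Carlo, the baseline these methods improve on, for an assumed 1D double-well potential: a leapfrog trajectory followed by a Metropolis test on the energy error. GSHMC's shadow Hamiltonians, generalized momentum updates, and MTS force splitting are deliberately not included.

        import numpy as np

        rng = np.random.default_rng(15)

        # Plain hybrid Monte Carlo on a 1D double well U(x) = (x^2 - 1)^2.
        U = lambda x: (x**2 - 1.0) ** 2
        gradU = lambda x: 4.0 * x * (x**2 - 1.0)

        def hmc_step(x, dt=0.1, L=20):
            p = rng.normal()                       # fresh Gaussian momentum
            x0, p0 = x, p
            p -= 0.5 * dt * gradU(x)               # leapfrog integration
            for _ in range(L - 1):
                x += dt * p
                p -= dt * gradU(x)
            x += dt * p
            p -= 0.5 * dt * gradU(x)
            dH = (U(x) + 0.5 * p**2) - (U(x0) + 0.5 * p0**2)
            return x if rng.random() < np.exp(-dH) else x0   # Metropolis test

        samples, x = [], 1.0
        for _ in range(5000):
            x = hmc_step(x)
            samples.append(x)
        print("sample mean, variance:", np.mean(samples), np.var(samples))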

  2. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improve the stability of MTS and allow for larger step sizes in the simulation of complex systems.

  3. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    SciTech Connect

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages and problems get easily out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.
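
    A minimal sketch of the importance-sampling idea in isolation: estimating a rare, costly tail expectation by sampling from a shifted distribution and reweighting by the density ratio. The toy cost function and distributions are assumptions for illustration; the Benders-decomposition machinery of the paper is not reproduced.

        import numpy as np

        rng = np.random.default_rng(5)
        N = 100000

        # Toy "future cost": large only when demand X (standard normal) is extreme.
        f = lambda x: np.maximum(x - 3.0, 0.0)

        # Crude Monte Carlo.
        x = rng.normal(size=N)
        crude = f(x)

        # Importance sampling: draw from N(3, 1) and reweight by the density ratio.
        y = rng.normal(3.0, 1.0, size=N)
        w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - 3.0)**2)
        shifted = f(y) * w

        for name, est in [("crude", crude), ("importance", shifted)]:
            print(f"{name:10s} mean={est.mean():.6f}  std.err={est.std(ddof=1)/np.sqrt(N):.2e}")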

  4. Hybrid stochastic simulations of intracellular reaction-diffusion systems

    PubMed Central

    Kalantzis, Georgios

    2009-01-01

    With the observation that stochasticity is important in biological systems, chemical kinetics have begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous time models are computationally efficient but they fail to capture any variability in the molecular species. In this study, a novel hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed a dynamic partitioning strategy using fractional propensities. In this way, high-frequency processes are simulated mostly with deterministic rate-based equations, and low-frequency processes mostly with the exact stochastic algorithm of Gillespie. This preserves the stochastic behavior of cellular pathways while allowing the approach to be applied to large populations of molecules. In this article we describe this hybrid algorithmic approach, and we demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors. PMID:19414282

  5. Multi-scenario modelling of uncertainty in stochastic chemical systems

    SciTech Connect

    Evans, R. David; Ricardez-Sandoval, Luis A.

    2014-09-15

    Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small-scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems as they are stochastic in nature and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state comprised of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two-gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution and the system under investigation. Highlights: a method to model uncertainty in stochastic systems was developed; the method is based on the Chemical Master Equation; uncertainty in an isomerization reaction and a gene regulation network was modelled; effects were significant and dependent on the uncertain input and reaction system; the model was computationally more efficient than kinetic Monte Carlo.

  6. Stochastic averaging of energy envelope of Preisach hysteretic systems

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ying, Z. G.; Zhu, W. Q.

    2009-04-01

    A new stochastic averaging technique for analyzing the response of a single-degree-of-freedom Preisach hysteretic system with nonlocal memory under stationary Gaussian stochastic excitation is proposed. An equivalent nonhysteretic nonlinear system with amplitude-envelope-dependent damping and stiffness is first obtained from the given system by using the generalized harmonic balance technique. The relationship between the amplitude envelope and the energy envelope is then established, and the equivalent damping and stiffness coefficients are expressed as functions of the energy envelope. The available range of the yielding force of the system is extended and the strong nonlinear stiffness of the system is incorporated so as to improve the response prediction. Finally, an averaged Itô stochastic differential equation for the energy envelope of the system as a one-dimensional diffusion process is derived by using the stochastic averaging method of energy envelope, and the Fokker-Planck-Kolmogorov equation associated with the averaged Itô equation is solved to obtain stationary probability densities of the energy envelope and amplitude envelope. The approximate solutions are validated by Monte Carlo simulation.

  7. Stochastic optimization of multireservoir systems via reinforcement learning

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Hee; Labadie, John W.

    2007-11-01

    Although several variants of stochastic dynamic programming have been applied to optimal operation of multireservoir systems, they have been plagued by a high-dimensional state space and the inability to accurately incorporate the stochastic environment as characterized by temporally and spatially correlated hydrologic inflows. Reinforcement learning has emerged as an effective approach to solving sequential decision problems by combining concepts from artificial intelligence, cognitive science, and operations research. A reinforcement learning system has a mathematical foundation similar to dynamic programming and Markov decision processes, with the goal of maximizing the long-term reward or returns as conditioned on the state of the system environment and the immediate reward obtained from operational decisions. Reinforcement learning can include Monte Carlo simulation where transition probabilities and rewards are not explicitly known a priori. The Q-Learning method in reinforcement learning is demonstrated on the two-reservoir Geum River system, South Korea, and is shown to outperform implicit stochastic dynamic programming and sampling stochastic dynamic programming methods.
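
    A minimal tabular Q-learning sketch on a hypothetical one-reservoir toy (discrete storage levels, random inflow, a fixed demand), illustrating the update rule used in such studies; the two-reservoir Geum River application is far richer than this.

        import numpy as np

        rng = np.random.default_rng(2)

        S, A = 5, 3                      # storage levels 0..4, release actions 0..2
        Q = np.zeros((S, A))
        alpha, gamma, eps = 0.1, 0.95, 0.1
        demand = 1

        s = 2
        for step in range(100000):
            # Epsilon-greedy action selection.
            a = rng.integers(A) if rng.random() < eps else int(Q[s].argmax())
            release = min(a, s)                       # cannot release more than stored
            inflow = rng.integers(2)                  # random 0/1 inflow (toy hydrology)
            s2 = min(s - release + inflow, S - 1)     # spill above capacity is lost
            r = -abs(release - demand)                # penalize missing the demand
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2

        print("greedy release per storage level:", Q.argmax(axis=1))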

  8. Stochastic transitions in a bistable reaction system on the membrane

    PubMed Central

    Kochańczyk, Marek; Jaruszewicz, Joanna; Lipniacki, Tomasz

    2013-01-01

    Transitions between steady states of a multi-stable stochastic system in the perfectly mixed chemical reactor are possible only because of stochastic switching. In realistic cellular conditions, where diffusion is limited, transitions between steady states can also follow from the propagation of travelling waves. Here, we study the interplay between the two modes of transition for a prototype bistable system of kinase–phosphatase interactions on the plasma membrane. Within microscopic kinetic Monte Carlo simulations on the hexagonal lattice, we observed that for finite diffusion the behaviour of the spatially extended system differs qualitatively from the behaviour of the same system in the well-mixed regime. Even when a small isolated subcompartment remains mostly inactive, the chemical travelling wave may propagate, leading to the activation of a larger compartment. The activating wave can be induced after a small subdomain is activated as a result of a stochastic fluctuation. Such a spontaneous onset of activity is radically more probable in subdomains characterized by slower diffusion. Our results show that a local immobilization of substrates can lead to the global activation of membrane proteins by the mechanism that involves stochastic fluctuations followed by the propagation of a semi-deterministic travelling wave. PMID:23635492

  9. Stochastic calculus in physics

    SciTech Connect

    Fox, R.F.

    1987-03-01

    The relationship of Ito-Stratonovich stochastic calculus to studies of weakly colored noise is explained. A functional calculus approach is used to obtain an effective Fokker-Planck equation for the weakly colored noise regime. In a smooth limit, this representation produces the Stratonovich version of the Ito-Stratonovich calculus for white noise. It also provides an approach to steady state behavior for strongly colored noise. Numerical simulation algorithms are explored, and a novel suggestion is made for efficient and accurate simulation of white noise equations.
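
    As a concrete companion to the simulation discussion, the sketch below integrates a white-noise Langevin equation with the Euler-Maruyama scheme for an assumed bistable drift. For the additive noise used here the Ito and Stratonovich interpretations coincide, which is the distinction at issue in the abstract.

        import numpy as np

        rng = np.random.default_rng(4)

        sigma, dt, nsteps = 0.5, 1e-3, 100000
        x = np.empty(nsteps)
        x[0] = 1.0
        dW = rng.normal(0.0, np.sqrt(dt), nsteps - 1)   # white-noise increments
        for i in range(nsteps - 1):
            # Euler-Maruyama (Ito) step for dx = (x - x^3) dt + sigma dW.
            x[i + 1] = x[i] + (x[i] - x[i]**3) * dt + sigma * dW[i]

        # With additive noise the Ito and Stratonovich readings coincide; for a
        # multiplicative term b(x) dW they differ by the spurious drift b*b'/2.
        print("time-average and variance:", x.mean(), x.var())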

  10. Stochastic ontogenetic growth model

    NASA Astrophysics Data System (ADS)

    West, B. J.; West, D.

    2012-02-01

    An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second law of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass over the steady-state probability density for the total body mass (TBM). This is the first derivation of the interspecies metabolic allometric relation from a dynamical model and the asymptotic steady-state distribution of the TBM is fit to data and shown to be inverse power law.

  11. Stochastic thermodynamics of resetting

    NASA Astrophysics Data System (ADS)

    Fuchs, Jaco; Goldt, Sebastian; Seifert, Udo

    2016-03-01

    Stochastic dynamics with random resetting leads to a non-equilibrium steady state. Here, we consider the thermodynamics of resetting by deriving the first and second law for resetting processes far from equilibrium. We identify the contributions to the entropy production of the system which arise due to resetting and show that they correspond to the rate with which information is either erased or created. Using Landauer's principle, we derive a bound on the amount of work that is required to maintain a resetting process. We discuss different regimes of resetting, including a Maxwell demon scenario where heat is extracted from a bath at constant temperature.

  12. Chemical application of diffusion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1983-10-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.
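
    A minimal diffusion Monte Carlo sketch for the 1D harmonic oscillator (a standard textbook stand-in, far simpler than CH2): walkers diffuse, branch against the local potential, and a population-controlled reference energy tracks the ground-state energy. All parameters are assumptions chosen for illustration, and no importance sampling or timestep extrapolation is attempted.

        import numpy as np

        rng = np.random.default_rng(6)

        V = lambda x: 0.5 * x**2              # harmonic potential, exact E0 = 0.5
        dt, target = 0.01, 2000
        walkers = rng.normal(size=target)
        E_ref, trace = 0.0, []

        for step in range(4000):
            # Diffusion step: free-particle Gaussian spread over imaginary time dt.
            walkers = walkers + rng.normal(0.0, np.sqrt(dt), walkers.size)
            # Branching step: each walker makes int(w + u) copies of itself.
            w = np.exp(-dt * (V(walkers) - E_ref))
            walkers = np.repeat(walkers, (w + rng.random(walkers.size)).astype(int))
            # Feedback keeps the population near `target`; for the harmonic
            # oscillator the walker-averaged V happens to equal E0, so this
            # simple estimator suffices here.
            E_ref = V(walkers).mean() + 0.1 * np.log(target / walkers.size)
            if step > 2000:
                trace.append(E_ref)

        print("E0 estimate:", np.mean(trace), "(exact 0.5)")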

  13. A Monte Carlo multimodal inversion of surface waves

    NASA Astrophysics Data System (ADS)

    Maraschini, Margherita; Foti, Sebastiano

    2010-09-01

    The analysis of surface wave propagation is often used to estimate the S-wave velocity profile at a site. In this paper, we propose a stochastic approach for the inversion of surface waves, which allows apparent dispersion curves to be inverted. The inversion method is based on the integrated use of two misfit functions: one based on the determinant of the Haskell-Thomson matrix, and a classical Euclidean distance between the dispersion curves. The former allows all the modes of the dispersion curve to be taken into account with a very limited computational cost, because it avoids the explicit calculation of the dispersion curve for each tentative model. It is used in a Monte Carlo inversion with a large population of profiles. In a subsequent step, the selection of representative models is obtained by applying a Fisher test based on the Euclidean distance between the experimental and the synthetic dispersion curves to the best models of the Monte Carlo inversion. This procedure allows the set of the selected models to be identified on the basis of the data quality. It also mitigates the influence of local minima that can affect the Monte Carlo results. The effectiveness of the procedure is shown for synthetic and real experimental data sets, where the advantages of the two-stage procedure are highlighted. In particular, the determinant misfit allows the computation of large populations in stochastic algorithms with a limited computational cost.

  14. Stochastic Optimal Scheduling of Residential Appliances with Renewable Energy Sources

    SciTech Connect

    Wu, Hongyu; Pratt, Annabelle; Chakraborty, Sudipta

    2015-07-03

    This paper proposes a stochastic, multi-objective optimization model within a Model Predictive Control (MPC) framework, to determine the optimal operational schedules of residential appliances operating in the presence of renewable energy source (RES). The objective function minimizes the weighted sum of discomfort, energy cost, total and peak electricity consumption, and carbon footprint. A heuristic method is developed for combining different objective components. The proposed stochastic model utilizes Monte Carlo simulation (MCS) for representing uncertainties in electricity price, outdoor temperature, RES generation, water usage, and non-controllable loads. The proposed model is solved using a mixed integer linear programming (MILP) solver and numerical results show the validity of the model. Case studies show the benefit of using the proposed optimization model.

  15. Stochastic Effects in the Bistable Homogeneous Semenov Model

    NASA Astrophysics Data System (ADS)

    Nowakowski, B.; Lemarchand, A.; Nowakowska, E.

    2002-04-01

    We present the mesoscopic description of stochastic effects in a thermochemical bistable diluted gas system subject to the Newtonian heat exchange with a thermostat. We apply the master equation including a transition rate for the Newtonian thermal transfer process, derived on the basis of kinetic theory. As temperature is a continuous variable, this master equation has a complicated integro-differential form. We perform Monte Carlo simulations based on this equation to study the stochastic effects in a homogeneous Semenov model (which neglects reactant consumption) in the bistable regime. The mean first passage time is computed as a function of the number of particles in the system and the distance from the bifurcation associated with the emergence of bistability. An approximate analytical prediction is deduced from the Fokker-Planck equation associated with the master equation. The results of the master equation approach are successfully compared with those of direct simulations of the microscopic particle dynamics.

  16. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
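
    The sketch below shows one of the simplest variance reduction techniques of this family, antithetic variates, on a toy surrogate for a corrector-problem output; the monotone map of the random medium parameter is a hypothetical stand-in for an actual homogenization solve.

        import numpy as np

        rng = np.random.default_rng(8)
        N = 50000

        # Surrogate for a corrector-problem output: a smooth monotone functional
        # of the random medium parameter U ~ Uniform(0, 1).
        f = lambda u: 1.0 / (1.0 + u)         # e.g. a harmonic-mean-like coefficient

        u = rng.random(N)
        plain = f(u)                          # standard Monte Carlo
        anti = 0.5 * (f(u) + f(1.0 - u))      # antithetic pairs reuse each draw

        # Monotonicity of f makes the pair negatively correlated, cutting variance.
        print("plain      :", plain.mean(), plain.std(ddof=1) / np.sqrt(N))
        print("antithetic :", anti.mean(), anti.std(ddof=1) / np.sqrt(N))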

  17. Semiparametric Stochastic Modeling of the Rate Function in Longitudinal Studies

    PubMed Central

    Zhu, Bin; Taylor, Jeremy M.G.; Song, Peter X.-K.

    2011-01-01

    In longitudinal biomedical studies, there is often interest in the rate functions, which describe the functional rates of change of biomarker profiles. This paper proposes a semiparametric approach to model these functions as the realizations of stochastic processes defined by stochastic differential equations. These processes are dependent on the covariates of interest and vary around a specified parametric function. An efficient Markov chain Monte Carlo algorithm is developed for inference. The proposed method is compared with several existing methods in terms of goodness-of-fit and more importantly the ability to forecast future functional data in a simulation study. The proposed methodology is applied to prostate-specific antigen profiles for illustration. Supplementary materials for this paper are available online. PMID:22423170

  18. Path sampling with stochastic dynamics: Some new algorithms

    SciTech Connect

    Stoltz, Gabriel . E-mail: stoltz@cermics.enpc.fr

    2007-07-01

    We propose here some new sampling algorithms for path sampling in the case when stochastic dynamics are used. In particular, we present a new proposal function for equilibrium sampling of paths with a Monte-Carlo dynamics (the so-called 'Brownian tube' proposal). This proposal is based on the continuity of the dynamics with respect to the random forcing, and generalizes all previous approaches when stochastic dynamics are used. The efficiency of this proposal is demonstrated using some measure of decorrelation in path space. We also discuss a switching strategy that allows ensembles of paths to be transformed at a finite rate while remaining at equilibrium, in contrast with the usual Jarzynski-like switching. This switching is very useful for sampling constrained paths starting from unconstrained paths, or for performing simulated annealing in a rigorous way.

  19. Stochastic analysis of complex reaction networks using binomial moment equations.

    PubMed

    Barzel, Baruch; Biham, Ofer

    2012-09-01

    The stochastic analysis of complex reaction networks is a difficult problem because the number of microscopic states in such systems increases exponentially with the number of reactive species. Direct integration of the master equation is thus infeasible and is most often replaced by Monte Carlo simulations. While Monte Carlo simulations are a highly effective tool, equation-based formulations are more amenable to analytical treatment and may provide deeper insight into the dynamics of the network. Here, we present a highly efficient equation-based method for the analysis of stochastic reaction networks. The method is based on the recently introduced binomial moment equations [Barzel and Biham, Phys. Rev. Lett. 106, 150602 (2011)]. The binomial moments are linear combinations of the ordinary moments of the probability distribution function of the population sizes of the interacting species. They capture the essential combinatorics of the reaction processes reflecting their stoichiometric structure. This leads to a simple and transparent form of the equations, and allows a highly efficient and surprisingly simple truncation scheme. Unlike ordinary moment equations, in which the inclusion of high order moments is prohibitively complicated, the binomial moment equations can be easily constructed up to any desired order. The result is a set of equations that enables the stochastic analysis of complex reaction networks under a broad range of conditions. The number of equations is dramatically reduced from the exponential proliferation of the master equation to a polynomial (and often quadratic) dependence on the number of reactive species in the binomial moment equations. The aim of this paper is twofold: first, to present a complete derivation of the binomial moment equations; and second, to demonstrate their applicability for a representative set of example networks in which stochastic effects play an important role. PMID:23030885

  20. Stochastic power flow modeling

    SciTech Connect

    Not Available

    1980-06-01

    The stochastic nature of customer demand and equipment failure on large interconnected electric power networks has produced a keen interest in the accurate modeling and analysis of the effects of probabilistic behavior on steady state power system operation. The principal avenue of approach has been to obtain a solution to the steady state network flow equations which adhere both to Kirchhoff's Laws and probabilistic laws, using either combinatorial or functional approximation techniques. The present need is clearly to develop sound techniques for producing meaningful input data. This research has addressed this end and serves to bridge the gap between electric demand modeling, equipment failure analysis, etc., and the area of algorithm development. Therefore, the scope of this work lies squarely in developing an efficient means of producing sensible input information in the form of probability distributions for the many types of solution algorithms that have been developed. Two major areas of development are described in detail: a decomposition of stochastic processes which gives hope of stationarity, ergodicity, and perhaps even normality; and a powerful surrogate probability approach using proportions of time which allows the calculation of joint events from one-dimensional probability spaces.

  1. Stochastic blind motion deblurring.

    PubMed

    Xiao, Lei; Gregson, James; Heide, Felix; Heidrich, Wolfgang

    2015-10-01

    Blind motion deblurring from a single image is a highly under-constrained problem with many degenerate solutions. A good approximation of the intrinsic image can, therefore, only be obtained with the help of prior information in the form of (often nonconvex) regularization terms for both the intrinsic image and the kernel. While the best choice of image priors is still a topic of ongoing investigation, this research is made more complicated by the fact that historically each new prior requires the development of a custom optimization method. In this paper, we develop a stochastic optimization method for blind deconvolution. Since this stochastic solver does not require the explicit computation of the gradient of the objective function and uses only efficient local evaluation of the objective, new priors can be implemented and tested very quickly. We demonstrate that this framework, in combination with different image priors, produces results with Peak Signal-to-Noise Ratio (PSNR) values that match or exceed the results obtained by much more complex state-of-the-art blind motion deblurring algorithms. PMID:25974941

  2. Stochastic Quantum Gas Dynamics

    NASA Astrophysics Data System (ADS)

    Proukakis, Nick P.; Cockburn, Stuart P.

    2010-03-01

    We study the dynamics of weakly-interacting finite temperature Bose gases via the Stochastic Gross-Pitaevskii equation (SGPE). As a first step, we demonstrate [jointly with A. Negretti (Ulm, Germany) and C. Henkel (Potsdam, Germany)] that the SGPE provides a significantly better method for generating an equilibrium state than the number-conserving Bogoliubov method (except for low temperatures and small atom numbers). We then study [jointly with H. Nistazakis and D.J. Frantzeskakis (University of Athens, Greece), P.G.Kevrekidis (University of Massachusetts) and T.P. Horikis (University of Ioannina, Greece)] the dynamics of dark solitons in elongated finite temperature condensates. We demonstrate numerical shot-to-shot variations in soliton trajectories (S.P. Cockburn et al., arXiv:0909.1660.), finding individual long-lived trajectories as in experiments. In our simulations, these variations arise from fluctuations in the phase and density of the underlying medium. We provide a detailed statistical analysis, proposing regimes for the controlled experimental demonstration of this effect; we also discuss the extent to which simpler models can be used to mimic the features of ensemble-averaged stochastic trajectories.

  3. Stochastic averaging of quasi-partially integrable Hamiltonian systems under combined Gaussian and Poisson white noise excitations

    NASA Astrophysics Data System (ADS)

    Jia, Wantao; Zhu, Weiqiu

    2014-03-01

    A stochastic averaging method for predicting the response of quasi-partially integrable and non-resonant Hamiltonian systems to combined Gaussian and Poisson white noise excitations is proposed. For the case with r (1 ≤ r < n) independent first integrals of motion, an r-dimensional averaged generalized Fokker-Planck-Kolmogorov (GFPK) equation is derived from the stochastic integro-differential equations (SIDEs) of the original quasi-partially integrable and non-resonant Hamiltonian systems by using the stochastic jump-diffusion chain rule and the stochastic averaging theorem. An example is given to illustrate the applications of the proposed stochastic averaging method, and a combination of the finite difference method and the successive over-relaxation method is used to solve the reduced GFPK equation to obtain the stationary probability density of the system. The results are well verified by a Monte Carlo simulation.

  4. The influence of Stochastic perturbation of Geotechnical media On Electromagnetic tomography

    NASA Astrophysics Data System (ADS)

    Song, Lei; Yang, Weihao; Huangsonglei, Jiahui; Li, HaiPeng

    2015-04-01

    Electromagnetic tomography (CT) is commonly utilized in civil engineering to detect structural defects or geological anomalies. CT is generally recognized as a high-precision geophysical method, and its accuracy is expected to be several centimeters or even several millimeters. High-frequency antennas with short wavelengths are therefore commonly utilized in civil engineering. In geotechnical media, stochastic perturbations of the EM parameters inevitably exist at geological, structural, and local scales. In those cases, the geometric dimensions of the target body, the EM wavelength, and the expected accuracy might be of the same order. When a high-frequency EM wave propagates in a stochastic geotechnical medium, the GPR signal is reflected not only from the target bodies but also from the stochastic perturbations of the background medium. To detect karst caves in dissolution-fractured rock, one needs to assess the influence of the stochastically distributed dissolution holes and fractures; to detect a void in a concrete structure, one must master the influence of the stochastically distributed stones. In this paper, on the basis of discrete realizations of stochastic media, the authors quantitatively evaluate the influence of the stochastic perturbation of geotechnical media via the Radon/inverse Radon transform through fully combined Monte Carlo numerical simulation. The stochastic noise is found to be related to the transfer angle, perturbation strength, angle interval, autocorrelation length, etc. A quantitative formula for the accuracy of electromagnetic tomography is also established, which can help in estimating the precision of GPR tomography in stochastically perturbed geotechnical media. Key words: Stochastic Geotechnical Media; Electromagnetic Tomography; Radon/Inverse Radon Transform.

  5. Stochastic series expansion simulation of the t -V model

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Ye-Hua; Troyer, Matthias

    2016-04-01

    We present an algorithm for the efficient simulation of the half-filled spinless t -V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t -V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.

  6. A simple chaotic neuron model: stochastic behavior of neural networks.

    PubMed

    Aydiner, Ekrem; Vural, Adil M; Ozcelik, Bekir; Kiymac, Kerim; Tan, Uner

    2003-05-01

    We have briefly reviewed the occurrence of the post-synaptic potentials between neurons, the relationship between EEG and neuron dynamics, as well as methods of signal analysis. We propose a simple stochastic model representing electrical activity of neuronal systems. The model is constructed using the Monte Carlo simulation technique. The results yielded EEG-like signals with their phase portraits in three-dimensional space. The Lyapunov exponent was positive, indicating chaotic behavior. The correlation of the EEG-like signals was 0.92, smaller than those reported by others. It was concluded that this neuron model may provide valuable clues about the dynamic behavior of neural systems. PMID:12745622

  7. Automated variance reduction for Monte Carlo shielding analyses with MCNP

    NASA Astrophysics Data System (ADS)

    Radulescu, Georgeta

    Variance reduction techniques are employed in Monte Carlo analyses to increase the number of particles in the phase-space region of interest and thereby lower the variance of statistical estimation. Variance reduction parameters are required to perform Monte Carlo calculations. It is well known that adjoint solutions, even approximate ones, are excellent biasing functions that can significantly increase the efficiency of a Monte Carlo calculation. In this study, an automated method of generating Monte Carlo variance reduction parameters, and of implementing the source energy biasing and the weight window technique in MCNP shielding calculations has been developed. The method is based on the approach used in the SAS4 module of the SCALE code system, which derives the biasing parameters from an adjoint one-dimensional Discrete Ordinates calculation. Unlike SAS4 that determines the radial and axial dose rates of a spent fuel cask in separate calculations, the present method provides energy and spatial biasing parameters for the entire system that optimize the simulation of particle transport towards all external surfaces of a spent fuel cask. The energy and spatial biasing parameters are synthesized from the adjoint fluxes of three one-dimensional Discrete Ordinates adjoint calculations. Additionally, the present method accommodates multiple source regions, such as the photon sources in light-water reactor spent nuclear fuel assemblies, in one calculation. With this automated method, detailed and accurate dose rate maps for photons, neutrons, and secondary photons outside spent fuel casks or other containers can be efficiently determined with minimal efforts.

  8. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
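
    A minimal sketch of a variance-based sensitivity computation for a stochastic simulator, using the pick-freeze estimator of first-order Sobol indices on a hypothetical two-factor toy model with intrinsic noise; the paper's Poisson-process reformulation of the reaction channels is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(9)
        N = 200000

        # Toy stochastic simulator: two "channels" drive the output, plus intrinsic
        # noise; the true first-order variance shares of Z1:Z2 are 1:4 (of 5.25 total).
        def sim(z1, z2, rng):
            return z1 + 2.0 * z2 + 0.5 * rng.normal(size=z1.size)

        z1, z2 = rng.normal(size=N), rng.normal(size=N)
        y = sim(z1, z2, rng)

        # Pick-freeze: regenerate everything except the factor under study.
        y1 = sim(z1, rng.normal(size=N), rng)     # Z1 frozen, rest refreshed
        y2 = sim(rng.normal(size=N), z2, rng)     # Z2 frozen, rest refreshed

        var = y.var(ddof=1)
        for name, yf in (("Z1", y1), ("Z2", y2)):
            S = np.cov(y, yf)[0, 1] / var         # Cov(Y, Y') = Var(E[Y|Z])
            print(f"first-order index {name}: {S:.3f}")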

  9. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    DOE PAGESBeta

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  10. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    SciTech Connect

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  11. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  12. ISDEP: Integrator of stochastic differential equations for plasmas

    NASA Astrophysics Data System (ADS)

    Velasco, J. L.; Bustos, A.; Castejón, F.; Fernández, L. A.; Martin-Mayor, V.; Tarancón, A.

    2012-09-01

    In this paper we present a general description of the ISDEP code (Integrator of Stochastic Differential Equations for Plasmas) and a brief overview of its physical results and applications so far. ISDEP is a Monte Carlo code that calculates the distribution function of a minority population of ions in a magnetized plasma. It solves the ion equations of motion taking into account the complex 3D structure of fusion devices, the confining electromagnetic field and collisions with other plasma species. The Monte Carlo method used is based on the equivalence between the Fokker-Planck and Langevin equations. This allows ISDEP to run in distributed computing platforms without communication between nodes with almost linear scaling. This paper is intended as a general description of and reference for ISDEP.

  13. Parallelized Stochastic Cutoff Method for Long-Range Interacting Systems

    NASA Astrophysics Data System (ADS)

    Endo, Eishin; Toga, Yuta; Sasaki, Munetaka

    2015-07-01

    We present a method of parallelizing the stochastic cutoff (SCO) method, which is a Monte-Carlo method for long-range interacting systems. After interactions are eliminated by the SCO method, we subdivide a lattice into noninteracting interpenetrating sublattices. This subdivision enables us to parallelize the Monte-Carlo calculation in the SCO method. Such subdivision is found by numerically solving the vertex coloring of a graph created by the SCO method. We use an algorithm proposed by Kuhn and Wattenhofer to solve the vertex coloring by parallel computation. This method was applied to a two-dimensional magnetic dipolar system on an L × L square lattice to examine its parallelization efficiency. The result showed that, in the case of L = 2304, the speed of computation increased about 102 times by parallel computation with 288 processors.

  14. Monte Carlo Event Generators

    NASA Astrophysics Data System (ADS)

    Dytman, Steven

    2011-10-01

    Every neutrino experiment requires a Monte Carlo event generator for various purposes. Historically, each series of experiments developed its own code, tuned to its needs. Modern experiments would benefit from a universal code (e.g. PYTHIA) which would allow more direct comparison between experiments. GENIE attempts to be that code. This paper compares the most commonly used codes and provides some details of GENIE.

  15. A Monte Carlo algorithm for degenerate plasmas

    SciTech Connect

    Turrell, A.E. Sherlock, M.; Rose, S.J.

    2013-09-15

    A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.

  16. An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis

    SciTech Connect

    William R. Martin; John C. Lee

    2009-12-30

    Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.

  17. Biochemical simulations: stochastic, approximate stochastic and hybrid approaches

    PubMed Central

    2009-01-01

    Computer simulations have become an invaluable tool to study the sometimes counterintuitive temporal dynamics of (bio-)chemical systems. In particular, stochastic simulation methods have attracted increasing interest recently. In contrast to the well-known deterministic approach based on ordinary differential equations, they can capture effects that occur due to the underlying discreteness of the systems and random fluctuations in molecular numbers. Numerous stochastic, approximate stochastic and hybrid simulation methods have been proposed in the literature. In this article, they are systematically reviewed in order to guide the researcher and help her find the appropriate method for a specific problem. PMID:19151097

  18. Stochastic techno-economic evaluation of cellulosic biofuel pathways.

    PubMed

    Zhao, Xin; Brown, Tristan R; Tyner, Wallace E

    2015-12-01

    This study evaluates the economic feasibility and stochastic dominance rank of eight cellulosic biofuel production pathways (including gasification, pyrolysis, liquefaction, and fermentation) under technological and economic uncertainty. A techno-economic assessment-based financial analysis is employed to derive net present values and breakeven prices for each pathway. Uncertainty is investigated and incorporated into fuel prices and techno-economic variables: capital cost, conversion technology yield, hydrogen cost, natural gas price and feedstock cost using @Risk, a Palisade Corporation software package. The results indicate that none of the eight pathways would be profitable at expected values under projected energy prices. Fast pyrolysis and hydroprocessing (FPH) has the lowest breakeven fuel price at 3.11$/gallon of gasoline equivalent (0.82$/liter of gasoline equivalent). With the projected energy prices, FPH investors could expect a 59% probability of loss. Stochastic dominance ranking is performed on the basis of return on investment. Most risk-averse decision makers would prefer FPH to other pathways. PMID:26454041
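
    A minimal Monte Carlo sketch of this style of analysis (here in plain Python rather than @Risk): uncertain capital cost, yield, and fuel price are sampled, and a probability of loss and a breakeven-price distribution are computed. All figures below are hypothetical placeholders, not the paper's pathway data.

        import numpy as np

        rng = np.random.default_rng(10)
        N = 100000

        # Hypothetical pathway economics (illustrative numbers only):
        capital = rng.triangular(350e6, 430e6, 560e6, N)   # total capital cost, $
        fuel_yield = rng.normal(55e6, 5e6, N)              # gallons gasoline-eq / yr
        fuel_price = rng.normal(2.50, 0.40, N)             # $/GGE, market uncertainty
        opex = 60e6                                        # fixed annual operating, $
        life, r = 20, 0.10

        annuity = (1 - (1 + r) ** -life) / r               # 20-year discount factor
        npv = -capital + annuity * (fuel_yield * fuel_price - opex)

        print("P(loss) =", np.mean(npv < 0))
        # Breakeven price: the fuel price making each draw's NPV exactly zero.
        breakeven = (capital / annuity + opex) / fuel_yield
        print("median breakeven $/GGE =", np.median(breakeven))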

  19. Stochastic image reconstruction for a dual-particle imaging system

    NASA Astrophysics Data System (ADS)

    Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Flaska, M.; Clarke, S. D.; Pozzi, S. A.; Tomanin, A.; Peerani, P.

    2016-02-01

    Stochastic image reconstruction has been applied to a dual-particle imaging system being designed for nuclear safeguards applications. The dual-particle imager (DPI) is a combined Compton-scatter and neutron-scatter camera capable of producing separate neutron and photon images. The stochastic origin ensembles (SOE) method was investigated as an imaging method for the DPI because only a minimal estimation of system response is required to produce images with quality that is comparable to common maximum-likelihood methods. This work contains neutron and photon SOE image reconstructions for a 252Cf point source, two mixed-oxide (MOX) fuel canisters representing point sources, and the MOX fuel canisters representing a distributed source. Simulation of the DPI using MCNPX-PoliMi is validated by comparison of simulated and measured results. Because image quality is dependent on the number of counts and iterations used, the relationship between these quantities is investigated.

  20. Stochastic reconstruction of sandstones

    PubMed

    Manwart; Torquato; Hilfer

    2000-07-01

    A simulated annealing algorithm is employed to generate a stochastic model for a Berea sandstone and a Fontainebleau sandstone, each with a prescribed two-point probability function, lineal-path function, and "pore size" distribution function. We find that the temperature decrease of the annealing has to be rather quick to yield isotropic and percolating configurations. A comparison of simple morphological quantities indicates good agreement between the reconstructions and the original sandstones. Also, the mean survival time of a random walker in the pore space is reproduced with good accuracy. However, a more detailed investigation by means of local porosity theory shows that there may be significant differences of the geometrical connectivity between the reconstructed and the experimental samples. PMID:11088546
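
    A minimal one-dimensional caricature of the reconstruction procedure: simulated annealing with porosity-preserving pixel swaps, driven by the mismatch of a two-point probability function against a reference medium, using the fast quench the abstract recommends. The medium, energy function, and schedule are illustrative assumptions only.

        import numpy as np

        rng = np.random.default_rng(11)
        L, phi = 200, 0.3                       # domain size and target porosity

        def two_point(img, rmax=20):
            # S2(r): probability that two points a distance r apart are both pore.
            return np.array([np.mean(img * np.roll(img, r)) for r in range(rmax)])

        # "Reference" medium whose statistics we try to reproduce: one pore cluster.
        ref = np.zeros(L)
        ref[: int(phi * L)] = 1.0
        target = two_point(ref)

        img = rng.permutation(ref)              # same porosity, random arrangement
        E = np.sum((two_point(img) - target) ** 2)
        T = 1e-3
        for step in range(20000):
            i, j = rng.integers(L, size=2)
            img[i], img[j] = img[j], img[i]     # porosity-preserving pixel swap
            E2 = np.sum((two_point(img) - target) ** 2)
            if E2 < E or rng.random() < np.exp((E - E2) / T):
                E = E2                          # accept the swap
            else:
                img[i], img[j] = img[j], img[i] # reject: undo the swap
            T *= 0.9997                         # quick quench, per the abstract
        print("final misfit:", E)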

  1. Advanced interacting sequential Monte Carlo sampling for inverse scattering

    NASA Astrophysics Data System (ADS)

    Giraud, F.; Minvielle, P.; Del Moral, P.

    2013-09-01

    The following electromagnetism (EM) inverse problem is addressed. It consists of estimating the local radioelectric properties of materials covering an object from global EM scattering measurements, at various incidences and wave frequencies. This large-scale ill-posed inverse problem is explored by an intensive exploitation of an efficient 2D Maxwell solver, distributed on high performance computing machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, from which Bayesian inference can be performed. Considering the radioelectric properties as a hidden dynamic stochastic process that evolves according to the frequency, it is shown how advanced Markov chain Monte Carlo methods—called sequential Monte Carlo or interacting particles—can take advantage of this structure and provide local EM property estimates.
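
    A minimal sequential Monte Carlo (bootstrap particle filter) sketch of the idea of tracking a hidden property as it evolves across frequencies: particles are propagated through assumed dynamics, reweighted against a nonlinear noisy measurement, and resampled. The state and measurement models below are hypothetical, not the Maxwell-solver metamodel of the paper.

        import numpy as np

        rng = np.random.default_rng(12)

        # Hidden "property" theta drifts slowly across frequency steps; each step
        # yields a noisy, nonlinear scattering-like measurement (toy model).
        T, N = 50, 5000
        theta_true = np.cumsum(0.05 * rng.normal(size=T)) + 1.0
        obs = theta_true ** 2 + 0.2 * rng.normal(size=T)

        particles = rng.normal(1.0, 0.5, N)
        est = []
        for t in range(T):
            particles += 0.05 * rng.normal(size=N)          # propagate prior dynamics
            w = np.exp(-0.5 * ((obs[t] - particles**2) / 0.2) ** 2)
            w = w + 1e-300                                  # guard against underflow
            w /= w.sum()
            est.append(np.sum(w * particles))               # posterior-mean estimate
            idx = rng.choice(N, N, p=w)                     # multinomial resampling
            particles = particles[idx]

        print("final tracking error:", abs(est[-1] - theta_true[-1]))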

  2. Nuclear pairing within a configuration-space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Lingle, Mark; Volya, Alexander

    2015-06-01

    Pairing correlations in nuclei play a decisive role in determining nuclear drip lines, binding energies, and many collective properties. In this work a new configuration-space Monte Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed: the absence of the fermionic sign problem, the probabilistic interpretation of quantum-mechanical amplitudes, and the ability to handle truly large-scale problems with defined precision and error control are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with nonconstant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and problems when the relevant configuration space is large.

  3. Monte Carlo approach to tissue-cell populations

    NASA Astrophysics Data System (ADS)

    Drasdo, D.; Kree, R.; McCaskill, J. S.

    1995-12-01

    We describe a stochastic dynamics of tissue cells with special emphasis on epithelial cells and the fibroblasts and fibrocytes of the connective tissue. Pattern formation and growth characteristics of such cell populations in culture are investigated numerically by Monte Carlo simulations for quasi-two-dimensional systems of cells. A number of quantitative predictions are obtained which may be confronted with experimental results. Furthermore we introduce several biologically motivated variants of our basic model and briefly discuss the simulation of two-dimensional analogs of two complex processes in tissues: the growth of a sarcoma across an epithelial boundary and the wound healing of a skin cut. As compared to other approaches, we find the Monte Carlo approach to tissue growth and structure to be particularly simple and flexible. It allows for a hierarchy of models reaching from global description of birth-death processes to very specific features of intracellular dynamics.

  4. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
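
    The sketch below loosely follows the idea described (stochastic gradients inside a BFGS-style curvature update, with a regularization that keeps the Hessian approximation well conditioned) on a noisy quadratic. It is an illustrative approximation under assumed parameters, not the exact RES update of the paper.

        import numpy as np

        rng = np.random.default_rng(14)

        # Strongly convex toy objective f(x) = 0.5 x^T A x with noisy gradients.
        d = 10
        A = np.diag(np.linspace(1.0, 20.0, d))
        grad = lambda x: A @ x + 0.1 * rng.normal(size=d)   # stochastic gradient

        x = np.ones(d)
        B = np.eye(d)                     # Hessian approximation
        delta, eps = 0.1, 1e-4
        g = grad(x)
        for k in range(500):
            step = 1.0 / (k + 10)
            # Regularized quasi-Newton direction.
            x_new = x - step * np.linalg.solve(B + delta * np.eye(d), g)
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g - delta * (x_new - x)   # modified secant pair
            if s @ y > eps * (s @ s):     # keep curvature bounded away from zero
                Bs = B @ s
                B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
            x, g = x_new, g_new

        print("||x|| after RES-like iterations:", np.linalg.norm(x))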

  5. On the evaluation of expected performance cost for partially observed closed-loop stochastic systems

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.; Eslami, M.

    1985-01-01

    New methods are presented for evaluating the expected performance cost of partially observed closed-loop stochastic systems. When the variances of the process statistics are small, a linearized model of the closed-loop stochastic system is defined for which the expected cost can be evaluated by recursion on a set of purely deterministic difference equations. When the variances of the process statistics are large, the linearized model can be used in the control variate method of variance reduction for reducing the number of sample paths required for effective Monte Carlo estimation.
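
    A minimal sketch of the control-variate idea the abstract invokes: a linearized surrogate with known mean is used to cancel most of the variance of a nonlinear cost estimate when the process variance is small. The cost function, its linearization, and the noise level are hypothetical choices for illustration.

        import numpy as np

        rng = np.random.default_rng(13)
        N = 20000

        # Nonlinear closed-loop "cost" and its linearized surrogate about x = 0.
        f = lambda x: np.sin(x) + 0.5 * x**2
        g = lambda x: x                      # linearization of sin(x); E[g(X)] = 0

        x = rng.normal(0.0, 0.3, N)          # small process variance, as discussed
        fx, gx = f(x), g(x)
        beta = np.cov(fx, gx)[0, 1] / gx.var(ddof=1)
        cv = fx - beta * (gx - 0.0)          # control-variate estimator

        print("crude :", fx.mean(), fx.std(ddof=1) / np.sqrt(N))
        print("c.v.  :", cv.mean(), cv.std(ddof=1) / np.sqrt(N))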

  6. Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids

    SciTech Connect

    Donev, A; Alder, B J; Garcia, A L

    2008-02-26

    A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.

  7. Adaptive hybrid simulations for multiscale stochastic reaction networks

    NASA Astrophysics Data System (ADS)

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-01

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of an SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
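
    For orientation, the exact direct-method SSA that such hybrid schemes seek to accelerate fits in a few lines. This is a generic textbook sketch with a hypothetical birth-death example, not the authors' adaptive partitioning code.

        import numpy as np

        def ssa(x0, stoich, rates, t_end, rng=None):
            """Gillespie's direct-method SSA for a small reaction network.

            x0     -- initial copy numbers, shape (n_species,)
            stoich -- state-change vectors, shape (n_reactions, n_species)
            rates  -- rates(x) -> propensities, shape (n_reactions,)
            """
            rng = np.random.default_rng(rng)
            t, x = 0.0, np.asarray(x0, dtype=float).copy()
            path = [(t, x.copy())]
            while t < t_end:
                a = rates(x)
                a0 = a.sum()
                if a0 <= 0.0:                                    # no reaction can fire
                    break
                t += rng.exponential(1.0 / a0)                   # time to next reaction
                j = np.searchsorted(np.cumsum(a), rng.random() * a0)  # which reaction fires
                x += stoich[j]
                path.append((t, x.copy()))
            return path

        # example: birth-death process, 0 -> X (rate k) and X -> 0 (rate g*X)
        k, g = 10.0, 0.1
        path = ssa([0], np.array([[1], [-1]]),
                   lambda x: np.array([k, g * x[0]]), t_end=50.0, rng=1)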

  8. Stochastic Evaluation of Riparian Vegetation Dynamics in River Channels

    NASA Astrophysics Data System (ADS)

    Miyamoto, H.; Kimura, R.; Toshimori, N.

    2013-12-01

    Vegetation overgrowth in sand bars and floodplains has been a serious problem for river management in Japan. From the viewpoints of flood control and ecological conservation, it is necessary to accurately predict the vegetation dynamics over a long period of time. In this study, we have developed a stochastic model for predicting the dynamics of trees in floodplains with emphasis on the interaction with flood impacts. The model consists of the following four processes in coupling ecohydrology with biogeomorphology: (i) stochastic behavior of flow discharge, (ii) hydrodynamics in a channel with vegetation, (iii) variation of riverbed topography and (iv) vegetation dynamics on the floodplain. In the model, the flood discharge is stochastically simulated using a Poisson process, one of the conventional approaches in hydrological time-series generation. The model for vegetation dynamics includes the effects of tree growth, mortality by flood impacts, and infant tree invasion. To determine the model parameters, vegetation conditions have been observed mainly before and after flood impacts since 2008 at a field site located between 23.2 and 24.0 km from the river mouth in Kako River, Japan. This site is one of the vegetation overgrowth locations in Kako River floodplains, where the predominant tree species are willows and bamboos. In this presentation, sensitivity of the vegetation overgrowth tendency is investigated in Kako River channels. Through the Monte Carlo simulation for several cross sections in Kako River, responses of the vegetated channels are stochastically evaluated in terms of the changes of discharge magnitude and channel geomorphology. The expectation and standard deviation of vegetation areal ratio are compared in the different channel cross sections for different river discharges and relative floodplain heights. The result shows that the vegetation status changes sensitively in channels with larger discharge and insensitively in the lower floodplain

  9. Adaptive hybrid simulations for multiscale stochastic reaction networks

    SciTech Connect

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-21

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of an SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.

  10. On the stochastic dependence between photomultipliers in the TDCR method.

    PubMed

    Bobin, C; Thiam, C; Chauvenet, B; Bouchard, J

    2012-04-01

    The TDCR method (Triple to Double Coincidence Ratio) is widely implemented in National Metrology Institutes for activity primary measurements based on liquid scintillation counting. The detection efficiency and thereby the activity are determined using a statistical and physical model. In this article, we propose to revisit the application of the classical TDCR model and its validity by introducing a prerequisite of stochastic independence between photomultiplier counting. In order to support the need for this condition, the demonstration is carried out by considering the simple case of a monoenergetic deposition in the scintillation cocktail. Simulations of triple and double coincidence counting are presented in order to point out the existence of stochastic dependence between photomultipliers that can be significant in the case of low-energy deposition in the scintillator. It is demonstrated that a problem of time dependence arises when the coincidence resolving time is shorter than the time distribution of scintillation photons; in addition, it is shown that this effect is at the origin of a bias in the detection efficiency calculation encountered for the standardization of (3)H. This investigation is extended to the study of geometric dependence between photomultipliers related to the position of light emission inside the scintillation vial (the volume of the vial is not considered in the classical TDCR model). In that case, triple and double coincidences are calculated using a stochastic TDCR model based on the Monte-Carlo simulation code Geant4. This stochastic approach is also applied to the standardization of (51)Cr by liquid scintillation; the difference observed in detection efficiencies calculated using the standard and stochastic models can be explained by such an effect of geometric dependence between photomultiplier channels. PMID:22244195

  11. Angular biasing in implicit Monte-Carlo

    SciTech Connect

    Zimmerman, G.B.

    1994-10-20

    Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise.

  12. A stochastic multi-symplectic scheme for stochastic Maxwell equations with additive noise

    SciTech Connect

    Hong, Jialin; Zhang, Liying

    2014-07-01

    In this paper we investigate a stochastic multi-symplectic method for stochastic Maxwell equations with additive noise. Based on the stochastic version of variational principle, we find a way to obtain the stochastic multi-symplectic structure of three-dimensional (3-D) stochastic Maxwell equations with additive noise. We propose a stochastic multi-symplectic scheme and show that it preserves the stochastic multi-symplectic conservation law and the local and global stochastic energy dissipative properties, which the equations themselves possess. Numerical experiments are performed to verify the numerical behaviors of the stochastic multi-symplectic scheme.

  13. Fuel flexible fuel injector

    SciTech Connect

    Tuthill, Richard S; Davis, Dustin W; Dai, Zhongtao

    2015-02-03

    A disclosed fuel injector provides mixing of fuel with airflow by surrounding a swirled fuel flow with first and second swirled airflows that ensure mixing prior to or upon entering the combustion chamber. Fuel tubes produce a central fuel flow along with a central airflow through a plurality of openings to generate the high velocity fuel/air mixture along the axis of the fuel injector in addition to the swirled fuel/air mixture.

  14. Monte Carlo portal dosimetry

    SciTech Connect

    Chin, P.W. . E-mail: mary.chin@physics.org

    2005-10-15

    This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verifications. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes was used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cm × 5 cm to 20 cm × 20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.

  15. Stochastic superparameterization in quasigeostrophic turbulence

    SciTech Connect

    Grooms, Ian; Majda, Andrew J.

    2014-08-15

    In this article we expand and develop the authors' recent proposed methodology for efficient stochastic superparameterization algorithms for geophysical turbulence. Geophysical turbulence is characterized by significant intermittent cascades of energy from the unresolved to the resolved scales resulting in complex patterns of waves, jets, and vortices. Conventional superparameterization simulates large scale dynamics on a coarse grid in a physical domain, and couples these dynamics to high-resolution simulations on periodic domains embedded in the coarse grid. Stochastic superparameterization replaces the nonlinear, deterministic eddy equations on periodic embedded domains by quasilinear stochastic approximations on formally infinite embedded domains. The result is a seamless algorithm which never uses a small scale grid and is far cheaper than conventional SP, but with significant success in difficult test problems. Various design choices in the algorithm are investigated in detail here, including decoupling the timescale of evolution on the embedded domains from the length of the time step used on the coarse grid, and sensitivity to certain assumed properties of the eddies (e.g. the shape of the assumed eddy energy spectrum). We present four closures based on stochastic superparameterization which elucidate the properties of the underlying framework: a ‘null hypothesis’ stochastic closure that uncouples the eddies from the mean, a stochastic closure with nonlinearly coupled eddies and mean, a nonlinear deterministic closure, and a stochastic closure based on energy conservation. The different algorithms are compared and contrasted on a stringent test suite for quasigeostrophic turbulence involving two-layer dynamics on a β-plane forced by an imposed background shear. The success of the algorithms developed here suggests that they may be fruitfully applied to more realistic situations. They are expected to be particularly useful in providing accurate and

  16. Stochastic roots of growth phenomena

    NASA Astrophysics Data System (ADS)

    De Lauro, E.; De Martino, S.; De Siena, S.; Giorno, V.

    2014-05-01

    We show that the Gompertz equation describes the evolution in time of the median of a geometric stochastic process. Therefore, we induce that the process itself generates the growth. This result allows us further to exploit a stochastic variational principle to take account of self-regulation of growth through feedback of relative density variations. The conceptually well defined framework so introduced shows its usefulness by suggesting a form of control of growth by exploiting external actions.

  17. Stochastic cooling in RHIC

    SciTech Connect

    Brennan J. M.; Blaskiewicz, M.; Mernick, K.

    2012-05-20

    The full 6-dimensional [x,x'; y,y'; z,z'] stochastic cooling system for RHIC was completed and operational for the FY12 Uranium-Uranium collider run. Cooling enhances the integrated luminosity of the Uranium collisions by a factor of 5, primarily by reducing the transverse emittances but also by cooling in the longitudinal plane to preserve the bunch length. The components have been deployed incrementally over the past several runs, beginning with longitudinal cooling, then cooling in the vertical planes but multiplexed between the Yellow and Blue rings, next cooling both rings simultaneously in vertical (the horizontal plane was cooled by betatron coupling), and now simultaneous horizontal cooling has been commissioned. The system operates between 5 and 9 GHz and, with 3 × 10^8 Uranium ions per bunch, produces a cooling half-time of approximately 20 minutes. The ultimate emittance is determined by the balance between cooling and emittance growth from Intra-Beam Scattering. Specific details of the apparatus and mathematical techniques for calculating its performance have been published elsewhere. Here we report on: the method of operation, results with beam, and comparison of results to simulations.

  18. Monte Carlo and quasi-Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Caflisch, Russel E.

    Monte Carlo is one of the most versatile and widely used numerical methods. Its convergence rate, O(N^{-1/2}), is independent of dimension, which shows Monte Carlo to be very robust but also slow. This article presents an introduction to Monte Carlo methods for integration problems, including convergence theory, sampling methods and variance reduction techniques. Accelerated convergence for Monte Carlo quadrature is attained using quasi-random (also called low-discrepancy) sequences, which are a deterministic alternative to random or pseudo-random sequences. The points in a quasi-random sequence are correlated to provide greater uniformity. The resulting quadrature method, called quasi-Monte Carlo, has a convergence rate of approximately O((log N)^k N^{-1}). For quasi-Monte Carlo, both theoretical error estimates and practical limitations are presented. Although the emphasis in this article is on integration, Monte Carlo simulation of rarefied gas dynamics is also discussed. In the limit of small mean free path (that is, the fluid dynamic limit), Monte Carlo loses its effectiveness because the collisional distance is much less than the fluid dynamic length scale. Computational examples are presented throughout the text to illustrate the theory. A number of open problems are described.
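
    The convergence rates quoted above are easy to observe numerically. A minimal comparison, assuming scipy >= 1.7 for its scipy.stats.qmc module and a smooth test integrand with known integral 1:

        import numpy as np
        from scipy.stats import qmc

        def f(x):
            # smooth test integrand on [0,1]^d whose exact integral is 1
            return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

        d, n = 5, 2**12
        mc_pts = np.random.default_rng(0).random((n, d))          # pseudo-random
        qmc_pts = qmc.Sobol(d, scramble=True, seed=0).random(n)   # low-discrepancy
        print("MC error: ", abs(f(mc_pts).mean() - 1.0))    # ~ O(n^{-1/2})
        print("QMC error:", abs(f(qmc_pts).mean() - 1.0))   # ~ O((log n)^d n^{-1})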

  19. Numerical solution of the Stratonovich- and Ito–Euler equations: Application to the stochastic piston problem

    SciTech Connect

    Zhang, Zhongqiang; Yang, Xiu; Lin, Guang; Karniadakis, George Em

    2013-03-01

    We consider a piston with a velocity perturbed by Brownian motion moving into a straight tube filled with a perfect gas at rest. The shock generated ahead of the piston can be located by solving the one-dimensional Euler equations driven by white noise using the Stratonovich or Ito formulations. We approximate the Brownian motion with its spectral truncation and subsequently apply stochastic collocation using either sparse grid or the quasi-Monte Carlo (QMC) method. In particular, we first transform the Euler equations with an unsteady stochastic boundary into stochastic Euler equations over a fixed domain with a time-dependent stochastic source term. We then solve the transformed equations by splitting them up into two parts, i.e., a ‘deterministic part’ and a ‘stochastic part’. Numerical results verify the Stratonovich–Euler and Ito–Euler models against stochastic perturbation results, and demonstrate the efficiency of sparse grid and QMC for small and large random piston motions, respectively. The variance of shock location of the piston grows cubically in the case of white noise in contrast to colored noise reported in [1], where the variance of shock location grows quadratically with time for short times and linearly for longer times.
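
    The spectral truncation of Brownian motion mentioned above can be written down directly from the Karhunen-Loeve expansion on [0, T]. A minimal generic sketch (the expansion itself, not the paper's coupling to the Euler solver):

        import numpy as np

        def brownian_kl(t, K, rng=None):
            """Karhunen-Loeve (spectral) truncation of Brownian motion on [0, T]:

            W(t) ~ sum_{k=1..K} xi_k * sqrt(2T) * sin((k-1/2) pi t / T) / ((k-1/2) pi)
            with xi_k i.i.d. standard normal.
            """
            rng = np.random.default_rng(rng)
            T = t[-1]
            xi = rng.standard_normal(K)
            k = np.arange(1, K + 1) - 0.5                  # k - 1/2
            modes = np.sqrt(2.0 * T) * np.sin(np.outer(t, k) * np.pi / T) / (k * np.pi)
            return modes @ xi                              # one approximate path

        t = np.linspace(0.0, 1.0, 501)
        W = brownian_kl(t, K=100, rng=0)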

  20. A probabilistic graphical model approach to stochastic multiscale partial differential equations

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas; Center for Applied Mathematics, Cornell University, 657 Frank H.T. Rhodes Hall, Ithaca, NY 14853

    2013-10-01

    We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.

  1. Stochastic effects in a thermochemical system with Newtonian heat exchange

    NASA Astrophysics Data System (ADS)

    Nowakowski, B.; Lemarchand, A.

    2001-12-01

    We develop a mesoscopic description of stochastic effects in the Newtonian heat exchange between a diluted gas system and a thermostat. We explicitly study the homogeneous Semenov model involving a thermochemical reaction and neglecting consumption of reactants. The master equation includes a transition rate for the thermal transfer process, which is derived on the basis of the statistics for inelastic collisions between gas particles and walls of the thermostat. The main assumption is that the perturbation of the Maxwellian particle velocity distribution can be neglected. The transition function for the thermal process admits a continuous spectrum of temperature changes, and consequently, the master equation has a complicated integro-differential form. We perform Monte Carlo simulations based on this equation to study the stochastic effects in the Semenov system in the explosive regime. The dispersion of ignition times is calculated as a function of system size. For sufficiently small systems, the probability distribution of temperature displays transient bimodality during the ignition period. The results of the stochastic description are successfully compared with those of direct simulations of microscopic particle dynamics.

  2. Fluorescence Correlation Spectroscopy and Nonlinear Stochastic Reaction-Diffusion

    SciTech Connect

    Del Razo, Mauricio; Pan, Wenxiao; Qian, Hong; Lin, Guang

    2014-05-30

    The currently existing theory of fluorescence correlation spectroscopy (FCS) is based on the linear fluctuation theory originally developed by Einstein, Onsager, Lax, and others as a phenomenological approach to equilibrium fluctuations in bulk solutions. For mesoscopic reaction-diffusion systems with nonlinear chemical reactions among a small number of molecules, a situation often encountered in single-cell biochemistry, it is expected that FCS time correlation functions of a reaction-diffusion system can deviate from the classic results of Elson and Magde [Biopolymers (1974) 13:1-27]. We first discuss this nonlinear effect for reaction systems without diffusion. For nonlinear stochastic reaction-diffusion systems there are no closed solutions; therefore, stochastic Monte-Carlo simulations are carried out. We show that the deviation is small for a simple bimolecular reaction; the most significant deviations occur when the number of molecules is small and of the same order. Extending Delbrück-Gillespie's theory for stochastic nonlinear reactions with rapid stirring to reaction-diffusion systems provides a mesoscopic model for chemical and biochemical reactions at the nanometric and mesoscopic levels, such as in a single biological cell.

  3. A stochastic transcriptional switch model for single cell imaging data

    PubMed Central

    Hey, Kirsty L.; Momiji, Hiroshi; Featherstone, Karen; Davis, Julian R.E.; White, Michael R.H.; Rand, David A.; Finkenstädt, Bärbel

    2015-01-01

    Gene expression is made up of inherently stochastic processes within single cells and can be modeled through stochastic reaction networks (SRNs). In particular, SRNs capture the features of intrinsic variability arising from intracellular biochemical processes. We extend current models for gene expression to allow the transcriptional process within an SRN to follow a random step or switch function which may be estimated using reversible jump Markov chain Monte Carlo (MCMC). This stochastic switch model provides a generic framework to capture many different dynamic features observed in single cell gene expression. Inference for such SRNs is challenging due to the intractability of the transition densities. We derive a model-specific birth–death approximation and study its use for inference in comparison with the linear noise approximation where both approximations are considered within the unifying framework of state-space models. The methodology is applied to synthetic as well as experimental single cell imaging data measuring expression of the human prolactin gene in pituitary cells. PMID:25819987

  4. A stochastic transcriptional switch model for single cell imaging data.

    PubMed

    Hey, Kirsty L; Momiji, Hiroshi; Featherstone, Karen; Davis, Julian R E; White, Michael R H; Rand, David A; Finkenstädt, Bärbel

    2015-10-01

    Gene expression is made up of inherently stochastic processes within single cells and can be modeled through stochastic reaction networks (SRNs). In particular, SRNs capture the features of intrinsic variability arising from intracellular biochemical processes. We extend current models for gene expression to allow the transcriptional process within an SRN to follow a random step or switch function which may be estimated using reversible jump Markov chain Monte Carlo (MCMC). This stochastic switch model provides a generic framework to capture many different dynamic features observed in single cell gene expression. Inference for such SRNs is challenging due to the intractability of the transition densities. We derive a model-specific birth-death approximation and study its use for inference in comparison with the linear noise approximation where both approximations are considered within the unifying framework of state-space models. The methodology is applied to synthetic as well as experimental single cell imaging data measuring expression of the human prolactin gene in pituitary cells. PMID:25819987

  5. Global parameter estimation methods for stochastic biochemical systems

    PubMed Central

    2010-01-01

    Background The importance of stochasticity in cellular processes having low number of molecules has resulted in the development of stochastic models such as chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for the end-applications like analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible to single molecular levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions The parameter estimation methodologies
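
    As a sketch of the cumulative density function distance idea described above, one can compare empirical CDFs of simulated and measured populations on a fixed grid; the simulate function and parameter grid named in the comments are hypothetical placeholders:

        import numpy as np

        def cdf_distance(model_samples, data_samples, grid):
            """Squared distance between two empirical CDFs evaluated on a fixed grid."""
            Fm = np.searchsorted(np.sort(model_samples), grid, side="right") / len(model_samples)
            Fd = np.searchsorted(np.sort(data_samples), grid, side="right") / len(data_samples)
            return np.mean((Fm - Fd) ** 2)

        # Fitting then amounts to minimizing this objective over the kinetic
        # parameters, regenerating model_samples with a finite number of Monte
        # Carlo realizations (e.g. SSA runs) at each candidate value:
        # best = min(theta_grid, key=lambda th: cdf_distance(simulate(th), data, grid))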

  6. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
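
    Once a spatially discretized fission matrix has been tallied, the fundamental eigenpair follows from ordinary power iteration. A minimal sketch, assuming a tallied matrix F is already available (this is generic linear algebra for orientation, not MCNP's implementation):

        import numpy as np

        def power_iteration(F, tol=1e-10, max_iter=10_000):
            """Fundamental eigenpair of a tallied fission matrix F[i, j] =
            expected fission neutrons born in region i per fission neutron born in j."""
            s = np.ones(F.shape[0]) / F.shape[0]       # initial source guess
            k = 1.0
            for _ in range(max_iter):
                s_new = F @ s
                k_new = s_new.sum() / s.sum()          # eigenvalue estimate (k_eff)
                s_new /= s_new.sum()
                if abs(k_new - k) < tol:
                    break
                s, k = s_new, k_new
            return k_new, s_new

        # the dominance ratio |lambda_1 / lambda_0| governs how slowly the source
        # converges; for small tallied matrices it can be checked directly with
        # np.linalg.eigvals(F)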

  7. Stochastic uncertainty analysis for unconfined flow systems

    USGS Publications Warehouse

    Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming

    2006-01-01

    A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (ln K_S) is first expanded into a series in terms of orthogonal Gaussian standard random variables, with the coefficients obtained from the eigenvalues and eigenfunctions of the covariance function of ln K_S. Next, head h is decomposed as a perturbation expansion series Σ h^(m), where h^(m) represents the mth-order head term with respect to the standard deviation of ln K_S. Then h^(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients h^(m)_{i1,i2,...,im} are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on h^(m)_{i1,i2,...,im}. A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort as compared to the traditional Monte Carlo simulation technique. Copyright 2006 by the American Geophysical Union.
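
    The Karhunen-Loeve step of such an approach is straightforward to sketch at the grid level: form the covariance matrix of ln K_S on the grid, take its leading eigenpairs, and combine them with standard Gaussian variables. A minimal 1-D illustration with an assumed exponential covariance:

        import numpy as np

        # discretized exponential covariance of ln K_S on a 1-D grid
        x = np.linspace(0.0, 1.0, 256)
        corr_len, var = 0.2, 1.0
        C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

        vals, vecs = np.linalg.eigh(C)                 # eigenpairs of the covariance
        idx = np.argsort(vals)[::-1]                   # sort descending
        vals, vecs = vals[idx], vecs[:, idx]

        M = 20                                         # truncation order
        # fraction of the grid-level variance captured by the leading M modes
        captured = vals[:M].sum() / vals.sum()

        xi = np.random.default_rng(0).standard_normal(M)
        lnK = vecs[:, :M] @ (np.sqrt(vals[:M]) * xi)   # one realization of ln K_S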

  8. Frost in Charitum Montes

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-387, 10 June 2003

    This is a Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle view of the Charitum Montes, south of Argyre Planitia, in early June 2003. The seasonal south polar frost cap, composed of carbon dioxide, has been retreating southward through this area since spring began a month ago. The bright features toward the bottom of this picture are surfaces covered by frost. The picture is located near 57°S, 43°W. North is at the top, south is at the bottom. Sunlight illuminates the scene from the upper left. The area shown is about 217 km (135 miles) wide.

  9. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  10. Stacking with stochastic cooling

    NASA Astrophysics Data System (ADS)

    Caspers, Fritz; Möhl, Dieter

    2004-10-01

    Accumulation of large stacks of antiprotons or ions with the aid of stochastic cooling is more delicate than cooling a constant intensity beam. Basically the difficulty stems from the fact that the optimized gain and the cooling rate are inversely proportional to the number of particles 'seen' by the cooling system. Therefore, to maintain fast stacking, the newly injected batch has to be strongly 'protected' from the Schottky noise of the stack. Vice versa the stack has to be efficiently 'shielded' against the high gain cooling system for the injected beam. In the antiproton accumulators with stacking ratios up to 10^5 the problem is solved by radial separation of the injection and the stack orbits in a region of large dispersion. An array of several tapered cooling systems with a matched gain profile provides a continuous particle flux towards the high-density stack core. Shielding of the different systems from each other is obtained both through the spatial separation and via the revolution frequencies (filters). In the 'old AA', where the antiproton collection and stacking was done in one single ring, the injected beam was further shielded during cooling by means of a movable shutter. The complexity of these systems is very high. For more modest stacking ratios, one might use azimuthal rather than radial separation of stack and injected beam. Schematically half of the circumference would be used to accept and cool new beam and the remainder to house the stack. Fast gating is then required between the high gain cooling of the injected beam and the low gain stack cooling. RF-gymnastics are used to merge the pre-cooled batch with the stack, to re-create free space for the next injection, and to capture the new batch. This scheme is less demanding for the storage ring lattice, but at the expense of some reduction in stacking rate. The talk reviews the 'radial' separation schemes and also gives some considerations to the 'azimuthal' schemes.

  11. A method for stochastic constrained optimization using derivative-free surrogate pattern search and collocation

    SciTech Connect

    Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.

    2010-06-20

    Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued with uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. The stochastic collocation scheme is employed for stochastic variables whereas Kriging based surrogate functions are employed for the cost function. This approach is tested on four numerical optimization problems and is shown to have significant improvement in efficiency over traditional Monte-Carlo schemes. Problems with multiple probabilistic constraints are also discussed.

  12. A Stochastic-Variational Model for Soft Mumford-Shah Segmentation

    PubMed Central

    2006-01-01

    In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059

  13. Stochastic response and bifurcation of periodically driven nonlinear oscillators by the generalized cell mapping method

    NASA Astrophysics Data System (ADS)

    Han, Qun; Xu, Wei; Sun, Jian-Qiao

    2016-09-01

    The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The transition probability density functions over a small fraction of the period are constructed by the STGA scheme and then assembled into the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.

  14. Improved diffusion Monte Carlo and the Brownian fan

    NASA Astrophysics Data System (ADS)

    Weare, J.; Hairer, M.

    2012-12-01

    Diffusion Monte Carlo (DMC) is a workhorse of stochastic computing. It was invented forty years ago as the central component in a Monte Carlo technique for estimating various characteristics of quantum mechanical systems. Since then it has been applied in a huge number of fields, often as a central component in sequential Monte Carlo techniques (e.g. the particle filter). DMC computes averages of some underlying stochastic dynamics weighted by a functional of the path of the process. The weight functional could represent the potential term in a Feynman-Kac representation of a partial differential equation (as in quantum Monte Carlo) or it could represent the likelihood of a sequence of noisy observations of the underlying system (as in particle filtering). DMC alternates between an evolution step in which a collection of samples of the underlying system are evolved for some short time interval, and a branching step in which, according to the weight functional, some samples are copied and some samples are eliminated. Unfortunately, for certain choices of the weight functional, DMC fails to have a meaningful limit as one decreases the evolution time interval between branching steps. We propose a modification of the standard DMC algorithm. The new algorithm has a lower variance per workload, regardless of the regime considered. In particular, it makes it feasible to use DMC in situations where the "naive" generalization of the standard algorithm would be impractical, due to an exponential explosion of its variance. We numerically demonstrate the effectiveness of the new algorithm on a standard rare event simulation problem (probability of an unlikely transition in a Lennard-Jones cluster), as well as a high-frequency data assimilation problem. We then provide a detailed heuristic explanation of why, in the case of rare event simulation, the new algorithm is expected to converge to a limiting process as the underlying stepsize goes to 0. This is shown
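
    The evolve/branch cycle described above is compact enough to sketch. A minimal, unoptimized DMC step for a 1-D harmonic potential (the drift, potential, and reference energy are illustrative choices, and production codes additionally adjust e_ref to control the walker population):

        import numpy as np

        def dmc_step(walkers, dt, drift, potential, e_ref, rng):
            """One diffusion Monte Carlo cycle: diffuse walkers, then branch on weights."""
            # evolution: Brownian move (plus an optional drift term)
            walkers = walkers + drift(walkers) * dt \
                      + np.sqrt(dt) * rng.standard_normal(walkers.shape)
            # branching: weight each walker by the Feynman-Kac factor
            w = np.exp(-dt * (potential(walkers) - e_ref))
            n_copies = (w + rng.random(len(w))).astype(int)   # stochastic rounding
            return np.repeat(walkers, n_copies, axis=0)       # copy / eliminate

        # example: 1-D harmonic oscillator, V(x) = x^2 / 2, exact E0 = 0.5
        rng = np.random.default_rng(0)
        walkers = rng.standard_normal((1000, 1))
        for _ in range(2000):
            walkers = dmc_step(walkers, 0.01, lambda x: 0.0 * x,
                               lambda x: 0.5 * (x**2).sum(axis=1), e_ref=0.5, rng=rng)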

  15. A Stochastic Collocation Algorithm for Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method collapses those summations to a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides, as a numerical example, the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
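
    A minimal sketch of the collocation idea: evaluate the model only at quadrature nodes of the input distribution and recover moments as weighted sums. Here a single standard normal parameter and an arbitrary placeholder response u are assumed:

        import numpy as np

        def u(xi):
            """Model response for one value of the random parameter (placeholder)."""
            return np.exp(0.3 * xi) / (1.0 + xi**2)

        # Gauss-Hermite nodes/weights for a standard normal parameter
        # (probabilists' convention: weight function exp(-x^2/2))
        nodes, weights = np.polynomial.hermite_e.hermegauss(16)
        weights = weights / np.sqrt(2.0 * np.pi)    # normalize to a probability measure

        mean = sum(w * u(x) for x, w in zip(nodes, weights))
        second = sum(w * u(x)**2 for x, w in zip(nodes, weights))
        var = second - mean**2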

  16. Enhanced algorithms for stochastic programming

    SciTech Connect

    Krishna, A.S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function; this achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to provide a starting point and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.

  17. Stochastic simulation in systems biology

    PubMed Central

    Székely, Tamás; Burrage, Kevin

    2014-01-01

    Natural systems are, almost by definition, heterogeneous: this can be either a boon or an obstacle to be overcome, depending on the situation. Traditionally, when constructing mathematical models of these systems, heterogeneity has typically been ignored, despite its critical role. However, in recent years, stochastic computational methods have become commonplace in science. They are able to appropriately account for heterogeneity; indeed, they are based around the premise that systems inherently contain at least one source of heterogeneity (namely, intrinsic heterogeneity). In this mini-review, we give a brief introduction to theoretical modelling and simulation in systems biology and discuss the three different sources of heterogeneity in natural systems. Our main topic is an overview of stochastic simulation methods in systems biology. There are many different types of stochastic methods. We focus on one group that has become especially popular in systems biology, biochemistry, chemistry and physics. These discrete-state stochastic methods do not follow individuals over time; rather they track only total populations. They also assume that the volume of interest is spatially homogeneous. We give an overview of these methods, with a discussion of the advantages and disadvantages of each, and suggest when each is more appropriate to use. We also include references to software implementations of them, so that beginners can quickly start using stochastic methods for practical problems of interest. PMID:25505503
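
    As one concrete example of the discrete-state, population-level methods surveyed here, a bare-bones fixed-step tau-leaping routine is sketched below (the birth-death rates and stoichiometry are illustrative; production codes select tau adaptively and handle negative populations more carefully):

        import numpy as np

        def tau_leap(x0, stoich, rates, t_end, tau=0.01, rng=None):
            """Fixed-step tau-leaping: fire Poisson numbers of each reaction per step.
            Faster than exact SSA when propensities are large; accuracy is set by tau."""
            rng = np.random.default_rng(rng)
            x, t = np.asarray(x0, dtype=float).copy(), 0.0
            while t < t_end:
                a = rates(x)
                k = rng.poisson(a * tau)             # reaction counts in this leap
                x = np.maximum(x + k @ stoich, 0.0)  # crude guard against negatives
                t += tau
            return x

        # same birth-death example as above: 0 -> X (rate 10) and X -> 0 (rate 0.1*X)
        x_end = tau_leap([0], np.array([[1], [-1]]),
                         lambda x: np.array([10.0, 0.1 * x[0]]), t_end=50.0)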

  18. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. PMID:26133418
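
    The Sobol-Hoeffding machinery underlying this work can be illustrated with the generic pick-freeze estimator of a first-order sensitivity index for independent inputs; this is the textbook estimator, not the paper's channel-level Poisson reformulation:

        import numpy as np

        def first_order_sobol(f, d, i, n=100_000, rng=None):
            """Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i."""
            rng = np.random.default_rng(rng)
            A, B = rng.random((n, d)), rng.random((n, d))
            AB = B.copy()
            AB[:, i] = A[:, i]        # the two runs share only coordinate i
            yA, yAB = f(A), f(AB)
            num = np.mean(yA * yAB) - np.mean(yA) * np.mean(yAB)
            return num / np.var(yA)

        f = lambda X: X[:, 0] + 2.0 * X[:, 1]   # Var = 5/12 on U[0,1]^2
        print(first_order_sobol(f, d=2, i=0))   # exact S_0 = 1/5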

  19. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    SciTech Connect

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to have such a long CPU time. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
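
    The square-root schedule itself is simple to state in code. The sketch below varies only the cooling schedule inside a plain Metropolis annealer; it omits the stochastic approximation weighting that gives the SAA algorithm its convergence guarantees:

        import numpy as np

        def anneal_sqrt_cooling(f, x0, n_iter=50_000, t0=1.0, scale=0.5, rng=None):
            """Metropolis-style annealing with the square-root schedule T_k = t0 / sqrt(k)."""
            rng = np.random.default_rng(rng)
            x, fx = np.asarray(x0, float), f(np.asarray(x0, float))
            best, fbest = x.copy(), fx
            for k in range(1, n_iter + 1):
                T = t0 / np.sqrt(k)                       # square-root cooling
                y = x + scale * rng.standard_normal(x.shape)
                fy = f(y)
                if fy <= fx or rng.random() < np.exp(-(fy - fx) / T):
                    x, fx = y, fy                         # accept the move
                    if fx < fbest:
                        best, fbest = x.copy(), fx
            return best, fbest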

  20. Comparison of effects of copropagated and precomputed atmosphere profiles on Monte Carlo trajectory simulation

    NASA Technical Reports Server (NTRS)

    Queen, Eric M.; Omara, Thomas M.

    1990-01-01

    A realization of a stochastic atmosphere model for use in simulations is presented. The model provides pressure, density, temperature, and wind velocity as a function of latitude, longitude, and altitude, and is implemented in a three-degree-of-freedom simulation package. This implementation is used in the Monte Carlo simulation of an aeroassisted orbital transfer maneuver and results are compared to those of a more traditional approach.

  1. Grover search algorithm with Rydberg-blockaded atoms: quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Petrosyan, David; Saffman, Mark; Mølmer, Klaus

    2016-05-01

    We consider the Grover search algorithm implementation for a quantum register of size N = 2^k using k (or k+1) microwave- and laser-driven Rydberg-blockaded atoms, following the proposal by Mølmer et al (2011 J. Phys. B 44 184016). We suggest some simplifications for the microwave and laser couplings, and analyze the performance of the algorithm for up to k = 4 multilevel atoms under realistic experimental conditions using quantum stochastic (Monte Carlo) wavefunction simulations.

  2. Stochastic generation of hourly rainstorm events in Johor

    SciTech Connect

    Nojumuddin, Nur Syereena; Yusof, Fadhilah; Yusop, Zulkifli

    2015-02-03

    Engineers and researchers in water-related studies are often faced with the problem of having insufficiently long rainfall records. Practical and effective methods must be developed to generate unavailable data from limited available data. Therefore, this paper presents a Monte-Carlo based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best fit of storm characteristics. Hence, by using the Maximum Likelihood Estimation (MLE) and Anderson Darling goodness-of-fit test, lognormal appeared to be the best rainfall distribution. Therefore, the Monte Carlo simulation based on lognormal distribution was used in the study. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of the observed rainstorm events under 10 years and simulated rainstorm events under 30 years of rainfall records with those under the entire 40 years of observed rainfall data, based on the hourly rainfall data at the station J1 in Johor over the period of 1972-2011. The absolute percentage error of the duration-depth, duration-inter-event time and depth-inter-event time was used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity-duration frequency in Johor.
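
    A minimal sketch of this fit-then-simulate workflow with scipy, assuming a hypothetical file of storm depths; since scipy's Anderson-Darling test has no lognormal option, the check is applied to the log-transformed data, which is normal exactly when the data are lognormal:

        import numpy as np
        from scipy import stats

        depths = np.loadtxt("storm_depths_mm.txt")   # hypothetical record of storm depths

        # MLE fit of a lognormal distribution (location fixed at zero)
        shape, loc, scale = stats.lognorm.fit(depths, floc=0.0)

        # Anderson-Darling goodness-of-fit via the log-transform
        ad = stats.anderson(np.log(depths), dist="norm")

        # Monte Carlo generation of synthetic storm depths from the fitted model
        synthetic = stats.lognorm.rvs(shape, loc=loc, scale=scale,
                                      size=10_000, random_state=0)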

  3. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
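
    The deterministic skeleton of a power method for an operator available only through matrix-vector products is sketched below; the Monte Carlo Power Method of the thesis additionally samples those products stochastically, which is omitted here:

        import numpy as np

        def matfree_power_method(matvec, n, n_iter=200, rng=None):
            """Dominant eigenvalue of an operator known only through matvec calls."""
            rng = np.random.default_rng(rng)
            v = rng.standard_normal(n)
            v /= np.linalg.norm(v)
            for _ in range(n_iter):
                w = matvec(v)
                lam = v @ w                  # Rayleigh quotient estimate
                v = w / np.linalg.norm(w)
            return lam, v

        # example: operator supplied only as a closure, never stored explicitly here
        A = np.random.default_rng(0).standard_normal((200, 200))
        A = A @ A.T                          # symmetric test case
        lam, v = matfree_power_method(lambda x: A @ x, n=200)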

  4. Investigation of stochastic radiation transport methods in random heterogeneous mixtures

    NASA Astrophysics Data System (ADS)

    Reinert, Dustin Ray

    Among the most formidable challenges facing our world is the need for safe, clean, affordable energy sources. Growing concerns over global warming induced climate change and the rising costs of fossil fuels threaten conventional means of electricity production and are driving the current nuclear renaissance. One concept at the forefront of international development efforts is the High Temperature Gas-Cooled Reactor (HTGR). With numerous passive safety features and a meltdown-proof design capable of attaining high thermodynamic efficiencies for electricity generation as well as high temperatures useful for the burgeoning hydrogen economy, the HTGR is an extremely promising technology. Unfortunately, the fundamental understanding of neutron behavior within HTGR fuels lags far behind that of more conventional water-cooled reactors. HTGRs utilize a unique heterogeneous fuel element design consisting of thousands of tiny fissile fuel kernels randomly mixed with a non-fissile graphite matrix. Monte Carlo neutron transport simulations of the HTGR fuel element geometry in its full complexity are infeasible and this has motivated the development of more approximate computational techniques. A series of MATLAB codes was written to perform Monte Carlo simulations within HTGR fuel pebbles to establish a comprehensive understanding of the parameters under which the accuracy of the approximate techniques diminishes. This research identified the accuracy of the chord length sampling method to be a function of the matrix scattering optical thickness, the kernel optical thickness, and the kernel packing density. Two new Monte Carlo methods designed to focus the computational effort upon the parameter conditions shown to contribute most strongly to the overall computational error were implemented and evaluated. An extended memory chord length sampling routine that recalls a neutron's prior material traversals was demonstrated to be effective in fixed source calculations containing
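
    A toy version of chord length sampling conveys the idea: instead of building an explicit realization of the random medium, material segment lengths are drawn from exponential chord-length distributions on the fly. The 1-D slab below, with a void matrix and purely absorbing kernels, is an illustrative simplification, not the dissertation's code:

        import numpy as np

        def chord_length_sampling(mu_m, mu_k, sig_k, slab, n=100_000, rng=None):
            """Toy 1-D chord length sampling through a binary stochastic slab.

            mu_m, mu_k -- mean chord lengths of matrix and kernel material
            sig_k      -- total cross section of the kernel (matrix treated as void)
            Returns the fraction of neutrons transmitted without a collision.
            """
            rng = np.random.default_rng(rng)
            transmitted = 0
            for _ in range(n):
                x, in_kernel = 0.0, False
                while True:
                    mu = mu_k if in_kernel else mu_m
                    chord = rng.exponential(mu)           # sampled material segment
                    if in_kernel:
                        d_coll = rng.exponential(1.0 / sig_k)
                        if d_coll < chord and x + d_coll < slab:
                            break                         # collision inside this kernel
                    x += chord
                    if x >= slab:                         # escaped the slab
                        transmitted += 1
                        break
                    in_kernel = not in_kernel             # cross into the other material
            return transmitted / n

        # e.g. transmission through a slab 10 matrix mean-chords thick
        print(chord_length_sampling(mu_m=1.0, mu_k=0.1, sig_k=5.0, slab=10.0))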

  5. Stochastic decision analysis

    NASA Technical Reports Server (NTRS)

    Lacksonen, Thomas A.

    1994-01-01

    Small space flight project design at NASA Langley Research Center goes through a multi-phase process from preliminary analysis to flight operations. The process ensures that each system achieves its technical objectives with demonstrated quality and within planned budgets and schedules. A key technical component of early phases is decision analysis, which is a structured procedure for determining the best of a number of feasible concepts based upon project objectives. Feasible system concepts are generated by the designers and analyzed for schedule, cost, risk, and technical measures. Each performance measure value is normalized between the best and worst values and a weighted average score of all measures is calculated for each concept. The concept(s) with the highest scores are retained, while others are eliminated from further analysis. This project automated and enhanced the decision analysis process. Automation of the decision analysis process was done by creating a user-friendly, menu-driven, spreadsheet macro based decision analysis software program. The program contains data entry dialog boxes, automated data and output report generation, and automated output chart generation. The enhancements to the decision analysis process permit stochastic data entry and analysis. Rather than enter single measure values, the designers enter the range and most likely value for each measure and concept. The data can be entered at the system or subsystem level. System level data can be calculated as either sum, maximum, or product functions of the subsystem data. For each concept, the probability distributions are approximated for each measure and the total score for each concept as either constant, triangular, normal, or log-normal distributions. Based on these distributions, formulas are derived for the probability that the concept meets any given constraint, the probability that the concept meets all constraints, and the probability that the concept is within a given
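
    A minimal Monte Carlo sketch of the stochastic scoring step, with hypothetical measures, weights, and (worst, most likely, best) triangular inputs:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # hypothetical concept with three performance measures, each entered as
        # (worst, most likely, best) and sampled from a triangular distribution
        measures = {"cost":     (0.2, 0.5, 0.9),
                    "schedule": (0.1, 0.6, 0.8),
                    "risk":     (0.3, 0.4, 0.7)}
        weights = {"cost": 0.5, "schedule": 0.3, "risk": 0.2}

        samples = {m: rng.triangular(lo, mode, hi, n)
                   for m, (lo, mode, hi) in measures.items()}
        score = sum(weights[m] * samples[m] for m in measures)  # weighted total score

        # probability that the concept meets a given score constraint
        print("P(score >= 0.5) =", (score >= 0.5).mean())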

  6. Stochastic determination of matrix determinants.

    PubMed

    Dorn, Sebastian; Ensslin, Torsten A

    2015-07-01

    Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations—matrices—acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination. PMID:26274302
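
    One integral representation of the kind referred to above can be written as log det A = ∫₀¹ tr[(A − I)(I + t(A − I))⁻¹] dt, whose trace terms can be probed stochastically. The sketch below is a schematic reconstruction under that representation, not the authors' code: it uses Hutchinson (Rademacher) probing for the traces and, for clarity, dense solves where a truly matrix-free setting would use an iterative solver needing only A @ v.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_logdet(A, n_probes=64, n_quad=16):
    """Probing estimate of log det(A) for symmetric positive definite A via
    log det(A) = int_0^1 tr[(A - I)(I + t(A - I))^{-1}] dt,
    with each trace estimated by Hutchinson probing (Rademacher vectors)."""
    d = A.shape[0]
    I, B = np.eye(d), A - np.eye(d)
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
    x, w = np.polynomial.legendre.leggauss(n_quad)
    t, w = 0.5 * (x + 1.0), 0.5 * w
    est = 0.0
    for ti, wi in zip(t, w):
        z = rng.choice([-1.0, 1.0], size=(d, n_probes))    # Rademacher probes
        y = np.linalg.solve(I + ti * B, z)                 # (I + t B)^{-1} z
        est += wi * np.mean(np.sum(z * (B @ y), axis=0))   # ~ tr[B (I + t B)^{-1}]
    return est

# Quick check against the exact value on a random SPD matrix.
M = rng.standard_normal((50, 50))
A = M @ M.T / 50 + np.eye(50)
print(f"stochastic: {stochastic_logdet(A):.3f}   exact: {np.linalg.slogdet(A)[1]:.3f}")
```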

  7. Mechanical autonomous stochastic heat engines

    NASA Astrophysics Data System (ADS)

    Serra-Garcia, Marc; Foehr, Andre; Moleron, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara

    Stochastic heat engines extract work from the Brownian motion of a set of particles out of equilibrium. So far, experimental demonstrations of stochastic heat engines have required extreme operating conditions or nonautonomous external control systems. In this talk, we will present a simple, purely classical, autonomous stochastic heat engine that uses the well-known tension-induced nonlinearity in a string. Our engine operates between two heat baths out of equilibrium, and transfers energy from the hot bath to a work reservoir. This energy transfer occurs even if the work reservoir is at a higher temperature than the hot reservoir. The talk will cover a theoretical investigation and experimental results on a macroscopic setup subject to external noise excitations. This system presents an opportunity for the study of non-equilibrium thermodynamics and is an interesting candidate for innovative energy conversion devices.

  8. Stochastic Control of Pharmacokinetic Systems

    PubMed Central

    Schumitzky, Alan; Milman, Mark; Katz, Darryl; D'Argenio, David Z.; Jelliffe, Roger W.

    1983-01-01

    The application of stochastic control theory to the clinical problem of designing a dosage regimen for a pharmacokinetic system is considered. This involves defining a patient-dependent pharmacokinetic model and a clinically appropriate therapeutic goal. Most investigators have attacked the dosage regimen problem by first estimating the values of the patient's unknown model parameters and then controlling the system as if those parameter estimates were in fact the true values. We have developed an alternative approach utilizing stochastic control theory in which the estimation and control phases of the problem are not separated. Mathematical results are given which show that this approach yields significant potential improvement in attaining, for example, therapeutic serum level goals over methods in which estimation and control are separated. Finally, a computer simulation is given for the optimal stochastic control of an aminoglycoside regimen which shows that this approach is feasible for practical applications.
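
    A toy illustration of the difference between the two strategies, under a hypothetical one-compartment model with an uncertain elimination rate (all parameter values invented): the separated approach plugs a point estimate into the model as if it were true, while the stochastic-control approach picks the dose minimizing the expected squared deviation from the target concentration over the whole parameter distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-compartment model: concentration C = (D / V) * exp(-k * t).
# The elimination rate k is uncertain; goal is C(t_goal) near C_target.
V, t_goal, C_target = 20.0, 8.0, 10.0   # hypothetical volume (L), time (h), mg/L
k = rng.lognormal(mean=np.log(0.15), sigma=0.4, size=100_000)  # assumed k distribution

# Separated ("certainty equivalence"): plug in the mean k as if it were true.
D_ce = C_target * V * np.exp(np.mean(k) * t_goal)

# Non-separated: minimize E[(C_target - C)^2] over the k distribution.
# Since C is linear in D, the optimum has the closed form below.
g = np.exp(-k * t_goal)
D_opt = C_target * V * np.mean(g) / np.mean(g**2)

for name, D in [("certainty-equivalent", D_ce), ("stochastic-optimal", D_opt)]:
    C = (D / V) * g
    print(f"{name}: dose {D:.1f} mg, "
          f"E[(C - C_target)^2] = {np.mean((C - C_target)**2):.2f}")
```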

  9. Correlation functions in stochastic inflation

    NASA Astrophysics Data System (ADS)

    Vennin, Vincent; Starobinsky, Alexei A.

    2015-09-01

    Combining the stochastic and δN formalisms, we derive non-perturbative analytical expressions for all correlation functions of scalar perturbations in single-field, slow-roll inflation. The standard, classical formulas are recovered as saddle-point limits of the full results. This yields a classicality criterion that shows that stochastic effects are small only if the potential is sub-Planckian and not too flat. The saddle-point approximation also provides an expansion scheme for calculating stochastic corrections to observable quantities perturbatively in this regime. In the opposite regime, we show that a strong suppression in the power spectrum is generically obtained, and we comment on the physical implications of this effect.
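
    A minimal sketch of the stochastic-inflation Langevin equation that underlies such calculations, for a quadratic potential in Planck units (all parameter values illustrative only): the field is evolved in e-folds with drift −V′/V and noise amplitude H/2π, and the scatter in the number of e-folds to the end of inflation is the quantity the δN formalism maps to curvature perturbations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Single-field stochastic inflation, V = m^2 phi^2 / 2, Planck units (M_Pl = 1).
# Langevin equation in e-folds N:  dphi = -(V'/V) dN + (H / 2 pi) dW,  H^2 = V/3.
m, phi0, phi_end = 1e-5, 15.0, np.sqrt(2.0)   # assumed toy parameters
dN, n_traj = 0.01, 500

def efolds_to_end(phi):
    """Integrate one Langevin trajectory until slow-roll inflation ends."""
    N = 0.0
    while phi > phi_end:
        H = m * phi / np.sqrt(6.0)              # H = sqrt(V/3)
        drift = -2.0 / phi                      # -V'/V for the quadratic potential
        phi += drift * dN + (H / (2 * np.pi)) * np.sqrt(dN) * rng.standard_normal()
        N += dN
    return N

N_samples = np.array([efolds_to_end(phi0) for _ in range(n_traj)])
# In the delta-N picture, the scatter in N encodes the curvature perturbation.
print(f"mean N = {N_samples.mean():.2f}, "
      f"classical estimate = {(phi0**2 - phi_end**2) / 4:.2f}")
print(f"var N = {N_samples.var():.2e}")
```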

  10. Stochastic determination of matrix determinants

    NASA Astrophysics Data System (ADS)

    Dorn, Sebastian; Enßlin, Torsten A.

    2015-07-01

    Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations—matrices—acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.

  11. Nonlinear optimization for stochastic simulations.

    SciTech Connect

    Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.

    2003-12-01

    This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.

  12. Reducing temperature uncertainties by stochastic geothermal reservoir modelling

    NASA Astrophysics Data System (ADS)

    Vogt, C.; Mottaghy, D.; Wolf, A.; Rath, V.; Pechnig, R.; Clauser, C.

    2010-04-01

    Quantifying and minimizing uncertainty is vital for simulating technically and economically successful geothermal reservoirs. To this end, we apply a stochastic modelling sequence, a Monte Carlo study, based on (i) creating an ensemble of possible realizations of a reservoir model, (ii) forward simulation of fluid flow and heat transport, and (iii) constraining post-processing using observed state variables. To generate the ensemble, we use the stochastic algorithm of Sequential Gaussian Simulation and test its potential for fitting rock properties, such as thermal conductivity and permeability, of a synthetic reference model and, by performing a corresponding forward simulation, state variables such as temperature. The ensemble yields probability distributions of rock properties and state variables at any location inside the reservoir. In addition, we perform a constraining post-processing in order to minimize the uncertainty of the obtained distributions by conditioning the ensemble to observed state variables, in this case temperature. This constraining post-processing works particularly well on systems dominated by fluid flow. The stochastic modelling sequence is applied to a large, steady-state 3-D heat flow model of a reservoir in The Hague, Netherlands. The spatial thermal conductivity distribution is simulated stochastically based on available logging data. Errors of bottom-hole temperatures provide thresholds for the constraining technique performed afterwards. This reduces the temperature uncertainty for the proposed target location significantly, from 25 to 12 K (full distribution width) at a depth of 2300 m. Assuming a Gaussian shape of the temperature distribution, the standard deviation is 1.8 K. To allow a more comprehensive approach to quantify uncertainty, we also implement the stochastic simulation of boundary conditions and demonstrate this for the basal specific heat flow in the reservoir of The Hague. As expected, this results in a larger distribution width
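
    The constraining post-processing step can be illustrated with a deliberately simple stand-in forward model (steady 1-D conduction; all numbers hypothetical): realizations whose simulated bottom-hole temperature falls outside the observational error threshold are discarded, which narrows the predictive temperature distribution at the target depth.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: steady 1-D conduction, T(z) = T_surf + q * z / k,
# with uncertain thermal conductivity k (lognormal prior ensemble).
T_surf, q, z_obs, z_target = 10.0, 0.07, 2300.0, 3000.0  # degC, W/m^2, m, m
k_true = 2.5
T_obs = T_surf + q * z_obs / k_true + rng.normal(0.0, 1.0)  # "measured" BHT, 1 K error

k_ens = rng.lognormal(np.log(2.5), 0.35, size=5000)      # prior ensemble
T_target = T_surf + q * z_target / k_ens                 # prior prediction

# Constraining post-processing: keep realizations consistent with the
# observed bottom-hole temperature within its error threshold.
T_sim = T_surf + q * z_obs / k_ens
keep = np.abs(T_sim - T_obs) < 2.0                       # assumed 2 K threshold

print(f"prior width at target depth:       {T_target.max() - T_target.min():.1f} K")
print(f"constrained width at target depth: "
      f"{T_target[keep].max() - T_target[keep].min():.1f} K")
```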

  13. Parallelizing Monte Carlo with PMC

    SciTech Connect

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.

  14. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin, including its axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters built from commodity nodes. On true supercomputers, however, the parallel speedup continues to grow up to very large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.

  15. Stochastic Cooling Developments at GSI

    SciTech Connect

    Nolden, F.; Beckert, K.; Beller, P.; Dolinskii, A.; Franzke, B.; Jandewerth, U.; Nesmiyan, I.; Peschke, C.; Petri, P.; Steck, M.; Caspers, F.; Moehl, D.; Thorndahl, L.

    2006-03-20

    Stochastic Cooling is presently used at the existing storage ring ESR as a first stage of cooling for secondary heavy ion beams. In the frame of the FAIR project at GSI, stochastic cooling is planned to play a major role for the preparation of high quality antiproton and rare isotope beams. The paper describes the existing ESR system, the first stage cooling system at the planned Collector Ring, and will also cover first steps toward the design of an antiproton collection system at the planned RESR ring.

  16. Stochastic modeling of Lagrangian accelerations

    NASA Astrophysics Data System (ADS)

    Reynolds, Andy

    2002-11-01

    It is shown how Sawford's second-order Lagrangian stochastic model (Phys. Fluids A 3, 1577-1586, 1991) for fluid-particle accelerations can be combined with a model for the evolution of the dissipation rate (Pope and Chen, Phys. Fluids A 2, 1437-1449, 1990) to produce a Lagrangian stochastic model that is consistent with both the measured distribution of Lagrangian accelerations (La Porta et al., Nature 409, 1017-1019, 2001) and Kolmogorov's similarity theory. The latter condition is found not to be satisfied when a constant dissipation rate is employed and consistency with prescribed acceleration statistics is enforced through fulfilment of a well-mixed condition.
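
    A sketch of the constant-dissipation, second-order baseline model of the kind referred to above (normalized parameters assumed; the point of the paper is precisely that a fluctuating dissipation rate must be added to this baseline to match the measured acceleration statistics):

```python
import numpy as np

rng = np.random.default_rng(4)

# Second-order Lagrangian stochastic model (constant-dissipation form):
#   du = a dt
#   da = -(1/T_L + 1/t_eta) a dt - u / (T_L t_eta) dt + sqrt(D) dW,
# with D chosen so the stationary velocity variance equals sigma_u^2.
sigma_u2, T_L, t_eta = 1.0, 1.0, 0.05      # assumed normalized parameters
D = 2.0 * sigma_u2 * (1.0 / T_L + 1.0 / t_eta) / (T_L * t_eta)

dt, n_steps = 1e-3, 500_000
u, a = 0.0, 0.0
us, accs = np.empty(n_steps), np.empty(n_steps)
for i in range(n_steps):
    da = (-(1.0 / T_L + 1.0 / t_eta) * a - u / (T_L * t_eta)) * dt \
         + np.sqrt(D * dt) * rng.standard_normal()
    u += a * dt
    a += da
    us[i], accs[i] = u, a

print(f"velocity variance: {us[1000:].var():.3f} (target {sigma_u2})")
print(f"acceleration variance: {accs[1000:].var():.3f} "
      f"(theory {D / (2 * (1/T_L + 1/t_eta)):.3f})")
```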

  17. Some remarks on Nelson's stochastic field

    NASA Astrophysics Data System (ADS)

    Lim, S. C.

    1980-09-01

    An attempt to extend Nelson's stochastic quantization procedure to tensor fields indicates that the results of Guerra et al. on the connection between a Euclidean Markov scalar field and a stochastic scalar field fail to hold for tensor fields.

  18. Partial ASL extensions for stochastic programming.

    Energy Science and Technology Software Center (ESTSC)

    2010-03-31

    Partially completed extensions for stochastic programming to the AMPL/solver interface library (ASL), intended for modeling and experimenting with stochastic recourse problems. This software is not primarily for military applications.

  19. Theory, technology, and technique of stochastic cooling

    SciTech Connect

    Marriner, J.

    1993-10-01

    The theory and technological implementation of stochastic cooling is described. Theoretical and technological limitations are discussed. Data from existing stochastic cooling systems are shown to illustrate some useful techniques.

  20. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  1. The Hamiltonian Mechanics of Stochastic Acceleration

    SciTech Connect

    Burby, J. W.

    2013-07-17

    We show how to find the physical Langevin equation describing the trajectories of particles undergoing collisionless stochastic acceleration. These stochastic differential equations retain not only one-, but two-particle statistics, and inherit the Hamiltonian nature of the underlying microscopic equations. This opens the door to using stochastic variational integrators to perform simulations of stochastic interactions such as Fermi acceleration. We illustrate the theory by applying it to two example problems.

  2. SEU43 fuel bundle shielding analysis during spent fuel transport

    SciTech Connect

    Margeanu, C. A.; Ilie, P.; Olteanu, G.

    2006-07-01

    The basic task accomplished by shielding calculations in a nuclear safety analysis consists of calculating radiation doses, in order to prevent risks both to personnel and to the environment during spent fuel manipulation, transport and storage. The paper investigates the effects induced by fuel bundle geometry modifications on the CANDU SEU spent fuel shielding analysis during transport. For this study, different CANDU-SEU43 fuel bundle projects, developed at INR Pitesti, have been considered. The spent fuel characteristics will be obtained by means of the ORIGEN-S code. In order to estimate the corresponding radiation doses for different measuring points, the Monte Carlo MORSE-SGC code will be used. Both codes are included in ORNL's SCALE 5 programs package. A comparison between the considered SEU43 fuel bundle projects will also be provided, with the CANDU standard fuel bundle taken as reference. (authors)

  3. Stochastically forced zonal flows

    NASA Astrophysics Data System (ADS)

    Srinivasan, Kaushik

    an approximate equation for the vorticity correlation function that is then solved perturbatively. The Reynolds stress of the perturbative solution can then be expressed as a function of the mean-flow and its y-derivatives. In particular, it is shown that as long as the forcing breaks mirror-symmetry, the Reynolds stress has a wave-like term, as a result of which the mean-flow is governed by a dispersive wave equation. In a separate study, Reynolds stress induced by an anisotropically forced unbounded Couette flow with uniform shear γ, on a β-plane, is calculated in conjunction with the eddy diffusivity of a co-evolving passive tracer. The flow is damped by linear drag on a time scale μ⁻¹. The stochastic forcing is controlled by a parameter α that characterizes whether eddies are elongated along the zonal direction (α < 0), the meridional direction (α > 0) or are isotropic (α = 0). The Reynolds stress varies linearly with α and non-linearly and non-monotonically with γ; but the Reynolds stress is independent of β. For positive values of α, the Reynolds stress displays an "anti-frictional" effect (energy is transferred from the eddies to the mean flow) and a frictional effect for negative values of α. With γ = β = 0, the meridional tracer eddy diffusivity is v′²/(2μ), where v′ is the meridional eddy velocity. In general, β and γ suppress the diffusivity below v′²/(2μ).

  4. Stochastic architecture for Hopfield neural nets

    NASA Technical Reports Server (NTRS)

    Pavel, Sandy

    1992-01-01

    An expandable stochastic digital architecture for recurrent (Hopfield-like) neural networks is proposed. The main features and basic principles of stochastic processing are presented. The stochastic digital architecture is based on a chip with n fully interconnected neurons and a pipelined, bit-processing structure. For large applications, a flexible way to interconnect many such chips is provided.
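
    The stochastic-processing principle such hardware relies on can be shown in a few lines (unipolar coding assumed; this illustrates the principle, not the chip's actual pipeline): numbers are carried as the ones-density of random bit streams, so an AND gate multiplies and a multiplexer performs scaled addition.

```python
import numpy as np

rng = np.random.default_rng(5)

def to_stream(p, n_bits):
    """Unipolar stochastic coding: value p in [0, 1] becomes a random bit
    stream whose fraction of ones is p."""
    return rng.random(n_bits) < p

n = 4096
a, b = 0.8, 0.4

# Multiplication of two independent streams is a bitwise AND:
# P(bit_a AND bit_b) = P(bit_a) * P(bit_b).
prod = np.logical_and(to_stream(a, n), to_stream(b, n))
print(f"stochastic a*b ~= {prod.mean():.3f} (exact {a * b})")

# Scaled addition: a MUX with a p = 0.5 select line computes (a + b) / 2.
sel = to_stream(0.5, n)
summ = np.where(sel, to_stream(a, n), to_stream(b, n))
print(f"stochastic (a+b)/2 ~= {summ.mean():.3f} (exact {(a + b) / 2})")
```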

  5. Stability of stochastic switched SIRS models

    NASA Astrophysics Data System (ADS)

    Meng, Xiaoying; Liu, Xinzhi; Deng, Feiqi

    2011-11-01

    Stochastic stability problems of a stochastic switched SIRS model with or without distributed time delay are considered. By utilizing the Lyapunov methods, sufficient stability conditions of the disease-free equilibrium are established. Stability conditions about the subsystem of the stochastic switched SIRS systems are also obtained.

  6. Calculating Pi Using the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Williamson, Timothy

    2013-11-01

    During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the center for sustainable energy at Notre Dame University (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 1021 antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia2 where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations2 or purely mathematical.3 It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
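
    For reference, the same rice-sprinkling experiment in code form (point count arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)

# Estimate pi by sprinkling random points on the unit square and counting
# the fraction that land inside the quarter circle of radius 1, the
# numerical analogue of sprinkling rice on an arc drawn in a square.
n = 1_000_000
x, y = rng.random(n), rng.random(n)
inside = (x**2 + y**2) <= 1.0
pi_est = 4.0 * inside.mean()
print(f"pi ~= {pi_est:.4f} after {n} points (error {abs(pi_est - np.pi):.4f})")
```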

  7. Monte Carlo methods: Application to hydrogen gas and hard spheres

    NASA Astrophysics Data System (ADS)

    Dewing, Mark Douglas

    2001-08-01

    Quantum Monte Carlo (QMC) methods are among the most accurate for computing ground state properties of quantum systems. The two major types of QMC we use are Variational Monte Carlo (VMC), which evaluates integrals arising from the variational principle, and Diffusion Monte Carlo (DMC), which stochastically projects to the ground state from a trial wave function. These methods are applied to a system of boson hard spheres to get exact, infinite system size results for the ground state at several densities. The kinds of problems that can be simulated with Monte Carlo methods are expanded through the development of new algorithms for combining a QMC simulation with a classical Monte Carlo simulation, which we call Coupled Electronic-Ionic Monte Carlo (CEIMC). The new CEIMC method is applied to a system of molecular hydrogen at temperatures ranging from 2800 K to 4500 K and densities from 0.25 to 0.46 g/cm³. VMC requires optimizing a parameterized wave function to find the minimum energy. We examine several techniques for optimizing VMC wave functions, focusing on the ability to optimize parameters appearing in the Slater determinant. Classical Monte Carlo simulations use an empirical interatomic potential to compute equilibrium properties of various states of matter. The CEIMC method replaces the empirical potential with a QMC calculation of the electronic energy. This is similar in spirit to the Car-Parrinello technique, which uses Density Functional Theory for the electrons and molecular dynamics for the nuclei. The challenges in constructing an efficient CEIMC simulation center mostly around the noisy results generated from the QMC computations of the electronic energy. We introduce two complementary techniques, one for tolerating the noise and the other for reducing it. The penalty method modifies the Metropolis acceptance ratio to tolerate noise without introducing a bias in the simulation of the nuclei. For reducing the noise, we introduce the two-sided energy
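
    The penalty idea mentioned near the end can be sketched on a toy problem (Gaussian target, artificial Gaussian noise of known variance added to the energy difference): subtracting σ²/2 inside the Metropolis acceptance restores detailed balance on average, whereas the naive noisy acceptance broadens the sampled distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample(penalty, sigma_noise=1.0, n=200_000, step=1.0):
    """Metropolis on E(x) = x^2 / 2 when only a noisy energy difference is
    available: delta_hat = delta + sigma * xi, with sigma known. The penalty
    method accepts with min(1, exp(-delta_hat - sigma^2/2)), which satisfies
    detailed balance on average for Gaussian noise."""
    x, xs = 0.0, np.empty(n)
    for i in range(n):
        y = x + step * rng.uniform(-1, 1)
        delta_hat = (y**2 - x**2) / 2 + sigma_noise * rng.standard_normal()
        pen = sigma_noise**2 / 2 if penalty else 0.0
        if np.log(rng.random()) < -(delta_hat + pen):   # log form avoids overflow
            x = y
        xs[i] = x
    return xs

for penalty in (False, True):
    xs = sample(penalty)
    print(f"penalty={penalty}: sample variance {xs.var():.3f} (target 1.0)")
```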

  8. Isotropic Monte Carlo Grain Growth

    Energy Science and Technology Software Center (ESTSC)

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.

  9. A fully coupled Monte Carlo/discrete ordinates solution to the neutron transport equation. Final report

    SciTech Connect

    Filippone, W.L.; Baker, R.S.

    1990-12-31

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_N region. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S_N calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.

  10. Stochastic resonance on a circle

    SciTech Connect

    Wiesenfeld, K. ); Pierson, D.; Pantazelou, E.; Dames, C.; Moss, F. )

    1994-04-04

    We describe a new realization of stochastic resonance, applicable to a broad class of systems, based on an underlying excitable dynamics with deterministic reinjection. A simple but general theory of such "single-trigger" systems is compared with analog simulations of the Fitzhugh-Nagumo model, as well as experimental data obtained from stimulated sensory neurons in the crayfish.

  11. Universality in Stochastic Exponential Growth

    NASA Astrophysics Data System (ADS)

    Iyer-Biswas, Srividya; Crooks, Gavin E.; Scherer, Norbert F.; Dinner, Aaron R.

    2014-07-01

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
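
    A minimal Gillespie-type simulation of a two-species version of such a cycle (rate constants arbitrary), in which each species catalyzes production of the other and the total copy number grows exponentially at the mean-field rate sqrt(k1*k2):

```python
import numpy as np

rng = np.random.default_rng(8)

# Two-species stochastic Hinshelwood cycle: X1 catalyzes production of X2
# (propensity k1*X1) and X2 catalyzes production of X1 (propensity k2*X2).
k1, k2 = 1.0, 1.0
x = np.array([5, 5], dtype=float)
t, t_end = 0.0, 6.0
times, sizes = [0.0], [x.sum()]
while t < t_end:
    rates = np.array([k1 * x[0], k2 * x[1]])   # propensities
    total = rates.sum()
    t += rng.exponential(1.0 / total)           # time to next event
    j = rng.choice(2, p=rates / total)          # which reaction fires
    x[1 - j] += 1                               # X_j catalyzes the *other* species
    times.append(t); sizes.append(x.sum())

print(f"final size {sizes[-1]:.0f}; "
      f"empirical growth rate {np.log(sizes[-1] / sizes[0]) / times[-1]:.2f} "
      f"(mean-field theory sqrt(k1*k2) = {np.sqrt(k1 * k2):.2f})")
```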

  12. Stochastic cooling: recent theoretical directions

    SciTech Connect

    Bisognano, J.

    1983-03-01

    A kinetic-equation derivation of the stochastic-cooling Fokker-Planck equation, including correlations, is introduced to describe both the Schottky spectrum and signal suppression. Generalizations to nonlinear gain and coupling between degrees of freedom are presented. Analysis of bunched-beam cooling is included.

  13. Universality in stochastic exponential growth.

    PubMed

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth. PMID:25062238

  14. Stochastic Resonance and Information Processing

    NASA Astrophysics Data System (ADS)

    Nicolis, C.

    2014-12-01

    A dynamical system giving rise to multiple steady states and subjected to noise and a periodic forcing is analyzed from the standpoint of information theory. It is shown that stochastic resonance has a clearcut signature on information entropy, information transfer and other related quantities characterizing information transduction within the system.

  15. Stochastic Energy Deployment System

    Energy Science and Technology Software Center (ESTSC)

    2011-11-30

    SEDS is an economy-wide energy model of the U.S. The model captures dynamics between supply, demand, and pricing of the major energy types consumed and produced within the U.S. These dynamics are captured by including: the effects of macroeconomics; the resources and costs of primary energy types such as oil, natural gas, coal, and biomass; the conversion of primary fuels into energy products like petroleum products, electricity, biofuels, and hydrogen; and lastly the end-use consumption attributable to residential and commercial buildings, light and heavy transportation, and industry. Projections from SEDS extend to the year 2050 by one-year time steps and are generally projected at the national level. SEDS differs from other economy-wide energy models in that it explicitly accounts for uncertainty in technology, markets, and policy. SEDS has been specifically developed to avoid the computational burden, and sometimes fruitless labor, that comes from modeling significantly low-level details. Instead, SEDS focuses on the major drivers within the energy economy and evaluates the impact of uncertainty around those drivers.

  16. Stochastic Energy Deployment System

    SciTech Connect

    2011-11-30

    SEDS is an economy-wide energy model of the U.S. The model captures dynamics between supply, demand, and pricing of the major energy types consumed and produced within the U.S. These dynamics are captured by including: the effects of macroeconomics; the resources and costs of primary energy types such as oil, natural gas, coal, and biomass; the conversion of primary fuels into energy products like petroleum products, electricity, biofuels, and hydrogen; and lastly the end-use consumption attributable to residential and commercial buildings, light and heavy transportation, and industry. Projections from SEDS extend to the year 2050 by one-year time steps and are generally projected at the national level. SEDS differs from other economy-wide energy models in that it explicitly accounts for uncertainty in technology, markets, and policy. SEDS has been specifically developed to avoid the computational burden, and sometimes fruitless labor, that comes from modeling significantly low-level details. Instead, SEDS focuses on the major drivers within the energy economy and evaluates the impact of uncertainty around those drivers.

  17. A Survey of Stochastic Simulation and Optimization Methods in Signal Processing

    NASA Astrophysics Data System (ADS)

    Pereyra, Marcelo; Schniter, Philip; Chouzenoux, Emilie; Pesquet, Jean-Christophe; Tourneret, Jean-Yves; Hero, Alfred O.; McLaughlin, Steve

    2016-03-01

    Modern signal processing (SP) methods rely very heavily on probability and statistics to solve challenging SP problems. SP methods are now expected to deal with ever more complex models, requiring ever more sophisticated computational inference techniques. This has driven the development of statistical SP methods based on stochastic simulation and optimization. Stochastic simulation and optimization algorithms are computationally intensive tools for performing statistical inference in models that are analytically intractable and beyond the scope of deterministic inference methods. They have been recently successfully applied to many difficult problems involving complex statistical models and sophisticated (often Bayesian) statistical inference techniques. This survey paper offers an introduction to stochastic simulation and optimization methods in signal and image processing. The paper addresses a variety of high-dimensional Markov chain Monte Carlo (MCMC) methods as well as deterministic surrogate methods, such as variational Bayes, the Bethe approach, belief and expectation propagation and approximate message passing algorithms. It also discusses a range of optimization methods that have been adopted to solve stochastic problems, as well as stochastic methods for deterministic optimization. Subsequently, areas of overlap between simulation and optimization, in particular optimization-within-MCMC and MCMC-driven optimization are discussed.

  18. Implementation of Chord Length Sampling for Transport Through a Binary Stochastic Mixture

    SciTech Connect

    T.J. Donovan; T.M. Sutton; Y. Danon

    2002-11-18

    Neutron transport through a special case stochastic mixture is examined, in which spheres of constant radius are uniformly mixed in a matrix material. A Monte Carlo algorithm previously proposed and examined in 2-D has been implemented in a test version of MCNP. The Limited Chord Length Sampling (LCLS) technique provides a means for modeling a binary stochastic mixture as a cell in MCNP. When inside a matrix cell, LCLS uses chord-length sampling to sample the distance to the next stochastic sphere. After a surface crossing into a stochastic sphere, transport is treated explicitly until the particle exits or is killed. Results were computed for a simple model with two different fixed neutron source distributions and three sets of material number densities. Stochastic spheres were modeled as black absorbers and varying degrees of scattering were introduced in the matrix material. Tallies were computed using the LCLS capability and by averaging results obtained from multiple realizations of the random geometry. Results were compared for accuracy and figures of merit were compared to indicate the efficiency gain of the LCLS method over the benchmark method. Results show that LCLS provides very good accuracy if the scattering optical thickness of the matrix is small (≤ 1). Comparisons of figures of merit show an advantage to LCLS varying between factors of 141 and 5. LCLS efficiency and accuracy relative to the benchmark both decrease as scattering is increased in the matrix.
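
    The core of the chord-length-sampling kernel can be sketched for the black-absorber case studied here (all cross sections and dimensions invented): in the matrix, the distance to the next sphere surface is sampled from an exponential whose mean is the matrix chord length 4R(1−f)/(3f) for packing fraction f, and a history ends when a black kernel is entered.

```python
import numpy as np

rng = np.random.default_rng(9)

# Chord-length sampling through a binary stochastic slab: black (purely
# absorbing) spheres of radius R at packing fraction f in a scattering,
# non-absorbing matrix.
R, f, sigma_s, L = 0.1, 0.05, 1.0, 5.0           # assumed cm, -, 1/cm, cm
sigma_enter = 3.0 * f / (4.0 * R * (1.0 - f))    # sphere-entry "cross section"

def transmit(n=100_000):
    hits = 0
    for _ in range(n):
        z, mu = 0.0, 1.0                         # slab depth, direction cosine
        while True:
            d_sph = rng.exponential(1.0 / sigma_enter)  # distance to a sphere
            d_col = rng.exponential(1.0 / sigma_s)      # distance to scattering
            if d_sph < d_col:
                break                            # entered a black kernel: absorbed
            z += mu * d_col
            if z >= L:
                hits += 1                        # transmitted through the slab
                break
            if z < 0:
                break                            # leaked out the near face
            mu = rng.uniform(-1.0, 1.0)          # isotropic scatter in the matrix
    return hits / n

print(f"transmission ~= {transmit():.4f}")
```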

  19. Stem cell proliferation and differentiation and stochastic bistability in gene expression

    SciTech Connect

    Zhdanov, V. P.

    2007-02-15

    The process of proliferation and differentiation of stem cells is inherently stochastic in the sense that the outcome of cell division is characterized by probabilities that depend on the intracellular properties, extracellular medium, and cell-cell communication. Despite four decades of intensive studies, the understanding of the physics behind this stochasticity is still limited, both in details and conceptually. Here, we suggest a simple scheme showing that the stochastic behavior of a single stem cell may be related to (i) the existence of a short stage of decision whether it will proliferate or differentiate and (ii) control of this stage by stochastic bistability in gene expression or, more specifically, by transcriptional 'bursts.' Our Monte Carlo simulations indicate that our proposed scheme may operate if the number of mRNA (or protein) molecules generated during the highly reactive periods of gene expression is below or about 50. The stochastic-burst window in the space of kinetic parameters is found to increase with decreasing mRNA and/or regulatory-protein numbers and increasing number of regulatory sites. For mRNA production with three regulatory sites, for example, the mRNA degradation rate constant may change in the range ±10%.

  20. Stochastic resonance in visual sensitivity.

    PubMed

    Kundu, Ajanta; Sarkar, Sandip

    2015-04-01

    It is well known from psychophysical studies that stochastic resonance, in its simplest threshold paradigm, can be used as a tool to measure the detection sensitivity to fine details in noise-contaminated stimuli. In the present manuscript, we report simulation studies conducted in a similar threshold paradigm of stochastic resonance. We have estimated the contrast sensitivity in detecting noisy sine-wave stimuli, with varying area and spatial frequency, as a function of noise strength. In all the cases, the measured sensitivity attained a peak at intermediate noise strength, which indicates the occurrence of stochastic resonance. The peak sensitivity exhibited a strong dependence on area and spatial frequency of the stimulus. We show that the peak contrast sensitivity varies with spatial frequency in a nonmonotonic fashion and the qualitative nature of the sensitivity variation is in good agreement with the human contrast sensitivity function. We also demonstrate that the peak sensitivity first increases and then saturates with increasing area, and this result is in line with the results of psychophysical experiments. Additionally, we also show that the critical area, denoting the saturation of contrast sensitivity, decreases with spatial frequency and the associated maximum contrast sensitivity varies with spatial frequency in a manner that is consistent with the results of psychophysical experiments. In all the studies, the sensitivities were elevated via a nonlinear filtering operation called stochastic resonance. Because of this nonlinear effect, it was not guaranteed that the sensitivities, estimated at each frequency, would be in agreement with the corresponding results of psychophysical experiments; on the contrary, close agreements were observed between our results and the findings of psychophysical investigations. These observations indicate the utility of stochastic resonance in human vision and suggest that this paradigm can be useful in psychophysical studies
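
    A bare-bones version of the threshold paradigm used in such studies (signal amplitude, threshold, and frequencies arbitrary): a subthreshold sinusoid plus noise is passed through a hard threshold, and the response at the signal frequency peaks at an intermediate noise strength.

```python
import numpy as np

rng = np.random.default_rng(10)

# Threshold paradigm of stochastic resonance: a subthreshold sinusoidal
# "stimulus" plus noise is passed through a hard threshold; detectability is
# measured as the Fourier amplitude of the thresholded output at the
# stimulus frequency, as a function of noise strength.
n, f_sig, amp, theta = 20_000, 0.01, 0.6, 1.0    # assumed units; amp < theta
t = np.arange(n)
signal = amp * np.sin(2 * np.pi * f_sig * t)

for noise_sd in (0.05, 0.2, 0.5, 1.0, 2.0):
    out = (signal + noise_sd * rng.standard_normal(n) > theta).astype(float)
    # amplitude of the output component at the signal frequency
    resp = 2 * np.abs(np.sum(out * np.exp(-2j * np.pi * f_sig * t))) / n
    print(f"noise sd {noise_sd:4.2f}: response at f_sig = {resp:.4f}")
```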

  1. Crossing the mesoscale no-mans land via parallel kinetic Monte Carlo.

    SciTech Connect

    Garcia Cardona, Cristina; Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander; Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan

    2009-10-01

    The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3-year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.

  2. Criticality of spent reactor fuel

    SciTech Connect

    Harris, D.R.

    1987-01-01

    The storage capacity of spent reactor fuel pools can be greatly increased by consolidation. In this process, the fuel rods are removed from reactor fuel assemblies and are stored in close-packed arrays in a canister or skeleton. An earlier study examined criticality considerations for consolidation of Westinghouse fuel, assumed to be fresh, in canisters at the Millstone-2 spent-fuel pool and in the General Electric IF-300 shipping cask. The conclusions were that the fuel rods in the canister are so deficient in water that they are adequately subcritical, both in normal and in off-normal conditions. One potential accident, the water spill event, remained unresolved in the earlier study. A methodology is developed here for spent-fuel criticality and is applied to the water spill event. The methodology utilizes LEOPARD to compute few-group cross sections for the diffusion code PDQ7, which is then used to compute reactivity. These codes give results for fresh fuel that are in good agreement with KENO IV-NITAWL Monte Carlo results, which themselves are in good agreement with continuous energy Monte Carlo calculations. These methodologies are in reasonable agreement with critical measurements for undepleted fuel.

  3. Path integral approach to closed-form option pricing formulas with applications to stochastic volatility and interest rate models

    NASA Astrophysics Data System (ADS)

    Lemmens, D.; Wouters, M.; Tempere, J.; Foulon, S.

    2008-07-01

    We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.
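
    The kind of numerical cross-check described can be reproduced in the constant-volatility (Black-Scholes) limit, where the closed form is elementary (all parameters arbitrary); the stochastic-volatility and stochastic-rate cases would extend the simulation with additional driving processes.

```python
import numpy as np
from math import log, sqrt, exp, erf

rng = np.random.default_rng(11)

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Black-Scholes European call: the constant-volatility limit of the model.
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Monte Carlo check: simulate terminal prices under the risk-neutral GBM.
n = 2_000_000
Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
mc_price, mc_err = payoff.mean(), payoff.std(ddof=1) / sqrt(n)

print(f"closed form: {bs_price:.4f}   Monte Carlo: {mc_price:.4f} +/- {mc_err:.4f}")
```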

  4. Stochastic Parallel PARticle Kinetic Simulator

    Energy Science and Technology Software Center (ESTSC)

    2008-07-01

    SPPARKS is a kinetic Monte Carlo simulator which implements kinetic and Metropolis Monte Carlo solvers in a general way so that they can be hooked to applications of various kinds. Specific applications are implemented in SPPARKS as physical models which generate events (e.g. a diffusive hop or chemical reaction) and execute them one-by-one. Applications can run in parallel so long as the simulation domain can be partitioned spatially so that multiple events can be invoked simultaneously. SPPARKS is used to model various kinds of mesoscale materials science scenarios such as grain growth, surface deposition and growth, and reaction kinetics. It can also be used to develop new Monte Carlo models that hook to the existing solver and parallel infrastructure provided by the code.

  5. Detector-selection technique for Monte Carlo transport in azimuthally symmetric geometries

    SciTech Connect

    Hoffman, T.J.; Tang, J.S.; Parks, C.V.

    1982-01-01

    Many radiation transport problems contain geometric symmetries which are not exploited in obtaining their Monte Carlo solutions. An important class of problems is that in which the geometry is symmetric about an axis. These problems arise in the analyses of a reactor core or shield, spent fuel shipping casks, tanks containing radioactive solutions, radiation transport in the atmosphere (air-over-ground problems), etc. Although amenable to deterministic solution, such problems can often be solved more efficiently and accurately with the Monte Carlo method. For this class of problems, a technique is described in this paper which significantly reduces the variance of the Monte Carlo-calculated effect of interest at point detectors.

  6. On the forward-backward-in-time approach for Monte Carlo solution of Parker's transport equation: One-dimensional case

    NASA Astrophysics Data System (ADS)

    Bobik, P.; Boschini, M. J.; Della Torre, S.; Gervasi, M.; Grandi, D.; La Vacca, G.; Pensotti, S.; Putis, M.; Rancoita, P. G.; Rozza, D.; Tacconi, M.; Zannoni, M.

    2016-05-01

    The propagation of cosmic rays inside the heliosphere is well described by a transport equation introduced by Parker in 1965. To solve this equation, several approaches were followed in the past. Recently, a Monte Carlo approach became widely used owing to its advantages with respect to other numerical methods. In this approach the transport equation is associated to a fully equivalent set of stochastic differential equations (SDE). This set is used to describe the stochastic path of a quasi-particle from a source, e.g., the interstellar space, to a specific target, e.g., a detector at Earth. We present a comparison of forward-in-time and backward-in-time methods to solve the cosmic-ray transport equation in the heliosphere. The Parker equation and the related set of SDE in their several formulations are treated in this paper. For the sake of clarity, this work is focused on the one-dimensional solutions. Results were compared with an alternative numerical solution, namely, the Crank-Nicolson method, specifically developed for the case under study. The methods presented are fully consistent with each other for energies greater than 400 MeV. The comparison between stochastic integrations and Crank-Nicolson allows us to estimate the systematic uncertainties of Monte Carlo methods. The forward-in-time stochastic integration method showed a systematic uncertainty <5%, while the backward-in-time stochastic integration method showed a systematic uncertainty <1% in the studied energy range.

  7. HTGR Reactor Physics and Burnup Calculations Using the Serpent Monte Carlo Code

    SciTech Connect

    Leppanen, Jaakko; DeHart, Mark D

    2009-01-01

    One of the main advantages of the continuous-energy Monte Carlo method is its versatility and the capability to model any fuel or reactor configuration without major approximations. This capability becomes particularly valuable in studies involving innovative reactor designs and next-generation systems, which often lie beyond the capabilities of deterministic LWR transport codes. In this study, a conceptual prismatic HTGR fuel assembly was modeled using the Serpent Monte Carlo reactor physics burnup calculation code, under development at VTT Technical Research Centre of Finland since 2004. A new explicit particle fuel model was developed to account for the heterogeneity effects. The results are compared to other Monte Carlo and deterministic transport codes and the study also serves as a test case for the modules and methods in SCALE 6.

  8. Estimating stepwise debromination pathways of polybrominated diphenyl ethers with an analogue Markov Chain Monte Carlo algorithm.

    PubMed

    Zou, Yonghong; Christensen, Erik R; Zheng, Wei; Wei, Hua; Li, An

    2014-11-01

    A stochastic process was developed to simulate the stepwise debromination pathways for polybrominated diphenyl ethers (PBDEs). The stochastic process uses an analogue Markov Chain Monte Carlo (AMCMC) algorithm to generate PBDE debromination profiles. The acceptance or rejection of the randomly drawn stepwise debromination reactions was determined by a maximum likelihood function. The experimental observations at certain time points were used as target profiles; therefore, the stochastic processes are capable of presenting the effects of reaction conditions on the selection of debromination pathways. The application of the model is illustrated by adopting the experimental results of decabromodiphenyl ether (BDE209) in hexane exposed to sunlight. Inferences that were not obvious from experimental data were suggested by model simulations. For example, BDE206 shows much higher accumulation during the first 30 min of sunlight exposure. By contrast, model simulation suggests that BDE206 and BDE207 had comparable yields from BDE209. The reason for the higher BDE206 level is that BDE207 has the highest depletion in producing octa products. Compared to a previous version of the stochastic model based on stochastic reaction sequences (SRS), the AMCMC approach was determined to be more efficient and robust. Due to the feature of only requiring experimental observations as input, the AMCMC model is expected to be applicable to a wide range of PBDE debromination processes, e.g. microbial, photolytic, or joint effects in natural environments. PMID:25113201

  9. Thermal explosion near bifurcation: stochastic features of ignition

    NASA Astrophysics Data System (ADS)

    Nowakowski, B.; Lemarchand, A.

    2002-08-01

    We study stochastic effects in a thermochemical explosive system exchanging heat with a thermostat. We use a mesoscopic description based on the master equation for temperature which includes a transition rate for the Newtonian thermal transfer process. This master equation for a continuous variable has a complicated integro-differential form and to solve it we resort to Monte Carlo simulations. The results of the master equation approach are compared with those of direct simulations of the microscopic particle dynamics in a dilute gas system. We study the Semenov model in the vicinity of the bifurcation related to the emergence of bistability. The probability distributions of ignition time are calculated below and above the bifurcation point. An approximate analytical prediction for the main statistical properties of ignition time is deduced from the Fokker-Planck equation derived from the master equation. The theoretical results are compared with the experimental data obtained for cool flames of a hydrocarbon in the explosive regime.

  10. Stochastic Particle Real Time Analyzer (SPARTA) Validation and Verification Suite

    SciTech Connect

    Gallis, Michael A.; Koehler, Timothy P.; Plimpton, Steven J.

    2014-10-01

    This report presents the test cases used to verify, validate and demonstrate the features and capabilities of the first release of the 3D Direct Simulation Monte Carlo (DSMC) code SPARTA (Stochastic Real Time Particle Analyzer). The test cases included in this report exercise the most critical capabilities of the code like the accurate representation of physical phenomena (molecular advection and collisions, energy conservation, etc.) and implementation of numerical methods (grid adaptation, load balancing, etc.). Several test cases of simple flow examples are shown to demonstrate that the code can reproduce phenomena predicted by analytical solutions and theory. A number of additional test cases are presented to illustrate the ability of SPARTA to model flow around complicated shapes. In these cases, the results are compared to other well-established codes or theoretical predictions. This compilation of test cases is not exhaustive, and it is anticipated that more cases will be added in the future.

  11. Nonequilibrium Steady States of a Stochastic Model System.

    NASA Astrophysics Data System (ADS)

    Zhang, Qiwei

    We study the nonequilibrium steady state of a stochastic lattice gas model, originally proposed by Katz, Lebowitz and Spohn (Phys. Rev. B 28: 1655 (1983)). First, we solve the model on some small lattices exactly in order to see the general dependence of the steady state upon different parameters of the model. Next, we derive some analytical results for infinite lattice systems by taking suitable limits. We then present some renormalization group results for the continuum version of the model via field theoretical techniques; the supersymmetry of the critical dynamics in zero field is also explored. Finally, we report some very recent 3-D Monte Carlo simulation results, which have been obtained by applying Multi-Spin-Coding techniques on a CDC vector supercomputer - Cyber 205 at John von Neumann Center.

  12. A stochastic model for the analysis of maximum daily temperature

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.

    2016-08-01

    In this paper, a stochastic model for the analysis of the daily maximum temperature is proposed. First, a deseasonalization procedure based on the truncated Fourier expansion is adopted. Then, the Johnson transformation functions were applied for the data normalization. Finally, the fractional autoregressive integrated moving average (FARIMA) model was used to reproduce both short- and long-memory behavior of the temperature series. The model was applied to the data of the Cosenza gauge (Calabria region) and verified on four other gauges of southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 10⁵ years of daily maximum temperature have been generated. Among the possible applications of the model, the occurrence probabilities of the annual maximum values have been evaluated. Moreover, the procedure was applied for the estimation of the return periods of long sequences of days with maximum temperature above prefixed thresholds.
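
    The first step of the modelling sequence, deseasonalization by a truncated Fourier expansion, can be sketched on synthetic data (two harmonics and an AR(1) residual assumed purely for illustration; the Johnson transformation and FARIMA stages are omitted):

```python
import numpy as np

rng = np.random.default_rng(12)

# Synthetic daily maximum temperature: annual cycle plus autocorrelated noise.
days = np.arange(20 * 365)
clim = 22.0 + 9.0 * np.sin(2 * np.pi * days / 365.25 - 1.9)
noise = np.zeros(days.size)
for i in range(1, days.size):                  # simple AR(1) residual
    noise[i] = 0.7 * noise[i - 1] + rng.normal(0, 1.5)
temp = clim + noise

# Deseasonalization by a truncated Fourier expansion (here two harmonics),
# fitted by least squares for both the seasonal mean and standard deviation.
w = 2 * np.pi * days / 365.25
X = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w),
                     np.sin(2 * w), np.cos(2 * w)])
mean_fit = X @ np.linalg.lstsq(X, temp, rcond=None)[0]
resid = temp - mean_fit
sd_fit = np.sqrt(np.maximum(X @ np.linalg.lstsq(X, resid**2, rcond=None)[0], 1e-6))
z = resid / sd_fit                             # deseasonalized series

print(f"raw std {temp.std():.2f}, deseasonalized std {z.std():.2f} (target ~1)")
```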

  13. GPU Computing in Bayesian Inference of Realized Stochastic Volatility Model

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2015-01-01

    The realized stochastic volatility (RSV) model that utilizes the realized volatility as additional information has been proposed to infer the volatility of financial time series. We consider the Bayesian inference of the RSV model by the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on the GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time in performing the HMC algorithm on GPU (GTX 760) and CPU (Intel i7-4770, 3.4 GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a similar speedup to that of CUDA Fortran.

  14. Binomial distribution based τ-leap accelerated stochastic simulation

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhijit; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2005-01-01

    Recently, Gillespie introduced the τ-leap approximate, accelerated stochastic Monte Carlo method for well-mixed reacting systems [J. Chem. Phys. 115, 1716 (2001)]. In each time increment of that method, one executes a number of reaction events, selected randomly from a Poisson distribution, to enable simulation of long times. Here we introduce a binomial distribution τ-leap algorithm (abbreviated as BD-τ method). This method combines the bounded nature of the binomial distribution variable with the limiting reactant and constrained firing concepts to avoid negative populations encountered in the original τ-leap method of Gillespie for large time increments, and thus conserve mass. Simulations using prototype reaction networks show that the BD-τ method is more accurate than the original method for comparable coarse-graining in time.
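
    A sketch of the binomial bounding idea for a single channel A + B → C (rate constant and populations arbitrary; the published BD-τ method covers general reaction networks): drawing the firing count from a binomial capped by the limiting reactant matches the Poisson mean a·τ while making negative populations impossible.

```python
import numpy as np

rng = np.random.default_rng(13)

def bd_tau_leap(x, k, tau, t_end):
    """Binomial tau-leap for the single channel A + B -> C.
    The number of firings is Binomial(n_max, p) with n_max the limiting
    reactant count, so populations can never go negative."""
    A, B, C = x
    t = 0.0
    while t < t_end:
        a = k * A * B                          # propensity
        n_max = min(A, B)                      # limiting-reactant bound
        if n_max == 0:
            break
        p = min(1.0, a * tau / n_max)          # mean firings a*tau, capped
        n_fire = rng.binomial(n_max, p)
        A -= n_fire; B -= n_fire; C += n_fire
        t += tau
    return A, B, C

print(bd_tau_leap((1000, 800, 0), k=1e-3, tau=0.05, t_end=5.0))
```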

  15. Stochastic-Dynamical Modeling of Space Time Rainfall

    NASA Technical Reports Server (NTRS)

    Georgakakos, Konstantine P.

    1997-01-01

    The focus of this research work is the elucidation of the physical origins of the observed extreme-rainfall variability over tropical oceans. The quantitative results of this work may be used to establish links between deterministic models of the mesoscale and synoptic scale with statistical descriptions of the temporal variability of local tropical oceanic rainfall. In addition, they may be used to quantify the influence of measurement error in large-scale forcing and cloud-scale observations on the accuracy of local rainfall variability inferences, important for hydrologic studies. A simple statistical-dynamical model, suitable for use in repetitive Monte Carlo experiments, is formulated as a diagnostic tool for this purpose. Stochastic processes with temporal structure and parameters estimated from observed large-scale data represent large-scale forcing.

  16. Stochastic many-body perturbation theory for anharmonic molecular vibrations

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew R.; Hirata, So

    2014-08-01

    A new quantum Monte Carlo (QMC) method for anharmonic vibrational zero-point energies and transition frequencies is developed, which combines the diagrammatic vibrational many-body perturbation theory based on the Dyson equation with Monte Carlo integration. The infinite sums of the diagrammatic and thus size-consistent first- and second-order anharmonic corrections to the energy and self-energy are expressed as sums of a few m- or 2m-dimensional integrals of wave functions and a potential energy surface (PES) (m is the vibrational degrees of freedom). Each of these integrals is computed as the integrand (including the value of the PES) divided by the value of a judiciously chosen weight function evaluated on demand at geometries distributed randomly but according to the weight function via the Metropolis algorithm. In this way, the method completely avoids cumbersome evaluation and storage of high-order force constants necessary in the original formulation of the vibrational perturbation theory; it furthermore allows even higher-order force constants essentially up to an infinite order to be taken into account in a scalable, memory-efficient algorithm. The diagrammatic contributions to the frequency-dependent self-energies that are stochastically evaluated at discrete frequencies can be reliably interpolated, allowing the self-consistent solutions to the Dyson equation to be obtained. This method, therefore, can compute directly and stochastically the transition frequencies of fundamentals and overtones as well as their relative intensities as pole strengths, without fixed-node errors that plague some QMC. It is shown that, for an identical PES, the new method reproduces the correct deterministic values of the energies and frequencies within a few cm⁻¹ and pole strengths within a few thousandths. With the values of a PES evaluated on the fly at random geometries, the new method captures a noticeably greater proportion of anharmonic effects.
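
    The core estimator, the integrand divided by a normalized weight function evaluated at Metropolis-sampled geometries, can be illustrated in one dimension (the integrand, weight, and step size below are illustrative assumptions, not the paper's vibrational integrals):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Integrand: f(x) = exp(-x^2) * cos(x)^2 over the real line.
    f = lambda x: np.exp(-x * x) * np.cos(x) ** 2
    # Weight: a normalized Gaussian, chosen to roughly match the integrand's shape.
    sigma = 1.0
    w = lambda x: np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    # Metropolis random walk distributed according to w.
    x, total, n = 0.0, 0.0, 200000
    for i in range(n):
        x_new = x + rng.normal(0, 1.0)
        if rng.uniform() < w(x_new) / w(x):
            x = x_new
        total += f(x) / w(x)     # integrand divided by the weight, as described
    print(total / n)             # exact value is sqrt(pi)/2 * (1 + exp(-1)) ~ 1.212
    ```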

  17. Stochastic many-body perturbation theory for anharmonic molecular vibrations

    SciTech Connect

    Hermes, Matthew R.; Hirata, So

    2014-08-28

    A new quantum Monte Carlo (QMC) method for anharmonic vibrational zero-point energies and transition frequencies is developed, which combines the diagrammatic vibrational many-body perturbation theory based on the Dyson equation with Monte Carlo integration. The infinite sums of the diagrammatic and thus size-consistent first- and second-order anharmonic corrections to the energy and self-energy are expressed as sums of a few m- or 2m-dimensional integrals of wave functions and a potential energy surface (PES) (m is the vibrational degrees of freedom). Each of these integrals is computed as the integrand (including the value of the PES) divided by the value of a judiciously chosen weight function evaluated on demand at geometries distributed randomly but according to the weight function via the Metropolis algorithm. In this way, the method completely avoids cumbersome evaluation and storage of high-order force constants necessary in the original formulation of the vibrational perturbation theory; it furthermore allows even higher-order force constants essentially up to an infinite order to be taken into account in a scalable, memory-efficient algorithm. The diagrammatic contributions to the frequency-dependent self-energies that are stochastically evaluated at discrete frequencies can be reliably interpolated, allowing the self-consistent solutions to the Dyson equation to be obtained. This method, therefore, can compute directly and stochastically the transition frequencies of fundamentals and overtones as well as their relative intensities as pole strengths, without fixed-node errors that plague some QMC. It is shown that, for an identical PES, the new method reproduces the correct deterministic values of the energies and frequencies within a few cm⁻¹ and pole strengths within a few thousandths. With the values of a PES evaluated on the fly at random geometries, the new method captures a noticeably greater proportion of anharmonic effects.

  18. Quasi-Monte Carlo integration

    SciTech Connect

    Morokoff, W.J.; Caflisch, R.E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than 1/2 in high dimensions.
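
    A minimal comparison of pseudo-random and Sobol' nodes on a smooth product integrand, assuming SciPy's qmc module is available (the integrand and dimension are illustrative):

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Integrate f(x) = prod(1 + 0.5*(x_i - 0.5)) over [0,1]^d; exact value is 1.
    d, n = 8, 2**12
    f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

    rng = np.random.default_rng(3)
    pseudo = f(rng.random((n, d))).mean()                             # plain Monte Carlo
    sobol = f(qmc.Sobol(d=d, scramble=True, seed=3).random(n)).mean() # quasi-Monte Carlo
    print(abs(pseudo - 1.0), abs(sobol - 1.0))  # the Sobol' error is typically smaller
    ```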

  19. Quasi-Monte Carlo Integration

    NASA Astrophysics Data System (ADS)

    Morokoff, William J.; Caflisch, Russel E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than 1/2 in high dimensions.

  20. Path integral Monte Carlo and the electron gas

    NASA Astrophysics Data System (ADS)

    Brown, Ethan W.

    Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low-density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation. As a first step, we utilize the variational

  1. Stochastic scanning multiphoton multifocal microscopy.

    PubMed

    Jureller, Justin E; Kim, Hee Y; Scherer, Norbert F

    2006-04-17

    Multiparticle tracking with scanning confocal and multiphoton fluorescence imaging is increasingly important for elucidating biological function, as in the transport of intracellular cargo-carrying vesicles. We demonstrate a simple rapid-sampling stochastic scanning multifocal multiphoton microscopy (SS-MMM) fluorescence imaging technique that enables multiparticle tracking without specialized hardware at rates 1,000 times greater than conventional single-point raster scanning. Stochastic scanning of a diffractive-optic-generated 10×10 hexagonal array of foci with a white-noise-driven galvanometer yields a scan pattern that is random yet space-filling. SS-MMM creates a more uniformly sampled image with fewer spatio-temporal artifacts than obtained by conventional or multibeam raster scanning. SS-MMM is verified by simulation and experimentally demonstrated by tracking microsphere diffusion in solution. PMID:19516485

  2. Stochastic Models of Quantum Decoherence

    NASA Astrophysics Data System (ADS)

    Kennerly, Sam

    Suppose a single qubit is repeatedly prepared and evolved under imperfectly-controlled conditions. A drunk model represents uncontrolled interactions on each experimental trial as random or stochastic terms in the qubit's Hamiltonian operator. Time evolution of states is generated by a stochastic differential equation whose sample paths evolve according to the Schrödinger equation. For models with Gaussian white noise which is independent of the qubit's state, the expectation value of the solution obeys a master equation which is identical to the high-temperature limit of the Bloch equation. Drunk models predict that experimental data can appear consistent with decoherence even if qubit states evolve by unitary transformations. Examples are shown in which reversible evolution appears to cause irreversible information loss. This paradox is resolved by distinguishing between the true state of a system and the estimated state inferred from an experimental dataset.
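
    A minimal sketch of a drunk model: each trial evolves unitarily under a randomly detuned σz Hamiltonian, yet the ensemble-averaged ⟨σx⟩ decays as if the qubit were decohering (all parameters are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    omega, D, dt, n_steps, n_trials = 1.0, 0.2, 0.01, 2000, 500

    sx_avg = np.zeros(n_steps)
    for _ in range(n_trials):
        phi = 0.0                        # each trial is a pure, unitary phase evolution
        for i in range(n_steps):
            # Random detuning: Gaussian white noise in the sigma_z Hamiltonian.
            phi += omega * dt + np.sqrt(2 * D * dt) * rng.normal()
            sx_avg[i] += np.cos(phi)     # <sigma_x> for the state (|0> + e^{i phi}|1>)/sqrt(2)
    sx_avg /= n_trials
    # The ensemble average decays like exp(-D t) * cos(omega t): apparent decoherence,
    # even though every individual trajectory was perfectly reversible.
    print(sx_avg[::400])
    ```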

  3. Stochastic thermodynamics with information reservoirs.

    PubMed

    Barato, Andre C; Seifert, Udo

    2014-10-01

    We generalize stochastic thermodynamics to include information reservoirs. Such information reservoirs, which can be modeled as a sequence of bits, modify the second law. For example, work extraction from a system in contact with a single heat bath becomes possible if the system also interacts with an information reservoir. We obtain an inequality, and the corresponding fluctuation theorem, generalizing the standard entropy production of stochastic thermodynamics. From this inequality we can derive an information processing entropy production, which gives the second law in the presence of information reservoirs. We also develop a systematic linear response theory for information processing machines. For a unicyclic machine powered by an information reservoir, the efficiency at maximum power can deviate from the standard value of 1/2. For the case where energy is consumed to erase the tape, the efficiency at maximum erasure rate is found to be 1/2. PMID:25375481

  4. Stochastic weighted particle methods for population balance equations

    SciTech Connect

    Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus

    2011-08-10

    Highlights: • Weight transfer functions for Monte Carlo simulation of coagulation. • Efficient support for single-particle growth processes. • Comparisons to analytic solutions and soot formation problems. • Better numerical accuracy for less common particles. Abstract: A class of coagulation weight transfer functions is constructed, each member of which leads to a stochastic particle algorithm for the numerical treatment of population balance equations. These algorithms are based on systems of weighted computational particles and the weight transfer functions are constructed such that the number of computational particles does not change during coagulation events. The algorithms also facilitate the simulation of physical processes that change single particles, such as growth, or other surface reactions. Four members of the algorithm family have been numerically validated by comparison to analytic solutions to simple problems. Numerical experiments have been performed for complex laminar premixed flame systems in which members of the class of stochastic weighted particle methods were compared to each other and to a direct simulation algorithm. Two of the weighted algorithms have been shown to offer performance advantages over the direct simulation algorithm in situations where interest is focused on the larger particles in a system. The extent of this advantage depends on the particular system and on the quantities of interest.

  5. Stochastic cellular automata model for wildland fire spread dynamics

    NASA Astrophysics Data System (ADS)

    Maduro Almeida, Rodolfo; Macau, Elbert E. N.

    2011-03-01

    A stochastic cellular automata model for wildland fire spread under flat-terrain and no-wind conditions is proposed, and its dynamics is characterized and analyzed. Each cell is in one of three possible states: vegetation cell, burning cell, or burnt cell. The dynamics of fire spread is modeled as a stochastic event with an effective fire spread probability S, which is a function of three probabilities characterizing: the proportion of vegetation cells across the lattice, the probability that a burning cell becomes burnt, and the probability of fire spreading from a burning cell to a neighboring vegetation cell. A set of simulation experiments is performed to analyze the effects of different values of the three probabilities on the fire pattern. Monte Carlo simulations indicate that there is a critical line in the model parameter space that separates the set of parameters for which a fire can propagate from those for which it cannot. Finally, the relevance of the model is discussed in light of computational experiments that illustrate its capability to capture both the dynamical and static qualitative properties of fire propagation.
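
    A minimal sketch of such a cellular automaton, with the three probabilities as free parameters (the values and lattice size are illustrative, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    EMPTY, TREE, BURNING, BURNT = 0, 1, 2, 3
    p_veg, p_burnout, p_spread = 0.6, 0.3, 0.5   # the model's three probabilities
    n = 100

    grid = np.where(rng.random((n, n)) < p_veg, TREE, EMPTY)
    grid[n // 2, n // 2] = BURNING               # ignite the center cell

    for step in range(300):
        burning = grid == BURNING
        if not burning.any():
            break
        # Cells adjacent to a burning cell (von Neumann neighborhood).
        nbr = np.zeros_like(burning)
        nbr[1:, :] |= burning[:-1, :]; nbr[:-1, :] |= burning[1:, :]
        nbr[:, 1:] |= burning[:, :-1]; nbr[:, :-1] |= burning[:, 1:]
        ignite = (grid == TREE) & nbr & (rng.random((n, n)) < p_spread)
        burnout = burning & (rng.random((n, n)) < p_burnout)
        grid[ignite] = BURNING
        grid[burnout] = BURNT
    print("burnt fraction:", (grid == BURNT).mean())
    ```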

  6. Stochastic analysis of long dry spells in Calabria (Southern Italy)

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.

    2015-10-01

    A deficit in precipitation may greatly impact soil moisture, snowpack, streamflow, groundwater and reservoir storage. Among the several approaches available to investigate this phenomenon, one of the most widely applied is the analysis of dry spells. In this study, a non-homogeneous Poisson model has been applied to a set of high-quality daily rainfall series, recorded in southern Italy (Calabria region) during the period 1981-2010, for the stochastic analysis of dry spells. First, some statistical details of the Poisson model are presented. Then, the proposed model is applied to the analysis of long dry spells. In particular, a Monte Carlo technique is used to reproduce the characteristics of the process. As a result, the main characteristics of the long dry spells show patterns clearly related to some geographical features of the study area, such as elevation and latitude. The results obtained from the stochastic modelling of the long dry spells prove that the proposed model is useful for the probability evaluation of droughts, thus improving environmental planning and management.
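
    Sampling from a non-homogeneous Poisson process is typically done by thinning; here is a minimal sketch with a purely illustrative seasonal intensity (the paper's fitted intensity functions are not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def sample_nhpp(rate, t_max, rate_max):
        """Sample event times of a non-homogeneous Poisson process by thinning.

        rate     : intensity function lambda(t)
        rate_max : an upper bound on lambda(t) over [0, t_max]
        """
        times, t = [], 0.0
        while True:
            t += rng.exponential(1.0 / rate_max)    # candidate from a homogeneous process
            if t > t_max:
                return np.array(times)
            if rng.uniform() < rate(t) / rate_max:  # accept with probability lambda/max
                times.append(t)

    # Illustrative seasonal intensity of dry-spell onsets (events per day).
    rate = lambda t: 0.02 * (1.0 + 0.8 * np.sin(2 * np.pi * t / 365.25))
    events = sample_nhpp(rate, t_max=30 * 365.25, rate_max=0.036)
    print(len(events), "events in 30 years")
    ```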

  7. Stochastic spatio-temporal modelling with PCRaster Python

    NASA Astrophysics Data System (ADS)

    Karssenberg, D.; Schmitz, O.; de Jong, K.

    2012-04-01

    PCRaster Python is a software framework for building spatio-temporal models of land surface processes (Karssenberg, Schmitz, Salamon, De Jong, & Bierkens, 2010; PCRaster, 2012). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations, developed in C++, are available to model builders as Python functions. Users create models by combining these functions in a Python script. As construction of large iterative models is often difficult and time-consuming for non-specialists in programming, the software comes with a set of Python framework classes that provide control flow for static modelling, temporal modelling, stochastic modelling using Monte Carlo simulation, and data assimilation techniques including the Ensemble Kalman filter and the Particle Filter. A framework for integrating model components with different time steps and spatial discretizations is currently available as a prototype (Schmitz, de Jong, & Karssenberg, in review). The software includes routines for prompt, interactive visualisation of stochastic spatio-temporal model inputs and outputs. Visualisation techniques include animated maps, time series, probability distributions, and animated maps with exceedance probabilities. The PCRaster Python software is used by researchers from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows and Linux operating systems, and OS X (under development).

  8. Stochastic background of atmospheric cascades

    SciTech Connect

    Wilk, G.; Włodarczyk, Z.

    1993-06-15

    Fluctuations in the atmospheric cascades developing during the propagation of very high energy cosmic rays through the atmosphere are investigated using a stochastic branching model of a pure birth process with immigration. In particular, we show that the multiplicity distributions of secondaries emerging from gamma families are much narrower than those resulting from hadronic families. We argue that the strong intermittent-like behaviour found recently in atmospheric families results from the fluctuations in the cascades themselves and is insensitive to the details of the elementary interactions.

  9. Discrete stability in stochastic programming

    SciTech Connect

    Lepp, R.

    1994-12-31

    In this lecture we study stability properties of stochastic programs with recourse in which the probability measure is approximated by a sequence of weakly convergent discrete measures. Such a discrete approximation approach gives us the possibility of analyzing explicitly the behavior of the second-stage correction function. The approach is based on modern functional-analytic methods for the approximation of extremum problems in function spaces, especially on the notion of the discrete convergence of vectors to an essentially bounded measurable function.

  10. Stochastic background of atmospheric cascades

    NASA Astrophysics Data System (ADS)

    Wilk, G.; Włodarczyk, Z.

    1993-06-01

    Fluctuations in the atmospheric cascades developing during the propagation of very high energy cosmic rays through the atmosphere are investigated using a stochastic branching model of a pure birth process with immigration. In particular, we show that the multiplicity distributions of secondaries emerging from gamma families are much narrower than those resulting from hadronic families. We argue that the strong intermittent-like behaviour found recently in atmospheric families results from the fluctuations in the cascades themselves and is insensitive to the details of the elementary interactions.

  11. Stochastic cooling technology at Fermilab

    NASA Astrophysics Data System (ADS)

    Pasquinelli, Ralph J.

    2004-10-01

    The first antiproton cooling systems were installed and commissioned at Fermilab in 1984-1985. In the interim period, there have been several major upgrades, system improvements, and complete reincarnation of cooling systems. This paper will present some of the technology that was pioneered at Fermilab to implement stochastic cooling systems in both the Antiproton Source and Recycler accelerators. Current performance data will also be presented.

  12. Symmetry and Stochastic Gene Regulation

    NASA Astrophysics Data System (ADS)

    Ramos, Alexandre F.; Hornos, José E. M.

    2007-09-01

    Lorentz-like noncompact Lie symmetry SO(2,1) is found in a spin-boson stochastic model for gene expression. The invariant of the algebra characterizes the switch decay to equilibrium. The azimuthal eigenvalue describes the affinity between the regulatory protein and the gene operator site. Raising and lowering operators are constructed and their actions increase or decrease the affinity parameter. The classification of the noise regime of the gene arises from the group theoretical numbers.

  13. Stochastic neural nets and vision

    NASA Astrophysics Data System (ADS)

    Fall, Thomas C.

    1991-03-01

    A stochastic neural net shares with conventionally defined neural nets the concept that information is processed by a system consisting of a set of nodes (neurons) connected by weighted links (axons). The normal neural net takes in inputs on an initial layer of neurons, which fire appropriately; a neuron of the next layer fires depending on the sum of the weights of the axons leading to it from fired neurons of the first layer. The stochastic neural net differs in that the neurons are more complex and the vision activity is a dynamic process. The first layer (viewing layer) of neurons fires stochastically based on the average brightness of the area it sees and then has a refractory period. The viewing layer looks at the image for several clock cycles. The effect is like that of photosensitive sunglasses that darken in bright light: the neurons over the bright areas are most likely in a refractory period (and thus can't fire) while the neurons over the dark areas are not. Now if we move the sensing layer with respect to the image so that a portion of the neurons formerly over the dark areas are now over the bright ones, they will likely all fire on the first cycle. Thus, on that cycle, one would see a flash from that portion significantly stronger than from the surrounding regions. Movement in the other direction produces a patch that is darker, but this effect is not as noticeable. These effects are collected in a collection layer. This paper discusses the use of the stochastic neural net for edge detection and segmentation of some simple images.
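
    A minimal sketch of the firing-and-refractory mechanism described above, showing the flash produced when the sensing layer moves relative to an edge (the firing rates and the 5-cycle refractory period are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy scene: dark left half, bright right half (a vertical edge).
    img = np.zeros((64, 64))
    img[:, 32:] = 1.0

    refractory = np.zeros(img.shape, dtype=int)  # cycles a neuron must stay silent
    flash = np.zeros_like(img)
    for cycle in range(20):
        if cycle == 10:
            img = np.roll(img, -3, axis=1)   # move the scene under the sensing layer
        p_fire = 0.1 + 0.8 * img             # brighter areas fire more often ...
        fired = (rng.random(img.shape) < p_fire) & (refractory <= 0)
        if cycle == 10:
            flash = fired.astype(float)      # response on the first cycle after motion
        refractory = np.where(fired, 5, refractory - 1)  # ... then go refractory

    # Neurons newly over the bright region were mostly not refractory, so they
    # flash together, unlike the bright interior where many are still silent.
    print("edge strip:", flash[:, 29:32].mean(), "bright interior:", flash[:, 45:60].mean())
    ```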

  14. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    SciTech Connect

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another, along with the truth model, for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation in which the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared based on the time required to evaluate each model, where the Meta-Model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with a reduced simulation using Latin Hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.

  15. Stochastic methods for uncertainty quantification in radiation transport

    SciTech Connect

    Fichtl, Erin D; Prinja, Anil K; Warsa, James S

    2009-01-01

    The use of generalized polynomial chaos (gPC) expansions is investigated for uncertainty quantification in radiation transport. The gPC represents second-order random processes in terms of an expansion of orthogonal polynomials of random variables and is used to represent the uncertain input(s) and unknown(s). We assume a single uncertain input, the total macroscopic cross section, although this does not represent a limitation of the approaches considered here. Two solution methods are examined: the Stochastic Finite Element Method (SFEM) and the Stochastic Collocation Method (SCM). The SFEM entails taking Galerkin projections onto the orthogonal basis, which, for fixed-source problems, yields a linear system of fully coupled equations for the PC coefficients of the unknown. For k-eigenvalue calculations, the SFEM system is non-linear, and a Newton-Krylov method is employed to solve it. The SCM utilizes a suitable quadrature rule to compute the moments or PC coefficients of the unknown(s); thus the SCM solution involves a series of independent deterministic transport solutions. The accuracy and efficiency of the two methods are compared and contrasted. The PC coefficients are used to compute the moments and probability density functions of the unknown(s), which are shown to be accurate by comparison with Monte Carlo results. Our work demonstrates that stochastic spectral expansions are a viable alternative to sampling-based uncertainty quantification techniques, since both provide a complete characterization of the distribution of the flux and the k-eigenvalue. Furthermore, it is demonstrated that, unlike perturbation methods, SFEM and SCM can handle large parameter uncertainty.
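
    The SCM idea, moments computed from independent deterministic solves at quadrature nodes, can be sketched for a toy slab-attenuation problem with one uncertain cross section (the closed-form attenuation stands in for a transport solve; the lognormal parameters are illustrative):

    ```python
    import numpy as np

    # Toy transport problem: transmitted flux u = exp(-Sigma * L) through a slab,
    # with an uncertain (lognormal) total macroscopic cross section Sigma.
    L, mu, sig = 1.0, 0.0, 0.3
    u = lambda z: np.exp(-np.exp(mu + sig * z) * L)   # z is a standard normal

    # Stochastic collocation: Gauss-Hermite quadrature in the random dimension.
    # Each node is one independent deterministic "transport solve" u(z_k).
    xi, w = np.polynomial.hermite.hermgauss(8)
    nodes = np.sqrt(2.0) * xi
    mean_scm = (w / np.sqrt(np.pi)) @ u(nodes)
    var_scm = (w / np.sqrt(np.pi)) @ (u(nodes) - mean_scm) ** 2

    # Brute-force Monte Carlo reference.
    z = np.random.default_rng(8).normal(size=200000)
    print(mean_scm, u(z).mean())   # agree to several digits with only 8 solves
    print(var_scm, u(z).var())
    ```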

  16. Mechanical Autonomous Stochastic Heat Engine.

    PubMed

    Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara

    2016-07-01

    Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir. PMID:27419553

  17. Multiple fields in stochastic inflation

    NASA Astrophysics Data System (ADS)

    Assadullahi, Hooshyar; Firouzjahi, Hassan; Noorbala, Mahdiyar; Vennin, Vincent; Wands, David

    2016-06-01

    Stochastic effects in multi-field inflationary scenarios are investigated. A hierarchy of diffusion equations is derived, the solutions of which yield moments of the numbers of inflationary e-folds. Solving the resulting partial differential equations in multi-dimensional field space is more challenging than in the single-field case. A few tractable examples are discussed, which show that the number of fields is, in general, a critical parameter. When more than two fields are present, for instance, the probability of exploring arbitrarily large-field regions of the potential, otherwise inaccessible to single-field dynamics, becomes non-zero. In some configurations, this gives rise to an infinite mean number of e-folds, regardless of the initial conditions. Another difference with respect to single-field scenarios is that multi-field stochastic effects can be large even at sub-Planckian energy. This opens interesting new possibilities for probing quantum effects in inflationary dynamics, since the moments of the numbers of e-folds can be used to calculate the distribution of primordial density perturbations in the stochastic-δN formalism.

  18. Mechanical Autonomous Stochastic Heat Engine

    NASA Astrophysics Data System (ADS)

    Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara

    2016-07-01

    Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir.

  19. Reactive Monte Carlo sampling with an ab initio potential

    NASA Astrophysics Data System (ADS)

    Leiding, Jeff; Coe, Joshua D.

    2016-05-01

    We present the first application of reactive Monte Carlo in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.

  20. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    NASA Technical Reports Server (NTRS)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.
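
    A minimal sketch of Metropolis MCMC over the weights of a tiny network, yielding a posterior predictive distribution rather than a single point estimate (the Gaussian prior, proposal scale, and architecture are illustrative; the paper's modified Jeffreys prior is not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Data: a noisy sine curve.
    X = np.linspace(-3, 3, 40)[:, None]
    y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 40)

    H = 8                                  # hidden units
    n_w = H + H + H + 1                    # in->hidden weights, biases, hidden->out, bias

    def predict(w, X):
        W1 = w[:H][None, :]; b1 = w[H:2*H]; W2 = w[2*H:3*H]; b2 = w[-1]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def log_post(w):
        resid = y - predict(w, X)
        # Gaussian likelihood (sigma = 0.1) plus a standard-normal prior on weights.
        return -0.5 * (resid @ resid) / 0.1**2 - 0.5 * (w @ w)

    # Random-walk Metropolis over the full weight vector.
    w = rng.normal(0, 0.5, n_w); lp = log_post(w); samples = []
    for i in range(20000):
        w_new = w + rng.normal(0, 0.02, n_w)
        lp_new = log_post(w_new)
        if np.log(rng.uniform()) < lp_new - lp:
            w, lp = w_new, lp_new
        if i % 20 == 0:
            samples.append(w.copy())
    preds = np.array([predict(s, X) for s in samples[500:]])
    print(preds.mean(0)[:5], preds.std(0)[:5])  # posterior predictive mean and spread
    ```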

  1. Quantum Monte Carlo simulations with tensor-network states

    NASA Astrophysics Data System (ADS)

    Song, Jeong Pil; Clay, R. T.

    2011-03-01

    Matrix-product states, generated by the density-matrix renormalization group method, are among the most powerful methods for the simulation of quasi-one-dimensional quantum systems. Direct application of a matrix-product state representation fails for two-dimensional systems, although a number of tensor-network states have been proposed to generalize the concept to two dimensions. We introduce a useful approximate method replacing a 4-index tensor by two matrices in order to contract tensors in two dimensions. We use this formalism as a basis for variational quantum Monte Carlo, optimizing the matrix elements stochastically. We present results on a two-dimensional spinless fermion model including nearest-neighbor Coulomb interactions, and determine the critical Coulomb interaction for the charge density wave state by finite-size scaling. This work was supported by the Department of Energy grant DE-FG02-06ER46315.

  2. Accelerating particle-in-cell simulations using multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Ricketson, Lee

    2015-11-01

    Particle-in-cell (PIC) simulations have been an important tool in understanding plasmas since the dawn of the digital computer. Much more recently, the multilevel Monte Carlo (MLMC) method has accelerated particle-based simulations of a variety of systems described by stochastic differential equations (SDEs), from financial portfolios to porous media flow. The fundamental idea of MLMC is to perform correlated particle simulations using a hierarchy of different time steps, and to use these correlations for variance reduction on the fine-step result. This framework is directly applicable to the Langevin formulation of Coulomb collisions, as demonstrated in previous work, but in order to apply to PIC simulations of realistic scenarios, MLMC must be generalized to incorporate self-consistent evolution of the electromagnetic fields. We present such a generalization, with rigorous results concerning its accuracy and efficiency. We present examples of the method in the collisionless, electrostatic context, and discuss applications and extensions for the future.
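
    The MLMC mechanics, correlated coarse/fine paths combined in a telescoping sum, can be sketched for a scalar SDE before any PIC-specific field coupling enters (the SDE, refinement factor, and sample allocation are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Geometric Brownian motion dX = mu*X dt + sig*X dW; estimate E[X_T].
    mu, sig, T, X0 = 0.05, 0.2, 1.0, 1.0

    def level_estimator(level, n_paths, M=4):
        """Mean of the fine-minus-coarse payoff on one MLMC level.

        Fine and coarse Euler paths share the same Brownian increments, which
        is what makes the variance of the level difference small."""
        nf = M ** level                          # fine steps; coarse uses nf / M
        dt = T / nf
        Xf = np.full(n_paths, X0)
        Xc = np.full(n_paths, X0)
        dW_c = np.zeros(n_paths)
        for step in range(nf):
            dW = rng.normal(0.0, np.sqrt(dt), n_paths)
            Xf += mu * Xf * dt + sig * Xf * dW   # fine Euler step
            dW_c += dW                           # accumulate the shared noise
            if (step + 1) % M == 0:              # one coarse step per M fine steps
                Xc += mu * Xc * (M * dt) + sig * Xc * dW_c
                dW_c[:] = 0.0
        return Xf.mean() if level == 0 else (Xf - Xc).mean()

    # Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with fewer
    # samples spent on the expensive fine levels.
    estimate = sum(level_estimator(l, 40000 // 4**l + 1000) for l in range(4))
    print(estimate, X0 * np.exp(mu * T))         # compare with the exact mean
    ```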

  3. Two-Dimensional Ferromagnet: Quantum Monte Carlo results

    NASA Astrophysics Data System (ADS)

    Henelius, Patrik; Timm, Carsten; Girvin, Steven M.; Sandvik, Anders

    1997-03-01

    In the quantum Hall system the Zeeman interaction between electronic spins and the external magnetic field is typically weak compared to both the Landau-level splitting and the exchange interaction. Therefore, quantum Hall systems at integer filling factors can be ferromagnets. The magnetization and, recently, the nuclear magnetic relaxation rate 1/T1 have been measured for these magnets (S.E. Barrett et al., Phys. Rev. Lett. 72, 1368 (1994); 74, 5112 (1995)). These quantities have been calculated in a Schwinger-boson mean-field approach (N. Read and S. Sachdev, Phys. Rev. Lett. 75, 3509 (1995)). We have calculated these same quantities using a Stochastic Series Expansion Monte Carlo method. The results are compared with the experimental data, the mean-field results, and with 1/N corrections for the mean-field results, calculated by our group.

  4. Monte Carlo simulation of scenario probability distributions

    SciTech Connect

    Glaser, R.

    1996-10-23

    Suppose a scenario of interest can be represented as a series of events. A final result R may be viewed then as the intersection of three events, A, B, and C. The probability of the result P(R) in this case is the product P(R) = P(A) P(B | A) P(C | A ∩ B). An expert may be reluctant to estimate P(R) as a whole yet agree to supply his notions of the component probabilities in the form of prior distributions. Each component prior distribution may be viewed as the stochastic characterization of the expert's uncertainty regarding the true value of the component probability. Mathematically, the component probabilities are treated as independent random variables and P(R) as their product; the induced prior distribution for P(R) is determined which characterizes the expert's uncertainty regarding P(R). It may be both convenient and adequate to approximate the desired distribution by Monte Carlo simulation. Software has been written for this task that allows a variety of component priors that experts with good engineering judgment might feel comfortable with. The priors are mostly based on so-called likelihood classes. The software permits an expert to choose for a given component event probability one of six types of prior distributions, and the expert specifies the parameter value(s) for that prior. Each prior is unimodal. The expert essentially decides where the mode is, how the probability is distributed in the vicinity of the mode, and how rapidly it attenuates away. Limiting and degenerate applications allow the expert to be vague or precise.
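
    The induced prior on P(R) is straightforward to approximate by sampling; a minimal sketch with Beta priors standing in for the six prior families mentioned above (the parameter choices are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n = 100000

    # Expert-supplied priors for the component probabilities (illustrative choices):
    pA = rng.beta(8, 2, n)        # P(A): expert thinks ~0.8, fairly confident
    pB_A = rng.beta(5, 5, n)      # P(B | A): centered on 0.5, vague
    pC_AB = rng.beta(2, 8, n)     # P(C | A and B): likely small
    pR = pA * pB_A * pC_AB        # induced prior on P(R), components independent

    print("mean:", pR.mean())
    print("90% interval:", np.quantile(pR, [0.05, 0.95]))
    ```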

  5. Relative frequencies of constrained events in stochastic processes: An analytical approach

    NASA Astrophysics Data System (ADS)

    Rusconi, S.; Akhmatskaya, E.; Sokolovski, D.; Ballard, N.; de la Cal, J. C.

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of PDFs, and using experimental data, different optimization schemes can be applied in order to evaluate probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding, and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in the exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.

  6. A stochastic analysis of steady and transient heat conduction in random media using a homogenization approach

    SciTech Connect

    Zhijie Xu

    2014-07-01

    We present a new stochastic analysis for steady and transient one-dimensional heat conduction problems based on the homogenization approach. The thermal conductivity is assumed to be a random field K consisting of a total number N of random variables. Both steady and transient solutions T are expressed in terms of the homogenized solution and its spatial derivatives, where the homogenized solution is obtained by solving the homogenized equation with an effective thermal conductivity. Both the mean and variance of the stochastic solutions can be obtained analytically for a K field consisting of independent identically distributed (i.i.d.) random variables. The mean and variance of T are shown to depend only on the mean and variance of these i.i.d. variables, not on the particular form of their probability distribution function. The variance of the temperature field T can be separated into two contributions: the ensemble contribution (through the homogenized temperature) and the configurational contribution (through the random variable Ln(x)). The configurational contribution is shown to be proportional to the local gradient of the homogenized solution. Large uncertainty of the T field was found at locations with a large gradient of the homogenized solution, due to the significant configurational contributions at these locations. Numerical simulations were implemented based on a direct Monte Carlo method, and good agreement is obtained between the numerical Monte Carlo results and the proposed stochastic analysis.

  7. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of PDFs, and using experimental data, different optimization schemes can be applied in order to evaluate probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding, and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in the exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications. PMID:26565363

  8. Integrated Stochastic Evaluation of Flood and Vegetation Dynamics in Riverine Landscapes

    NASA Astrophysics Data System (ADS)

    Miyamoto, H.; Kimura, R.

    2014-12-01

    Areal expansion of trees on gravel beds and sand bars has been a serious problem for river management in Japan. From the viewpoints of ecological restoration and flood control, it is necessary to accurately predict vegetation dynamics over long periods of time. This presentation evaluates both the vegetation overgrowth tendency and flood protection safety in an integrated manner for several vegetated channels in Kako River, Japan. The predominant tree species in Kako River are willows and bamboos. The evaluation employs a stochastic process model, which has been developed for statistically evaluating flow and vegetation status in a river course through Monte Carlo simulation. The model for vegetation dynamics includes the effects of tree growth, mortality by flood impacts, and infant tree invasion. Through the Monte Carlo simulation for several cross sections in Kako River, responses of the vegetated channels are stochastically evaluated in terms of changes in discharge magnitude and channel geomorphology. The results show that the river channels with high flood protection priority, together with the corresponding vegetation status, can be identified among the several channel sections. The present investigation suggests that such stochastic analysis could be a powerful diagnostic method for river management.

  9. Stochastic response surface methods (SRSMs) for uncertainty propagation: Application to environmental and biological systems

    SciTech Connect

    Isukapalli, S.S.; Roy, A.; Georgopoulos, P.G.

    1998-06-01

    Comprehensive uncertainty analyses of complex models of environmental and biological systems are essential but often not feasible due to the computational resources they require. Traditional methods for propagating uncertainty and developing probability densities of model outputs, such as standard Monte Carlo and Latin Hypercube Sampling, may in fact require performing a prohibitive number of model simulations. An alternative is offered, for a wide range of problems, by the computationally efficient Stochastic Response Surface Methods (SRSMs) for uncertainty propagation. These methods extend the classical response surface methodology to systems with stochastic inputs and outputs. This is accomplished by approximating both inputs and outputs of the uncertain system through stochastic series of well-behaved standard random variables; the series expansions of the outputs contain unknown coefficients which are calculated by a method that uses the results of a limited number of model simulations. Two case studies are presented here involving (a) a physiologically-based pharmacokinetic (PBPK) model for perchloroethylene (PERC) for humans, and (b) an atmospheric photochemical model, the Reactive Plume Model (RPM-IV). The results obtained agree closely with those of traditional Monte Carlo and Latin Hypercube Sampling methods, while significantly reducing the required number of model simulations.
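
    A minimal sketch of the SRSM idea for a single standard-normal input: expand the output in Hermite polynomials, fit the coefficients from a handful of model runs, and read the moments off the coefficients (the stand-in model, expansion order, and sample count are illustrative):

    ```python
    import math
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander

    rng = np.random.default_rng(12)

    # Stand-in "simulator": y = exp(0.4 z), with the uncertain input already
    # expressed as a standard normal random variable z (illustrative assumption).
    model = lambda z: np.exp(0.4 * z)

    # Fit a 3rd-order Hermite (probabilists') chaos expansion from 20 model runs.
    z_train = rng.normal(size=20)
    Psi = hermevander(z_train, 3)              # columns He_0(z) .. He_3(z)
    coef, *_ = np.linalg.lstsq(Psi, model(z_train), rcond=None)

    # Moments follow from orthogonality: E[y] = c_0, Var[y] = sum_k k! c_k^2.
    mean_pce = coef[0]
    var_pce = sum(math.factorial(k) * coef[k] ** 2 for k in range(1, 4))

    z = rng.normal(size=200000)                # Monte Carlo reference
    print(mean_pce, model(z).mean())           # close, from only 20 model runs
    print(var_pce, model(z).var())
    ```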

  10. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment, based on the results of heavy-ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  11. The analysis of a sparse grid stochastic collocation method for partial differential equations with high-dimensional random input data.

    SciTech Connect

    Webster, Clayton; Tempone, Raul; Nobile, Fabio

    2007-12-01

    This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.

  12. AESS: Accelerated Exact Stochastic Simulation

    NASA Astrophysics Data System (ADS)

    Jenkins, David D.; Peterson, Gregory D.

    2011-12-01

    The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results.

    Program summary: Program title: AESS. Catalogue identifier: AEJW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: University of Tennessee copyright agreement. No. of lines in distributed program, including test data, etc.: 10 861. No. of bytes in distributed program, including test data, etc.: 394 631. Distribution format: tar.gz. Programming language: C for processors, CUDA for NVIDIA GPUs. Computer: developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs; the system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators. Operating system: tested under Ubuntu Linux OS and CentOS 5.5 Linux OS. Classification: 3, 16.12. Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME. Solution
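
    For reference, the original direct method that AESS accelerates fits in a few lines; a minimal sketch for a two-reaction network (the rates and network are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(13)

    # Gillespie's direct method for A -> B -> C with rate constants k1, k2.
    k1, k2 = 1.0, 0.5
    x = np.array([100, 0, 0])                    # initial populations of A, B, C
    stoich = np.array([[-1, 1, 0], [0, -1, 1]])  # state change for each reaction
    t, t_end = 0.0, 10.0

    while True:
        props = np.array([k1 * x[0], k2 * x[1]])  # reaction propensities
        a0 = props.sum()
        if a0 == 0.0:
            break                                 # no reactions can fire
        t += rng.exponential(1.0 / a0)            # time to the next reaction
        if t > t_end:
            break
        j = rng.choice(2, p=props / a0)           # which reaction fires
        x += stoich[j]
    print(t, x)
    ```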

  13. Stochastic Approximation of Dynamical Exponent at Quantum Critical Point

    NASA Astrophysics Data System (ADS)

    Suwa, Hidemaro; Yasuda, Shinya; Todo, Synge

    We have developed a unified finite-size scaling method for quantum phase transitions that requires no prior knowledge of the dynamical exponent z. During a quantum Monte Carlo simulation, the temperature is automatically tuned by the Robbins-Monro stochastic approximation method, being kept proportional to the lowest gap of the finite-size system. The dynamical exponent is estimated in a straightforward way from the system-size dependence of the temperature. As a demonstration of our method, the two-dimensional S = 1/2 quantum XY model, or equivalently the hard-core boson system, in uniform and staggered magnetic fields is investigated in combination with the world-line quantum Monte Carlo worm algorithm. In the absence of a uniform magnetic field, we obtain a result fully consistent with Lorentz invariance at the quantum critical point, z = 1. Under a finite uniform magnetic field, on the other hand, the dynamical exponent becomes two, and the mean-field universality with effective dimension (2+2) governs the quantum phase transition. We will also discuss the system with random magnetic fields, or the dirty boson system, bearing a non-trivial dynamical exponent. Reference: S. Yasuda, H. Suwa, and S. Todo, Phys. Rev. B 92, 104411 (2015); arXiv:1506.04837

  14. Stochastic approximation of dynamical exponent at quantum critical point

    NASA Astrophysics Data System (ADS)

    Yasuda, Shinya; Suwa, Hidemaro; Todo, Synge

    2015-09-01

    We have developed a unified finite-size scaling method for quantum phase transitions that requires no prior knowledge of the dynamical exponent z. During a quantum Monte Carlo simulation, the temperature is automatically tuned by the Robbins-Monro stochastic approximation method, being kept proportional to the lowest gap of the finite-size system. The dynamical exponent is estimated in a straightforward way from the system-size dependence of the temperature. As a demonstration of our method, the two-dimensional S = 1/2 quantum XY model in uniform and staggered magnetic fields is investigated in combination with the world-line quantum Monte Carlo worm algorithm. In the absence of a uniform magnetic field, we obtain a result fully consistent with Lorentz invariance at the quantum critical point, z = 1, i.e., the three-dimensional classical XY universality class. Under a finite uniform magnetic field, on the other hand, the dynamical exponent becomes two, and the mean-field universality with effective dimension (2+2) governs the quantum phase transition.
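
    The Robbins-Monro recursion that drives such automatic tuning is generic stochastic approximation; here is a minimal sketch on a scalar root-finding problem with noisy evaluations (the function and gain sequence are illustrative, standing in for a noisy Monte Carlo estimate of the gap):

    ```python
    import numpy as np

    rng = np.random.default_rng(14)

    # Robbins-Monro: find theta with E[F(theta)] = target from noisy evaluations.
    noisy_f = lambda th: th**2 + rng.normal(0, 0.1)   # noisy stand-in observable
    target = 2.0

    theta = 1.0
    for n in range(1, 5001):
        theta -= (1.0 / n) * (noisy_f(theta) - target)  # decaying gain a_n = 1/n
    print(theta, np.sqrt(target))   # converges to the root theta* = sqrt(2)
    ```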

  15. Long time behaviour of a stochastic nanoparticle

    NASA Astrophysics Data System (ADS)

    Étoré, Pierre; Labbé, Stéphane; Lelong, Jérôme

    2014-09-01

    In this article, we are interested in the behaviour of a single ferromagnetic mono-domain particle subjected to an external field with a stochastic perturbation. This model is a first step toward the mathematical understanding of thermal effects on a ferromagnet. In the first part, we present the stochastic model and prove that the associated stochastic differential equation is well defined. The second part is dedicated to the study of the long time behaviour of the magnetic moment, and in the third part we prove that the stochastic perturbation induces a non-reversibility phenomenon. Finally, we illustrate these results through numerical simulations of our stochastic model. The main results presented in this article are, on the one hand, the rate of convergence of the magnetization toward the unique stable equilibrium of the deterministic model and, on the other hand, a sharp estimate of the hysteresis phenomenon induced by the stochastic perturbation (recall that with no perturbation, the magnetic moment remains constant).

  16. Generalized spectral decomposition for stochastic nonlinear problems

    SciTech Connect

    Nouy, Anthony; Le Maitre, Olivier P.

    2009-01-10

    We present an extension of the generalized spectral decomposition method for the resolution of nonlinear stochastic problems. The method consists in the construction of a reduced basis approximation of the Galerkin solution and is independent of the stochastic discretization selected (polynomial chaos, stochastic multi-element or multi-wavelets). Two algorithms are proposed for the sequential construction of the successive generalized spectral modes. They involve decoupled resolutions of a series of deterministic and low-dimensional stochastic problems. Compared to the classical Galerkin method, the algorithms allow for significant computational savings and require minor adaptations of the deterministic codes. The methodology is detailed and tested on two model problems, the one-dimensional steady viscous Burgers equation and a two-dimensional nonlinear diffusion problem. These examples demonstrate the effectiveness of the proposed algorithms which exhibit convergence rates with the number of modes essentially dependent on the spectrum of the stochastic solution but independent of the dimension of the stochastic approximation space.

  17. Ant colony optimization and stochastic gradient descent.

    PubMed

    Meuleau, Nicolas; Dorigo, Marco

    2002-01-01

    In this article, we study the relationship between the two techniques known as ant colony optimization (ACO) and stochastic gradient descent. More precisely, we show that some empirical ACO algorithms approximate stochastic gradient descent in the space of pheromones, and we propose an implementation of stochastic gradient descent that belongs to the family of ACO algorithms. We then use this insight to explore the mutual contributions of the two techniques. PMID:12171633
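
    As a concrete illustration of this correspondence, consider a toy pheromone model (the Bernoulli parameterization, quality function, and constant baseline below are assumptions for the sketch, not constructs from the article): sampling solutions from pheromone-defined probabilities and reinforcing the components of good solutions performs stochastic gradient ascent on the expected solution quality.

      import numpy as np

      rng = np.random.default_rng(2)
      n_bits = 20
      hidden = rng.integers(0, 2, n_bits)          # unknown optimum of the toy problem
      theta = np.zeros(n_bits)                     # "pheromone" parameters, one per component

      def quality(s):
          return (s == hidden).mean()              # fraction of correct components, in [0, 1]

      lr, baseline = 0.5, 0.5
      for _ in range(5000):
          p = 1.0 / (1.0 + np.exp(-theta))         # sampling probabilities from pheromones
          s = (rng.random(n_bits) < p).astype(int)
          # Score-function (REINFORCE) estimate of the gradient of E[quality];
          # the constant baseline reduces variance without biasing the update.
          theta += lr * (quality(s) - baseline) * (s - p)
      print(quality((theta > 0).astype(int)))      # the best guess recovers most or all bits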

  18. Stochastic Vorticity and Associated Filtering Theory

    SciTech Connect

    Amirdjanova, A.; Kallianpur, G.

    2002-12-19

    The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.

  19. Stochastic Turing patterns on a network.

    PubMed

    Asslani, Malbor; Di Patti, Francesca; Fanelli, Duccio

    2012-10-01

    The process of stochastic Turing instability on a scale-free network is discussed for a specific case study: the stochastic Brusselator model. The system is shown to spontaneously differentiate into activator-rich and activator-poor nodes outside the region of parameters classically deputed to the deterministic Turing instability. This phenomenon, as revealed by direct stochastic simulations, is explained analytically and eventually traced back to the finite-size corrections stemming from the inherent graininess of the scrutinized medium. PMID:23214650

  20. Stochastic Turing patterns on a network

    NASA Astrophysics Data System (ADS)

    Asslani, Malbor; Di Patti, Francesca; Fanelli, Duccio

    2012-10-01

    The process of stochastic Turing instability on a scale-free network is discussed for a specific case study: the stochastic Brusselator model. The system is shown to spontaneously differentiate into activator-rich and activator-poor nodes outside the region of parameters classically deputed to the deterministic Turing instability. This phenomenon, as revealed by direct stochastic simulations, is explained analytically and eventually traced back to the finite-size corrections stemming from the inherent graininess of the scrutinized medium.

  1. On the Application of a Hybrid Monte Carlo Technique to Radiation Transfer in the Post-Explosion Phase of Type IA Supernovae

    NASA Astrophysics Data System (ADS)

    Wollaeger, Ryan; van Rossum, Daniel; Graziani, Carlo; Couch, Sean; Jordan, George; Lamb, Donald; Moses, Gregory

    2013-10-01

    We apply Implicit Monte Carlo (IMC) and Discrete Diffusion Monte Carlo (DDMC) to Nomoto's W7 model of Type Ia Supernovae (SNe Ia). IMC is a stochastic method for solving the nonlinear radiation transport equations. DDMC is a stochastic radiation diffusion method that is generally used to accelerate IMC for Monte Carlo (MC) particle histories in optically thick regions of space. The hybrid IMC-DDMC method has recently been extended to account for multifrequency and velocity effects. SNe Ia are thermonuclear explosions of white dwarf stars that produce characteristic light curves and spectra sourced by radioactive decay of 56Ni. We exhibit the advantages of the hybrid MC approach relative to pure IMC for the W7 model. These results shed light on the viability of IMC-DDMC in more sophisticated, multi-dimensional simulations of SNe Ia. This work was supported in part by the University of Chicago and the National Science Foundation under grant AST-0909132.
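
    The cost motivation for DDMC can be sketched with a standard random-walk argument (a generic scattering-slab toy, not the W7 configuration): in a purely scattering medium of optical depth tau, an MC particle undergoes on the order of tau**2 scatterings before escaping, which is the work that a diffusion treatment avoids.

      import numpy as np

      rng = np.random.default_rng(3)

      def mean_steps_to_escape(tau, n_particles=2000):
          # Isotropic 1D random walk with unit mean free path, started at the slab centre.
          total = 0
          for _ in range(n_particles):
              x, steps = tau / 2.0, 0
              while 0.0 < x < tau:
                  x += rng.exponential() * rng.choice([-1.0, 1.0])   # new free path, random direction
                  steps += 1
              total += steps
          return total / n_particles

      for tau in (5.0, 10.0, 20.0):
          print(tau, mean_steps_to_escape(tau))   # mean step count grows roughly like tau**2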

  2. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media

    SciTech Connect

    Müller, Florian Jenny, Patrick Meyer, Daniel W.

    2013-10-01

    Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
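
    Independently of the flow solver, the MLMC mechanics can be sketched on a toy problem (the geometric Brownian motion, parameter values, and per-level sample counts below are assumptions for the illustration): estimate E[X_T] as a telescoping sum of level differences, with coupled coarse/fine paths sharing Brownian increments, so that many cheap coarse samples carry most of the variance.

      import numpy as np

      rng = np.random.default_rng(4)
      T, a, b, x0 = 1.0, 0.05, 0.2, 1.0   # toy geometric Brownian motion dX = a X dt + b X dW

      def euler_pair(level, n_samples):
          """Mean of (fine - coarse) Euler estimates of X_T at this level."""
          n_fine = 2 ** level
          dt = T / n_fine
          dw = np.sqrt(dt) * rng.standard_normal((n_samples, n_fine))
          xf = np.full(n_samples, x0)
          for i in range(n_fine):
              xf += a * xf * dt + b * xf * dw[:, i]
          if level == 0:
              return xf.mean()                     # base level: plain MC estimate
          xc = np.full(n_samples, x0)
          for i in range(0, n_fine, 2):
              dw2 = dw[:, i] + dw[:, i + 1]        # coarse path uses summed increments
              xc += a * xc * (2 * dt) + b * xc * dw2
          return (xf - xc).mean()

      levels, n_per_level = 5, [40000, 20000, 10000, 5000, 2500, 1250]
      estimate = sum(euler_pair(l, n) for l, n in zip(range(levels + 1), n_per_level))
      print(estimate, x0 * np.exp(a * T))          # MLMC estimate vs exact E[X_T]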

  3. Monte Carlo calculations of nuclei

    SciTech Connect

    Pieper, S.C.

    1997-10-01

    Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

  4. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.

  5. Synchronous Parallel Kinetic Monte Carlo

    SciTech Connect

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
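
    For reference, the standard serial kMC loop that such parallel algorithms generalize is the residence-time (BKL/Gillespie-type) algorithm; the two-event rate catalogue below is an assumption made for the illustration.

      import numpy as np

      rng = np.random.default_rng(5)
      rates = np.array([1.0, 0.25])       # toy catalogue: two event types with fixed rates
      counts = np.zeros(2, dtype=int)
      t, t_end = 0.0, 1000.0
      while t < t_end:
          total = rates.sum()
          t += rng.exponential(1.0 / total)            # residence time ~ Exp(total rate)
          i = rng.choice(len(rates), p=rates / total)  # pick an event with probability rate/total
          counts[i] += 1                               # apply the event (here: just count it)
      print(counts / counts.sum(), rates / rates.sum())  # event frequencies match rate fractions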

  6. Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Lermusiaux, Pierre F. J.

    2016-04-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle-speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and the corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.

  7. Geostatistical and Stochastic Study of Flow and Transport in the Unsaturated Zone at Yucca Mountain

    SciTech Connect

    Ye, Ming; Pan, Feng; Hu, Xiaolong; Zhu, Jianting

    2007-08-14

    Yucca Mountain has been proposed by the U.S. Department of Energy as the nation’s long-term, permanent geologic repository for spent nuclear fuel or high-level radioactive waste. The potential repository would be located in Yucca Mountain’s unsaturated zone (UZ), which acts as a critical natural barrier delaying arrival of radionuclides to the water table. Since radionuclide transport in groundwater can pose serious threats to human health and the environment, it is important to understand how much and how fast water and radionuclides travel through the UZ to groundwater. The UZ system consists of multiple hydrogeologic units whose hydraulic and geochemical properties exhibit systematic and random spatial variation, or heterogeneity, at multiple scales. Predictions of radionuclide transport under such complicated conditions are uncertain, and the uncertainty complicates decision making and risk analysis. This project aims at using geostatistical and stochastic methods to assess uncertainty of unsaturated flow and radionuclide transport in the UZ at Yucca Mountain. The focus of this study is the parameter uncertainty of the hydraulic and transport properties of the UZ. The parametric uncertainty arises since limited parameter measurements are unable to deterministically describe the spatial variability of the parameters. In this project, the matrix porosity, permeability, and sorption coefficient of the reactive tracer (neptunium) of the UZ are treated as random variables. The corresponding propagation of parametric uncertainty is quantitatively measured using the mean, variance, and 5th and 95th percentiles of simulated state variables (e.g., saturation, capillary pressure, percolation flux, and travel time). These statistics are evaluated using a Monte Carlo method, in which a three-dimensional flow and transport model implemented using the TOUGH2 code is executed with multiple realizations of the random model parameters. The project specifically studies uncertainty of unsaturated

  8. Kinetic Monte Carlo with fields: diffusion in heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Caro, Jose Alfredo

    2011-03-01

    It is commonly perceived that to achieve breakthrough scientific discoveries in the 21st century, an integration of world-leading experimental capabilities with theory, computational modeling, and high performance computer simulations is necessary. Lying between the atomic and the macro scales, the meso scale is crucial for advancing materials research. Deterministic methods are computationally too heavy to cover the length and time scales relevant to this scale; therefore, stochastic approaches are one of the options of choice. In this talk I will describe recent progress in efficient parallelization schemes for Metropolis and kinetic Monte Carlo [1-2], and the combination of these ideas into a new hybrid Molecular Dynamics-kinetic Monte Carlo algorithm developed to study the basic mechanisms taking place in diffusion in concentrated alloys under the action of chemical and stress fields, incorporating in this way the actual driving force emerging from chemical potential gradients. Applications are shown on precipitation and segregation in nanostructured materials. Work in collaboration with E. Martinez, LANL, and with B. Sadigh, P. Erhart and A. Stukowsky, LLNL. Supported by the Center for Materials at Irradiation and Mechanical Extremes, an Energy Frontier Research Center funded by the U.S. Department of Energy (Award # 2008LANL1026) at Los Alamos National Laboratory.

  9. Stochastics In Circumplanetary Dust Dynamics

    NASA Astrophysics Data System (ADS)

    Spahn, F.; Krivov, A. V.; Sremcevic, M.; Schwarz, U.; Kurths, J.

    Charged dust grains in circumplanetary environments experience, beyond various deterministic forces, also stochastic perturbations: e.g., fluctuations of the magnetic field, of the grain charge, etc. Here, we investigate the dynamics of a dust population in a circular orbit around the planet which is perturbed by a stochastic magnetic field B, modeled by an isotropic Gaussian white noise. The resulting perturbation equations give rise to a modified diffusion of the inclinations and eccentricities, ⟨x²⟩ = D [t ± sin(2nt)/(2n)] (x - alias for the eccentricity e and the inclination i; t - time). The diffusion coefficient is found to be D = G²/n, where the gyrofrequency and the orbital frequency are denoted by G and n, respectively. This behavior has been checked by numerical experiments. We have chosen dust grains (1 µm in radius) initially moving in circular orbits around a planet (Jupiter) and integrated their trajectories numerically over their typical lifetimes (100 years). The particles were exposed to a Gaussian fluctuating magnetic field B obeying the same statistical properties as in the analytical treatment. In this case, the theoretical findings have been confirmed according to ⟨x²⟩ ∝ D t with a diffusion coefficient D ≈ G²/n. The theoretical studies showed that the statistical properties of B are of decisive importance. To this aim, we analyzed the magnetic field data measured by the Galileo magnetometer at Jupiter and found almost Gaussian fluctuations of about 5% of the mean field and exponentially decaying correlations. This results in a diffusion in the space of orbital elements of at least 1-5% (variations of inclinations and eccentricity) over the lifetime of the dust grains. For smaller dusty motes, stochastics might well dominate the dynamics.

  10. Analysis of stochastic effects in chemically amplified poly(4-hydroxystyrene-co-t-butyl methacrylate) resist

    NASA Astrophysics Data System (ADS)

    Kozawa, Takahiro; Santillan, Julius Joseph; Itani, Toshiro

    2016-07-01

    Understanding of stochastic phenomena is essential to the development of a highly sensitive resist for nanofabrication. In this study, we investigated the stochastic effects in a chemically amplified resist consisting of poly(4-hydroxystyrene-co-t-butyl methacrylate), triphenylsulfonium nonafluorobutanesulfonate (acid generator), and tri-n-octylamine (quencher). Scanning electron microscopy (SEM) images of resist patterns were analyzed by Monte Carlo simulation on the basis of the sensitization and reaction mechanisms of chemically amplified extreme ultraviolet resists. It was estimated that a ±0.82σ fluctuation of the number of protected units per polymer molecule led to line edge roughness formation. Here, σ is the standard deviation of the number of protected units per polymer molecule after postexposure baking (PEB). The threshold for the elimination of stochastic bridge generation was 4.38σ (the difference between the average number of protected units after PEB and the dissolution point). The threshold for the elimination of stochastic pinching was 2.16σ.

  11. Statistical inference in a stochastic epidemic SEIR model with control intervention: Ebola as a case study.

    PubMed

    Lekone, Phenyo E; Finkenstädt, Bärbel F

    2006-12-01

    A stochastic discrete-time susceptible-exposed-infectious-recovered (SEIR) model for infectious diseases is developed with the aim of estimating parameters from daily incidence and mortality time series for an outbreak of Ebola in the Democratic Republic of Congo in 1995. The incidence time series exhibit many low integers as well as zero counts requiring an intrinsically stochastic modeling approach. In order to capture the stochastic nature of the transitions between the compartmental populations in such a model we specify appropriate conditional binomial distributions. In addition, a relatively simple temporally varying transmission rate function is introduced that allows for the effect of control interventions. We develop Markov chain Monte Carlo methods for inference that are used to explore the posterior distribution of the parameters. The algorithm is further extended to integrate numerically over state variables of the model, which are unobserved. This provides a realistic stochastic model that can be used by epidemiologists to study the dynamics of the disease and the effect of control interventions. PMID:17156292
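
    The conditional-binomial construction described here can be sketched directly (the population size and rates below are illustrative assumptions, not the estimates obtained in the paper): each day, the S→E, E→I, and I→R transitions are drawn from binomial distributions conditional on the current compartment sizes.

      import numpy as np

      rng = np.random.default_rng(6)
      N = 200_000                                  # population size (illustrative)
      beta, kappa, gamma = 0.3, 1 / 9.4, 1 / 5.7   # transmission, E->I and I->R rates per day (illustrative)
      S, E, I, R = N - 1, 0, 1, 0
      new_cases = []
      for day in range(365):
          p_SE = 1.0 - np.exp(-beta * I / N)             # daily infection probability per susceptible
          b_SE = rng.binomial(S, p_SE)                   # S -> E transitions
          b_EI = rng.binomial(E, 1.0 - np.exp(-kappa))   # E -> I
          b_IR = rng.binomial(I, 1.0 - np.exp(-gamma))   # I -> R
          S, E, I, R = S - b_SE, E + b_SE - b_EI, I + b_EI - b_IR, R + b_IR
          new_cases.append(b_EI)
      print(R, max(new_cases))                     # final epidemic size and peak daily incidence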

  12. Stochastic lattice gas model describing the dynamics of the SIRS epidemic process

    NASA Astrophysics Data System (ADS)

    de Souza, David R.; Tomé, Tânia

    2010-03-01

    We study a stochastic process describing the onset of spreading dynamics of an epidemic in a population composed of individuals of three classes: susceptible (S), infected (I), and recovered (R). The stochastic process is defined by local rules and involves the following cyclic process: S → I → R → S (SIRS). The open process S → I → R (SIR) is studied as a particular case of the SIRS process. The epidemic process is analyzed at different levels of description: by a stochastic lattice gas model and by a birth and death process. By means of Monte Carlo simulations and dynamical mean-field approximations we show that the SIRS stochastic lattice gas model exhibits a line of critical points separating two phases: an absorbing phase where the lattice is completely full of S individuals and an active phase where S, I and R individuals coexist, which may or may not present population cycles. The critical line, which corresponds to the onset of epidemic spreading, is shown to belong to the directed percolation universality class. By considering the birth and death process we analyze the role of noise in stabilizing the oscillations.

  13. Stochastic dynamic causal modelling of fMRI data: Should we care about neural noise?

    PubMed Central

    Daunizeau, J.; Stephan, K.E.; Friston, K.J.

    2012-01-01

    Dynamic causal modelling (DCM) was introduced to study the effective connectivity among brain regions using neuroimaging data. Until recently, DCM relied on deterministic models of distributed neuronal responses to external perturbation (e.g., sensory stimulation or task demands). However, accounting for stochastic fluctuations in neuronal activity and their interaction with task-specific processes may be of particular importance for studying state-dependent interactions. Furthermore, allowing for random neuronal fluctuations may render DCM more robust to model misspecification and finesse problems with network identification. In this article, we examine stochastic dynamic causal models (sDCM) in relation to their deterministic counterparts (dDCM) and highlight questions that can only be addressed with sDCM. We also compare the network identification performance of deterministic and stochastic DCM, using Monte Carlo simulations and an empirical case study of absence epilepsy. For example, our results demonstrate that stochastic DCM can exploit the modelling of neural noise to discriminate between direct and mediated connections. We conclude with a discussion of the added value and limitations of sDCM, in relation to its deterministic homologue. PMID:22579726

  14. A stochastic averaging method for analyzing vibro-impact systems under Gaussian white noise excitations

    NASA Astrophysics Data System (ADS)

    Gu, Xudong; Zhu, Weiqiu

    2014-04-01

    A new stochastic averaging method for predicting the response of vibro-impact (VI) systems to random perturbations is proposed. First, the free VI system (without damping and random perturbation) is analyzed. The impact condition for the displacement is transformed to one for the system energy. Thus, the motion of the free VI system is divided into periodic motion without impact and quasi-periodic motion with impact according to the level of system energy. The energy loss during each impact is found to be related to the restitution factor and the energy level before impact. Under the assumption of light damping and weak random perturbation, the system energy is a slowly varying process and an averaged Itô stochastic differential equation for the system energy can be derived. The drift and diffusion coefficients of the averaged Itô equation for system energy without impact are functions of the damping and the random excitations, and those for system energy with impact are functions of the damping, the random excitations and the impact energy loss. Finally, the averaged Fokker-Planck-Kolmogorov (FPK) equation associated with the averaged Itô equation is derived and solved to yield the stationary probability density of system energy. Numerical results for a nonlinear VI oscillator are obtained to illustrate the proposed stochastic averaging method. Monte Carlo simulation (MCS) is also conducted to show that the proposed stochastic averaging method is quite effective.

  15. Incorporating Wind Power Forecast Uncertainties Into Stochastic Unit Commitment Using Neural Network-Based Prediction Intervals.

    PubMed

    Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas

    2015-09-01

    Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties on system operation, stability, and reliability in smart grids. In this paper, the nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single level PI, wind power forecast uncertainties are represented in a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. Then the wind power scenarios are incorporated into a stochastic security-constrained unit commitment (SCUC) model. The heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporated with interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs, and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids. PMID:25532191
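
    The scenario-generation step can be sketched as inverse-transform sampling from an ECDF interpolated through the forecast quantiles (the quantile values below are made-up stand-ins for one hour of wind power forecasts, not data from the paper):

      import numpy as np

      rng = np.random.default_rng(7)
      # Assumed quantile points for one hour: (probability level, wind power in MW).
      probs = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
      power = np.array([12.0, 28.0, 40.0, 55.0, 80.0])

      def sample_scenarios(n):
          u = rng.random(n)                    # uniform draws
          # Inverse ECDF by linear interpolation between the quantile points;
          # np.interp clamps draws outside [0.05, 0.95] to the end quantiles (a simplification).
          return np.interp(u, probs, power)

      scenarios = sample_scenarios(1000)       # Monte Carlo wind power scenarios for this hour
      print(scenarios.mean(), np.percentile(scenarios, [5, 50, 95]))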

  16. SLUG-STOCHASTICALLY LIGHTING UP GALAXIES. I. METHODS AND VALIDATING TESTS

    SciTech Connect

    Da Silva, Robert L.; Fumagalli, Michele; Krumholz, Mark

    2012-02-01

    The effects of stochasticity on the luminosities of stellar populations are an often neglected but crucial element for understanding populations in the low-mass or the low star formation rate regime. To address this issue, we present SLUG, a new code to 'Stochastically Light Up Galaxies'. SLUG synthesizes stellar populations using a Monte Carlo technique that properly treats stochastic sampling including the effects of clustering, the stellar initial mass function, star formation history, stellar evolution, and cluster disruption. This code produces many useful outputs, including (1) catalogs of star clusters and their properties such as their stellar initial mass distributions and their photometric properties in a variety of filters, (2) two dimensional histograms of color-magnitude diagrams of every star in the simulation, and (3) the photometric properties of field stars and the integrated photometry of the entire simulated galaxy. After presenting the SLUG algorithm in detail, we validate the code through comparisons with STARBURST99 in the well-sampled regime, and with observed photometry of Milky Way clusters. Finally, we demonstrate SLUG's capabilities by presenting outputs in the stochastic regime. SLUG is publicly distributed through the Web site http://sites.google.com/site/runslug/.

  17. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452

  18. Hamilton's principle in stochastic mechanics

    NASA Astrophysics Data System (ADS)

    Pavon, Michele

    1995-12-01

    In this paper we establish three variational principles that provide new foundations for Nelson's stochastic mechanics in the case of nonrelativistic particles without spin. The resulting variational picture is much richer and of a different nature from the one previously considered in the literature. We first develop two stochastic variational principles whose Hamilton-Jacobi-like equations are precisely the two coupled partial differential equations that are obtained from the Schrödinger equation (Madelung equations). The two problems are zero-sum, noncooperative, stochastic differential games that are familiar in the control theory literature. They are solved here by means of a new, absolutely elementary method based on Lagrange functionals. For both games the saddle-point equilibrium solution is given by Nelson's process, and the optimal controls for the two competing players are precisely Nelson's current velocity v and osmotic velocity u, respectively. The first variational principle includes as special cases both the Guerra-Morato variational principle [Phys. Rev. D 27, 1774 (1983)] and Schrödinger's original variational derivation of the time-independent equation. It also reduces to the classical least action principle when the intensity of the underlying noise tends to zero. It appears as a saddle-point action principle. In the second variational principle the action is simply the difference between the initial and final configurational entropy. It is therefore a saddle-point entropy production principle. From the variational principles it follows, in particular, that both v(x,t) and u(x,t) are gradients of appropriate principal functions. In the variational principles, the role of the background noise has the intuitive meaning of attempting to contrast the more classical mechanical features of the system by trying to maximize the action in the first principle and by trying to increase the entropy in the second. Combining the two variational

  19. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing its contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in space shuttle processing.

  20. Stochastic elimination of cancer cells.

    PubMed Central

    Michor, Franziska; Nowak, Martin A; Frank, Steven A; Iwasa, Yoh

    2003-01-01

    Tissues of multicellular organisms consist of stem cells and differentiated cells. Stem cells divide to produce new stem cells or differentiated cells. Differentiated cells divide to produce new differentiated cells. We show that such a tissue design can reduce the rate of fixation of mutations that increase the net proliferation rate of cells. It has, however, no consequence for the rate of fixation of neutral mutations. We calculate the optimum relative abundance of stem cells that minimizes the rate of generating cancer cells. There is a critical fraction of stem cell divisions that is required for a stochastic elimination ('wash out') of cancer cells. PMID:14561289

  1. Stochastic thermodynamics of information processing

    NASA Astrophysics Data System (ADS)

    Cardoso Barato, Andre

    2015-03-01

    We consider two recent advancements on theoretical aspects of thermodynamics of information processing. First we show that the theory of stochastic thermodynamics can be generalized to include information reservoirs. These reservoirs can be seen as a sequence of bits which has its Shannon entropy changed due to the interaction with the system. Second we discuss bipartite systems, which provide a convenient description of Maxwell's demon. Analyzing a special class of bipartite systems we show that they can be used to study cellular information processing, allowing for the definition of an entropic rate that quantifies how much a cell learns about a fluctuating external environment and that is bounded by the thermodynamic entropy production.

  2. Constrained Stochastic Extended Redundancy Analysis.

    PubMed

    DeSarbo, Wayne S; Hwang, Heungsun; Stadler Blank, Ashley; Kappe, Eelco

    2015-06-01

    We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA). PMID:24327066

  3. Semi-analytical expression of stochastic closed curve attractors in nonlinear dynamical systems under weak noise

    NASA Astrophysics Data System (ADS)

    Guo, Kongming; Jiang, Jun; Xu, Yalan

    2016-09-01

    In this paper, a simple but accurate semi-analytical method to approximate the probability density function of stochastic closed curve attractors is proposed. The expression for the distribution applies to systems with strong nonlinearities, provided the noise is weak. With the understanding that additive noise does not change the longitudinal distribution of the attractors, the high-dimensional probability density distribution is decomposed into two low-dimensional distributions: the longitudinal and the transverse probability density distributions. The longitudinal distribution can be calculated from the deterministic system, while the probability density in the transverse direction of the curve can be approximated by the stochastic sensitivity function method. The effectiveness of this approach is verified by comparing the expression for the distribution with the results of Monte Carlo numerical simulations in several planar systems.

  4. Application of stochastic Galerkin FEM to the complete electrode model of electrical impedance tomography

    SciTech Connect

    Leinonen, Matti Hakula, Harri Hyvönen, Nuutti

    2014-07-15

    The aim of electrical impedance tomography is to determine the internal conductivity distribution of some physical body from boundary measurements of current and voltage. The most accurate forward model for impedance tomography is the complete electrode model, which consists of the conductivity equation coupled with boundary conditions that take into account the electrode shapes and the contact resistances at the corresponding interfaces. If the reconstruction task of impedance tomography is recast as a Bayesian inference problem, it is essential to be able to solve the complete electrode model forward problem with the conductivity and the contact resistances treated as a random field and random variables, respectively. In this work, we apply a stochastic Galerkin finite element method to the ensuing elliptic stochastic boundary value problem and compare the results with Monte Carlo simulations.

  5. Developments in stochastic coupled cluster theory: The initiator approximation and application to the uniform electron gas.

    PubMed

    Spencer, James S; Thom, Alex J W

    2016-02-28

    We describe further details of the stochastic coupled cluster method and a diagnostic of such calculations, the shoulder height, akin to the plateau found in full configuration interaction quantum Monte Carlo. We describe an initiator modification to stochastic coupled cluster theory and show that initiator calculations can at times be extrapolated to the unbiased limit. We apply this method to the 3D 14-electron uniform electron gas and present complete basis set limit values of the coupled cluster singles and doubles (CCSD) and previously unattainable coupled cluster singles and doubles with perturbative triples (CCSDT) correlation energies for up to r_s = 2, showing a requirement to include triple excitations to accurately calculate energies at high densities. PMID:26931682

  6. Wavelet-expansion-based stochastic response of chain-like MDOF structures

    NASA Astrophysics Data System (ADS)

    Kong, Fan; Li, Jie

    2015-12-01

    This paper presents a wavelet-expansion-based approach for response determination of a chain-like multi-degree-of-freedom (MDOF) structure subject to full non-stationary stochastic excitations. Specifically, the generalized harmonic wavelet (GHW) is first utilized as the expansion basis to solve the dynamic equation of structures via the Galerkin treatment. In this way, a linear matrix relationship between the deterministic response and excitation can be derived. Further, considering the GHW-based representation of the stochastic processes, a time-varying power spectrum density (PSD) relationship on a certain wavelet scale or frequency band between the excitation and response is derived. Finally, pertinent numerical simulations, including deterministic dynamic analysis and Monte Carlo simulations of both the response PSD and the story-drift-based reliability, are utilized to validate the proposed approach.

  7. Approximating the optimal groundwater pumping policy in a multiaquifer stochastic conjunctive use setting

    NASA Astrophysics Data System (ADS)

    Provencher, Bill; Burt, Oscar

    1994-03-01

    This paper presents two methods for approximating the optimal groundwater pumping policy for several interrelated aquifers in a stochastic setting that also involves conjunctive use of surface water. The first method employs a policy iteration dynamic programming (DP) algorithm where the value function is estimated by Monte Carlo simulation combined with curve-fitting techniques. The second method uses a Taylor series approximation to the functional equation of DP which reduces the problem, for a given observed state, to solving a system of equations equal in number to the aquifers. The methods are compared using a four-state variable, stochastic dynamic programming model of Madera County, California. The two methods yield nearly identical estimates of the optimal pumping policy, as well as the steady state pumping depth, suggesting that either method can be used in similar applications.

  8. Stochastic simulation of inner radiation belt electron decay by atmospheric scattering

    NASA Astrophysics Data System (ADS)

    Selesnick, R. S.

    2016-02-01

    Decay of inner radiation belt electron intensity, resulting from elastic and inelastic collisions with neutral atoms, ions, and free electrons of the upper atmosphere, ionosphere, and plasmasphere, is described by stochastic Monte Carlo simulation. Modified collision cross sections allow detailed simulation of large-angle scattering and large-energy-loss collisions while preserving mean effective scattering and slowing-down rates resulting from all collisions. Scattering from bound electrons and δ-ray production are also included. Results show that traditional methods describing diffusion of the mirror point magnetic field, equivalent to diffusion in equatorial pitch angle, and energy loss by continuous slowing down are generally good approximations. Updated formulae for these approximations are provided. The drift-averaging approximation is also shown to provide a generally accurate description of trapped electron decay. The approximate methods overestimate decay rates by small factors, and the detailed stochastic simulation should be used when greater accuracy is required.

  9. Stochastic path integral approach to continuous quadrature measurement of a single fluorescing qubit

    NASA Astrophysics Data System (ADS)

    Jordan, Andrew N.; Chantasri, Areeya; Huard, Benjamin

    I will present a theory of continuous quantum measurement for a superconducting qubit undergoing fluorescent energy relaxation. The fluorescence of the qubit is detected via a phase-preserving heterodyne measurement, giving the cavity mode quadrature signals as two continuous qubit readout results. By using the stochastic path integral approach to the measurement physics, we obtain the most likely fluorescence paths between chosen boundary conditions on the state, and compute approximate correlation functions between all stochastic variables via diagrammatic perturbation theory. Of particular interest are most-likely paths describing increasing energy during the florescence. Comparison to Monte Carlo numerical simulation and experiment will be discussed. This work was supported by US Army Research Office Grants No. W911NF-09-0-01417 and No. W911NF-15-1-0496, by NSF Grant DMR-1506081, by John Templeton Foundation Grant ID 58558, and by the DPSTT Project Thailand.

  10. Reduced Complexity HMM Filtering With Stochastic Dominance Bounds: A Convex Optimization Approach

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Vikram; Rojas, Cristian R.

    2014-12-01

    This paper uses stochastic dominance principles to construct upper and lower sample path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low rank stochastic matrices so that the optimal filters using these matrices provably lower and upper bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices are low rank (say R), the computational cost of evaluating the filtering bounds is O(XR) instead of O(X^2). A Monte-Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.
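
    For context, the exact filter that these low-rank constructions bound is the standard HMM forward recursion, pi' ∝ b(y) * (P^T pi), whose per-step cost is O(X^2) in the number of states X. A minimal sketch with an assumed two-state chain:

      import numpy as np

      rng = np.random.default_rng(8)
      P = np.array([[0.95, 0.05],     # transition matrix of a toy two-state chain
                    [0.10, 0.90]])
      B = np.array([[0.8, 0.2],       # B[x, y]: observation likelihoods given state x
                    [0.3, 0.7]])

      # Simulate a state/observation sequence.
      x, ys = 0, []
      for _ in range(200):
          x = rng.choice(2, p=P[x])
          ys.append(rng.choice(2, p=B[x]))

      pi = np.array([0.5, 0.5])       # filter initialisation
      for y in ys:
          pi = B[:, y] * (P.T @ pi)   # predict with P, then correct with the likelihood of y
          pi /= pi.sum()              # normalise: O(X**2) work per step
      print(pi)                       # filtered distribution over the hidden state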

  11. Point group identification algorithm in dynamic response analysis of nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Li, Tao; Chen, Jian-bing; Li, Jie

    2016-03-01

    The point group identification (PGI) algorithm is proposed to determine the representative point sets in response analysis of nonlinear stochastic dynamic systems. The PGI algorithm is employed to identify point groups and their feature points in an initial point set by combining subspace clustering analysis and the graph theory. Further, the representative point set of the random-variate space is determined according to the minimum generalized F-discrepancy. The dynamic responses obtained by incorporating the algorithm PGI into the probability density evolution method (PDEM) are compared with those by the Monte Carlo simulation method. The investigations indicate that the proposed method can reduce the number of the representative points, lower the generalized F-discrepancy of the representative point set, and also ensure the accuracy of stochastic structural dynamic analysis.

  12. Finite volume and asymptotic methods for stochastic neuron models with correlated inputs.

    PubMed

    Rosenbaum, Robert; Marpeau, Fabien; Ma, Jianfu; Barua, Aditya; Josić, Krešimir

    2012-07-01

    We consider a pair of stochastic integrate and fire neurons receiving correlated stochastic inputs. The evolution of this system can be described by the corresponding Fokker-Planck equation with non-trivial boundary conditions resulting from the refractory period and firing threshold. We propose a finite volume method that is orders of magnitude faster than the Monte Carlo methods traditionally used to model such systems. The resulting numerical approximations are proved to be accurate, nonnegative, and integrate to 1. We also approximate the transient evolution of the system using an Ornstein-Uhlenbeck process, and use the result to examine the properties of the joint output of the cell pair. The results suggest that the joint output of a cell pair is most sensitive to changes in input variance, and less sensitive to changes in input mean and correlation. PMID:21717104
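
    The Monte Carlo baseline that the finite volume scheme outperforms can be sketched as follows (the parameter values and the shared-noise correlation model are assumptions for the example, and the refractory period is omitted): two leaky integrate-and-fire neurons driven by correlated Gaussian inputs, integrated by Euler-Maruyama with threshold and reset.

      import numpy as np

      rng = np.random.default_rng(9)
      mu, sigma, c = 1.1, 0.5, 0.3            # input mean, noise strength, input correlation (assumed)
      tau, v_th, v_reset = 1.0, 1.0, 0.0
      dt, n_steps = 1e-3, 100_000
      v = np.zeros(2)
      counts = np.zeros(2, dtype=int)
      for _ in range(n_steps):
          shared = rng.standard_normal()
          private = rng.standard_normal(2)
          xi = np.sqrt(c) * shared + np.sqrt(1.0 - c) * private   # correlated Gaussian inputs
          v += (mu - v) * dt / tau + sigma * np.sqrt(dt / tau) * xi
          fired = v >= v_th
          counts += fired                     # record threshold crossings
          v[fired] = v_reset                  # reset after a spike
      print(counts)                           # spike counts of the correlated pair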

  13. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  14. Fossil fuels -- future fuels

    SciTech Connect

    1998-03-01

    Fossil fuels -- coal, oil, and natural gas -- built America's historic economic strength. Today, coal supplies more than 55% of the electricity, oil more than 97% of the transportation needs, and natural gas 24% of the primary energy used in the US. Even taking into account increased use of renewable fuels and vastly improved powerplant efficiencies, 90% of national energy needs will still be met by fossil fuels in 2020. If advanced technologies that boost efficiency and environmental performance can be successfully developed and deployed, the US can continue to depend upon its rich resources of fossil fuels.

  15. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    SciTech Connect

    Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith

    2011-07-01

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.

  16. TRIGA spent-fuel storage criticality analysis

    SciTech Connect

    Ravnik, M.; Glumac, B.

    1996-06-01

    A criticality safety analysis of a pool-type storage for spent TRIGA Mark II reactor fuel is presented. Two independent computer codes are applied: the MCNP Monte Carlo code and the WIMS lattice cell code. Two types of fuel elements are considered: standard fuel elements with 12 wt% uranium concentration and FLIP fuel elements. A parametric study of spent-fuel storage lattice pitch, fuel element burnup, and water density is presented. Normal conditions and postulated accident conditions are analyzed. A strong dependence of the multiplication factor on the distance between the fuel elements and on the effective water density is observed. A multiplication factor <1 may be expected for an infinite array of fuel rods at center-to-center distances >6.5 cm, regardless of the fuel element type and burnup. At shorter distances, the subcriticality can be ensured only by adding absorbers to the array of fuel rods even if the fuel rods were burned to approximately 20% burnup. The results of both codes agree well for normal conditions. The results show that WIMS may be used as a complement to the Monte Carlo code in some parts of the criticality analysis.

  17. RHIC stochastic cooling motion control

    SciTech Connect

    Gassner, D.; DeSanto, L.; Olsen, R.H.; Fu, W.; Brennan, J.M.; Liaw, CJ; Bellavia, S.; Brodowski, J.

    2011-03-28

    Relativistic Heavy Ion Collider (RHIC) beams are subject to Intra-Beam Scattering (IBS) that causes an emittance growth in all three phase-space planes. The only way to increase integrated luminosity is to counteract IBS with cooling during RHIC stores. A stochastic cooling system for this purpose has been developed; it includes moveable pick-ups and kickers in the collider that require precise motion control mechanics, drives and controllers. Since these moving parts can limit the beam path aperture, accuracy and reliability are important. Servo, stepper, and DC motors are used to provide actuation solutions for position control. The choice of motion stage, drive motor type, and controls are based on needs defined by the variety of mechanical specifications, the unique performance requirements, and the special needs required for remote operations in an accelerator environment. In this report we describe the remote motion control related beam line hardware, position transducers, rack electronics, and software developed for the RHIC stochastic cooling pick-ups and kickers.

  18. Stochastic models of viral infection

    NASA Astrophysics Data System (ADS)

    Chou, Tom

    2009-03-01

    We develop biophysical models of viral infections from a stochastic process perspective. The entry of enveloped viruses is treated as a stochastic multiple receptor and coreceptor engagement process that can lead to membrane fusion or endocytosis. The probabilities of entry via fusion and endocytosis are computed as functions of the receptor/coreceptor engagement rates. Since membrane fusion and endocytosis entry pathways can lead to very different infection outcomes, we delineate the parameter regimes conducive to each entry pathway. After entry, viral material is biochemically processed and degraded as it is transported towards the nucleus. Productive infections occur only when the material reaches the nucleus in the proper biochemical state. Thus, entry into the nucleus in an infectious state requires the proper timing of the cytoplasmic transport process. We compute the productive infection probability and show its nonmonotonic dependence on both transport speeds and biochemical transformation rates. Our results carry subtle consequences on the dosage and efficacy of antivirals such as reverse transcription inhibitors.

  19. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

    The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations for an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
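
    A minimal simulated annealing sketch on a rough one-dimensional objective (the test function, move size, and cooling schedule are assumptions for the illustration, not the aircraft design problems above): random moves are always accepted downhill and occasionally accepted uphill, with the uphill tolerance shrinking as the temperature cools.

      import numpy as np

      rng = np.random.default_rng(10)

      def objective(x):
          # Rough landscape: global minimum at x = 0 among many local minima.
          return x**2 + 2.0 * np.sin(10.0 * x) ** 2

      x, fx = 4.0, objective(4.0)
      temp = 1.0
      for _ in range(20_000):
          x_new = x + 0.3 * rng.standard_normal()      # random move proposal
          f_new = objective(x_new)
          # Metropolis acceptance: always accept downhill, sometimes uphill.
          if f_new < fx or rng.random() < np.exp(-(f_new - fx) / temp):
              x, fx = x_new, f_new
          temp *= 0.9995                               # geometric cooling schedule
      print(x, fx)                                     # ends near the global minimum x = 0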

  20. Numerical tests of stochastic tomography

    NASA Astrophysics Data System (ADS)

    Ru-Shan, Wu; Xiao-Bi, Xie

    1991-05-01

    The method of stochastic tomography proposed by Wu is tested numerically. This method reconstructs the heterospectra (power spectra of heterogeneities) at all depths of a non-uniform random medium using measured joint transverse-angular coherence functions (JTACF) of transmission fluctuations on an array. The inversion method is based on a constrained least-squares inversion implemented via the singular value decomposition. The inversion is also applicable to reconstructions using transverse coherence functions (TCF) or angular coherence functions (ACF); these are merely special cases of JTACF. Through the analysis of sampling functions and singular values, and through numerical examples of reconstruction using theoretically generated coherence functions, we compare the resolution and robustness of reconstructions using TCF, ACF and JTACF. The JTACF can "focus" the coherence analysis at different depths and therefore has a better depth resolution than TCF and ACF. In addition, the JTACF contains much more information than the sum of TCF and ACF, and has much better noise resistance properties than TCF and ACF. Inversion of JTACF can give a reliable reconstruction of heterospectra at different depths even for data with 20% noise contamination. This demonstrates the feasibility of stochastic tomography using JTACF.

  1. Stochastic models for cell division

    NASA Astrophysics Data System (ADS)

    Stukalin, Evgeny; Sun, Sean

    2013-03-01

    The probability of cell division per unit time depends strongly on the age of the cells, i.e., the time elapsed since their birth. The theory of cell populations in the age-time representation is systematically applied for modeling cell division for different spreads in generation times. We use stochastic simulations to address the same issue at the level of individual cells. Unlike the deterministic theory, our approach makes it possible to analyze the size fluctuations of cell colonies under different growth conditions (in the absence and in the presence of cell death, for initially synchronized and asynchronous cell populations, and for conditions of restricted growth). We find a simple quantitative relation between the asymptotic values of the relative size fluctuations around the mean for initially synchronized cell populations under growth and the coefficients of variation of the generation times. The effect of the initial age distribution on asynchronous growth of cell cultures is also studied by simulations. The influence of constant cell death on fluctuations of the sizes of cell populations is found to be essential even for small cell death rates, i.e., for realistic growth conditions. The stochastic model is generalized to the biologically relevant case that involves both cell reproduction and cell differentiation.
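
    The individual-cell simulation idea can be sketched as follows (gamma-distributed generation times and the parameter values are assumptions for the example, with no cell death): grow replicate colonies from single synchronized cells and measure the relative size fluctuations.

      import heapq
      import numpy as np

      rng = np.random.default_rng(11)

      def colony_size(t_end, shape=20.0, mean_gen=1.0):
          """Grow one colony from a single cell; generation times ~ Gamma(shape, mean_gen/shape)."""
          events = [rng.gamma(shape, mean_gen / shape)]   # next division time of the founder
          heapq.heapify(events)
          n = 1
          while events and events[0] < t_end:
              t = heapq.heappop(events)
              n += 1                                      # one cell becomes two
              for _ in range(2):                          # schedule both daughters' divisions
                  heapq.heappush(events, t + rng.gamma(shape, mean_gen / shape))
          return n

      sizes = np.array([colony_size(8.0) for _ in range(500)])
      print(sizes.mean(), sizes.std() / sizes.mean())     # mean colony size and relative fluctuation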

  2. Stochastic Modeling of Laminar-Turbulent Transition

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Choudhari, Meelan

    2002-01-01

    Stochastic versions of stability equations are developed in order to develop integrated models of transition and turbulence and to understand the effects of uncertain initial conditions on disturbance growth. Stochastic forms of the resonant triad equations, a high Reynolds number asymptotic theory, and the parabolized stability equations are developed.

  3. Bunched Beam Stochastic Cooling and Coherent Lines

    SciTech Connect

    Blaskiewicz, M.; Brennan, J. M.

    2006-03-20

    Strong coherent signals complicate bunched beam stochastic cooling, and development of the longitudinal stochastic cooling system for RHIC required dealing with coherence in heavy ion beams. Studies with proton beams revealed additional forms of coherence. This paper presents data and analysis for both sorts of beams.

  4. Variational principles for stochastic fluid dynamics

    PubMed Central

    Holm, Darryl D.

    2015-01-01

    This paper derives stochastic partial differential equations (SPDEs) for fluid dynamics from a stochastic variational principle (SVP). The paper proceeds by taking variations in the SVP to derive stochastic Stratonovich fluid equations; writing their Itô representation; and then investigating the properties of these stochastic fluid models in comparison with each other, and with the corresponding deterministic fluid models. The circulation properties of the stochastic Stratonovich fluid equations are found to closely mimic those of the deterministic ideal fluid models. As with deterministic ideal flows, motion along the stochastic Stratonovich paths also preserves the helicity of the vortex field lines in incompressible stochastic flows. However, these Stratonovich properties are not apparent in the equivalent Itô representation, because they are disguised by the quadratic covariation drift term arising in the Stratonovich to Itô transformation. This term is a geometric generalization of the quadratic covariation drift term already found for scalar densities in Stratonovich's famous 1966 paper. The paper also derives motion equations for two examples of stochastic geophysical fluid dynamics; namely, the Euler–Boussinesq and quasi-geostrophic approximations.

  5. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  6. Attainability analysis in stochastic controlled systems

    SciTech Connect

    Ryashko, Lev

    2015-03-10

    A control problem for stochastically forced nonlinear continuous-time systems is considered. We propose a method for constructing a regulator that provides a preassigned probability distribution of the random states in stochastic equilibrium. Geometric criteria for controllability are obtained, and a constructive technique for specifying attainability sets is suggested.

  7. Renormalization of stochastic lattice models: epitaxial surfaces.

    PubMed

    Haselwandter, Christoph A; Vvedensky, Dimitri D

    2008-06-01

    We present the application of a method [C. A. Haselwandter and D. D. Vvedensky, Phys. Rev. E 76, 041115 (2007)] for deriving stochastic partial differential equations from atomistic processes to the morphological evolution of epitaxial surfaces driven by the deposition of new material. Although formally identical to the one-dimensional (1D) systems considered previously, our methodology presents substantial additional technical issues when applied to two-dimensional (2D) surfaces. Once these are addressed, subsequent coarse-graining is accomplished as before by calculating renormalization-group (RG) trajectories from initial conditions determined by the regularized atomistic models. Our applications are to the Edwards-Wilkinson (EW) model [S. F. Edwards and D. R. Wilkinson, Proc. R. Soc. London, Ser. A 381, 17 (1982)], the Wolf-Villain (WV) model [D. E. Wolf and J. Villain, Europhys. Lett. 13, 389 (1990)], and a model with concurrent random deposition and surface diffusion. With our rules for the EW model no appreciable crossover is obtained for either 1D or 2D substrates. For the 1D WV model, discussed previously, our analysis reproduces the crossover sequence known from kinetic Monte Carlo (KMC) simulations, but for the 2D WV model, we find a transition from smooth to unstable growth under repeated coarse-graining. Concurrent surface diffusion does not change this behavior, but can lead to extended transient regimes with kinetic roughening. This provides an explanation of recent experiments on Ge(001) with the intriguing conclusion that the same relaxation mechanism responsible for ordered structures during the early stages of growth also produces an instability at longer times that leads to epitaxial breakdown. The RG trajectories calculated for concurrent random deposition and surface diffusion reproduce the crossover sequences observed with KMC simulations for all values of the model parameters, and asymptotically always approach the fixed point corresponding

  8. Renormalization of stochastic lattice models: Epitaxial surfaces

    NASA Astrophysics Data System (ADS)

    Haselwandter, Christoph A.; Vvedensky, Dimitri D.

    2008-06-01

    We present the application of a method [C. A. Haselwandter and D. D. Vvedensky, Phys. Rev. E 76, 041115 (2007)] for deriving stochastic partial differential equations from atomistic processes to the morphological evolution of epitaxial surfaces driven by the deposition of new material. Although formally identical to the one-dimensional (1D) systems considered previously, our methodology presents substantial additional technical issues when applied to two-dimensional (2D) surfaces. Once these are addressed, subsequent coarse-graining is accomplished as before by calculating renormalization-group (RG) trajectories from initial conditions determined by the regularized atomistic models. Our applications are to the Edwards-Wilkinson (EW) model [S. F. Edwards and D. R. Wilkinson, Proc. R. Soc. London, Ser. A 381, 17 (1982)], the Wolf-Villain (WV) model [D. E. Wolf and J. Villain, Europhys. Lett. 13, 389 (1990)], and a model with concurrent random deposition and surface diffusion. With our rules for the EW model no appreciable crossover is obtained for either 1D or 2D substrates. For the 1D WV model, discussed previously, our analysis reproduces the crossover sequence known from kinetic Monte Carlo (KMC) simulations, but for the 2D WV model, we find a transition from smooth to unstable growth under repeated coarse-graining. Concurrent surface diffusion does not change this behavior, but can lead to extended transient regimes with kinetic roughening. This provides an explanation of recent experiments on Ge(001) with the intriguing conclusion that the same relaxation mechanism responsible for ordered structures during the early stages of growth also produces an instability at longer times that leads to epitaxial breakdown. The RG trajectories calculated for concurrent random deposition and surface diffusion reproduce the crossover sequences observed with KMC simulations for all values of the model parameters, and asymptotically always approach the fixed point corresponding

  9. Opportunity fuels

    SciTech Connect

    Lutwen, R.C.

    1994-12-31

    Opportunity fuels - fuels that can be converted to other forms of energy at lower cost than standard fossil fuels - are discussed in outline form. Topics include the types and sources of fuels, combustibility, methods of combustion, refinery wastes, petroleum coke, garbage fuels, wood wastes, tires, and economics.

  10. Stochastic simulation of charged particle transport on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Earl, James A.

    1988-01-01

    Computations of cosmic-ray transport based upon finite-difference methods are afflicted by instabilities, inaccuracies, and artifacts. To avoid these problems, researchers developed a Monte Carlo formulation which is closely related not only to the finite-difference formulation, but also to the underlying physics of transport phenomena. Implementations of this approach are currently running on the Massively Parallel Processor at Goddard Space Flight Center, whose enormous computing power overcomes the poor statistical accuracy that usually limits the use of stochastic methods. These simulations have progressed to a stage where they provide a useful and realistic picture of solar energetic particle propagation in interplanetary space.

  11. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    SciTech Connect

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil; Abhyankar, S.; Ghosh, Donetta L.; Smith, Barry; Huang, Zhenyu; Tartakovsky, Alexandre M.

    2015-09-22

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically, and good agreement with Monte Carlo simulations confirms the accuracy of the PDF method.

  12. Stochastic bursts in the kinetics of gene expression with regulation by long non-coding RNAs

    NASA Astrophysics Data System (ADS)

    Zhdanov, V. P.

    2010-09-01

    One of the main recent breakthroughs in cellular biology is the discovery of numerous non-coding RNAs (ncRNAs). We outline the abilities of long ncRNAs and argue that the corresponding kinetics may frequently exhibit stochastic bursts. As an example, we scrutinize one of the generic cases, in which gene transcription is regulated by competitive attachment of an ncRNA and a protein to a regulatory site. Our Monte Carlo simulations show that in this case one can observe long transcriptional bursts consisting of short bursts.
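
    A minimal Gillespie-type sketch of such burst kinetics, with a two-state promoter standing in for the competitive ncRNA/protein binding described above (all rate constants are illustrative assumptions, not values from the paper):

        import numpy as np

        rng = np.random.default_rng(1)
        k_on, k_off = 0.1, 1.0        # promoter switching rates (site freed / blocked)
        k_tx, k_deg = 20.0, 1.0       # transcription and mRNA degradation rates

        t, t_end, on, m = 0.0, 200.0, 0, 0
        trace = []
        while t < t_end:
            rates = [k_on * (1 - on), k_off * on, k_tx * on, k_deg * m]
            total = sum(rates)
            t += rng.exponential(1.0 / total)
            r = rng.uniform(0.0, total)
            if r < rates[0]:
                on = 1                # regulatory site freed: gene switches ON
            elif r < rates[0] + rates[1]:
                on = 0                # site re-blocked: gene switches OFF
            elif r < rates[0] + rates[1] + rates[2]:
                m += 1                # transcription event
            else:
                m -= 1                # mRNA degradation
            trace.append((t, m))

        print("largest mRNA burst:", max(m for _, m in trace))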

  13. Inference for discretely observed stochastic kinetic networks with applications to epidemic modeling

    PubMed Central

    Choi, Boseung; Rempala, Grzegorz A.

    2012-01-01

    We present a new method for Bayesian Markov Chain Monte Carlo–based inference in certain types of stochastic models, suitable for modeling noisy epidemic data. We apply the so-called uniformization representation of a Markov process, in order to efficiently generate appropriate conditional distributions in the Gibbs sampler algorithm. The approach is shown to work well in various data-poor settings, that is, when only partial information about the epidemic process is available, as illustrated on the synthetic data from SIR-type epidemics and the Center for Disease Control and Prevention data from the onset of the H1N1 pandemic in the United States. PMID:21835814

  14. Bit corruption correlation and autocorrelation in a stochastic binary nano-bit system

    NASA Astrophysics Data System (ADS)

    Sa-nguansin, Suchittra

    2014-10-01

    The corruption process of a binary nano-bit model resulting from an interaction with N stochastically independent Brownian agents (BAs) is studied with the help of Monte Carlo simulations and analytic continuum theory, by measuring the spatial two-point correlation and the autocorrelation of bit corruption at the origin. By taking into account a more realistic correlation between bits, this work contributes to the understanding of soft errors, i.e., the corruption of data stored in nano-scale devices.

  15. Stochastic ion acceleration by beating electrostatic waves.

    PubMed

    Jorns, B; Choueiri, E Y

    2013-01-01

    A study is presented of the stochasticity in the orbit of a single, magnetized ion produced by the particle's interaction with two beating electrostatic waves whose frequencies differ by the ion cyclotron frequency. A second-order Lie transform perturbation theory is employed in conjunction with a numerical analysis of the maximum Lyapunov exponent to determine the velocity conditions under which stochasticity occurs in this dynamical system. Upper and lower bounds in ion velocity are found for stochastic orbits with the lower bound approximately equal to the phase velocity of the slower wave. A threshold condition for the onset of stochasticity that is linear with respect to the wave amplitudes is also derived. It is shown that the onset of stochasticity occurs for beating electrostatic waves at lower total wave energy densities than for the case of a single electrostatic wave or two nonbeating electrostatic waves. PMID:23410446

  16. GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA

    NASA Astrophysics Data System (ADS)

    Spiechowicz, J.; Kostur, M.; Machura, L.

    2015-06-01

    This work presents an updated and extended guide to methods for properly accelerating the Monte Carlo integration of stochastic differential equations on the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well-known phenomenon of noise-induced transport of Brownian motors in periodic structures. As sources of fluctuations in the considered systems we selected the three most commonly occurring noises: Gaussian white noise, white Poissonian noise, and the dichotomous process, also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can be of the astonishing order of about 3000 when compared to a typical CPU. This significantly expands the range of problems solvable by stochastic simulations, even allowing interactive research in some cases.
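
    The core of such a simulation is an Euler-Maruyama update applied independently to many trajectories, which is what maps naturally onto one GPU thread per trajectory. Below is a minimal CPU sketch with NumPy for an overdamped Brownian motor with Gaussian white noise (parameters are illustrative assumptions; on an NVIDIA card the same kernel would be written in CUDA or run via a drop-in GPU array library):

        import numpy as np

        rng = np.random.default_rng(2)
        n_traj, n_steps, dt = 50_000, 5_000, 1e-3
        F, D = 0.5, 0.2                    # constant tilt and noise intensity

        x = np.zeros(n_traj)               # one entry per trajectory ("per GPU thread")
        for _ in range(n_steps):
            drift = -np.sin(x) + F         # force from V(x) = -cos(x), plus tilt
            x += drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_traj)

        print("mean drift velocity:", x.mean() / (n_steps * dt))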

  17. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    NASA Astrophysics Data System (ADS)

    Gelß, Patrick; Matera, Sebastian; Schütte, Christof

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this equation is too high-dimensional to be solved with standard numerical techniques, and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train format for this purpose. The performance of the approach is demonstrated on a first-principles-based reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.

  18. A Lagrangian Monte Carlo model of turbulent dispersion in the convective planetary boundary layer

    NASA Astrophysics Data System (ADS)

    Liljegren, J. C.; Dunn, W. E.

    A Lagrangian Monte Carlo model for predicting the dispersion of a passive tracer in a convective boundary layer is presented. The stochastic model provides a more realistic treatment of convective turbulence than previous modeling approaches. Accurate input for the dispersion prediction is provided by extensive water-tank measurements of convective turbulence. The dispersion of a large number of passive tracer particles is computationally simulated by using the Langevin equation to model the Lagrangian velocities. The behavior of the autocorrelation of the modeled Lagrangian velocities closely matches the nonexponential form computed from balloon-borne measurements in the atmosphere. A kernel estimation technique is employed to efficiently recover mean concentrations from the trajectory simulations and reduce computational requirements. The predictions of the stochastic model are in close agreement with the dispersion trends and magnitudes observed in the data.
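
    A minimal sketch of such a Lagrangian stochastic model (homogeneous turbulence with illustrative, assumed parameters rather than the paper's water-tank statistics): vertical velocities follow a Langevin equation, and the mean concentration profile is recovered from particle positions with a Gaussian kernel estimator:

        import numpy as np

        rng = np.random.default_rng(3)
        n, dt = 20_000, 0.5                   # particles and time step (s)
        T_L, sigma_w = 100.0, 0.6             # Lagrangian time scale (s), velocity std (m/s)

        w = sigma_w * rng.standard_normal(n)  # initial Lagrangian vertical velocities
        z = np.full(n, 100.0)                 # release height (m)
        for _ in range(1200):                 # 600 s of travel
            w += -w / T_L * dt + np.sqrt(2.0 * sigma_w**2 / T_L * dt) * rng.standard_normal(n)
            z += w * dt
            z = np.abs(z)                     # perfect reflection at the ground

        grid = np.linspace(0.0, 500.0, 51)    # receptor heights (m)
        h = 20.0                              # Gaussian kernel bandwidth (m)
        conc = np.exp(-0.5 * ((grid[:, None] - z[None, :]) / h) ** 2).sum(axis=1)
        conc /= n * h * np.sqrt(2.0 * np.pi)  # normalized mean concentration profile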

  19. The D0 Monte Carlo

    SciTech Connect

    Womersley, J.

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  20. The probabilistic solution of stochastic oscillators with even nonlinearity under poisson excitation

    NASA Astrophysics Data System (ADS)

    Guo, Siu-Siu; Er, Guo-Kang

    2012-06-01

    The probabilistic solutions of nonlinear stochastic oscillators with even nonlinearity driven by Poisson white noise are investigated in this paper. The stationary probability density function (PDF) of the oscillator responses governed by the reduced Fokker-Planck-Kolmogorov equation is obtained with the exponential-polynomial closure (EPC) method. Different types of nonlinear oscillators are considered. Monte Carlo simulation is conducted to examine the effectiveness and accuracy of the EPC method in this case. It is found that the PDF solutions obtained with EPC agree well with those obtained with Monte Carlo simulation, especially in the tail regions of the PDFs of oscillator responses. Numerical analysis shows that the mean of the displacement is nonzero and the PDF of the displacement is nonsymmetric about its mean when there is even nonlinearity in displacement in the oscillator. Numerical analysis further shows that the mean of the velocity always equals zero and the PDF of the velocity is symmetrically distributed about its mean.
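
    For reference, the Monte Carlo side of such a comparison can be sketched as Euler integration of an oscillator with an even nonlinearity driven by compound-Poisson increments (all parameters are illustrative assumptions; the EPC solution itself is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(4)
        dt, n_steps, n_paths = 1e-3, 20_000, 2_000
        lam, jump_std = 50.0, 0.1          # impulse arrival rate and jump magnitude scale

        x = np.zeros(n_paths)              # displacement
        v = np.zeros(n_paths)              # velocity
        for _ in range(n_steps):
            n_jumps = rng.poisson(lam * dt, n_paths)
            dP = jump_std * np.sqrt(n_jumps) * rng.standard_normal(n_paths)  # compound Poisson kick
            a = -0.2 * v - x - 0.5 * x**2  # damping, linear stiffness, even nonlinearity
            x += v * dt
            v += a * dt + dP

        pdf, edges = np.histogram(x, bins=60, density=True)  # stationary PDF estimate
        print("mean displacement:", x.mean())                # nonzero due to the even term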

  1. Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates

    SciTech Connect

    Perfetti, Christopher M.; Rearden, Bradley T.

    2015-01-01

    This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.

  2. Neutronic calculations for CANDU thorium systems using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Saldideh, M.; Shayesteh, M.; Eshghi, M.

    2014-08-01

    In this paper, we investigate the prospects of exploiting the rich world thorium reserves using Canada Deuterium Uranium (CANDU) reactors. The analysis is performed using the Monte Carlo MCNP code in order to determine how long the reactor remains in a critical condition. Four different fuel compositions have been selected for analysis. We obtain the infinite multiplication factor, k∞, under full-power operation of the reactor over 8 years. The neutron flux distribution in the full reactor core is also investigated.

  3. Current status of the PSG Monte Carlo neutron transport code

    SciTech Connect

    Leppaenen, J.

    2006-07-01

    PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)

  4. Extending canonical Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Curilef, S.

    2010-02-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods based on the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: the Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of the Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions; they successfully reduce the exponential divergence of the decorrelation time τ with increasing system size N to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.
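
    For context, a minimal sketch of standard Metropolis importance sampling for the 2D ten-state Potts model mentioned above (canonical sampling only; the paper's extension to the anomalous C < 0 regime is not reproduced, and the inverse temperature is an illustrative choice near criticality):

        import numpy as np

        rng = np.random.default_rng(5)
        L, q, beta = 32, 10, 1.42          # lattice size, spin states, inverse temperature
        spins = rng.integers(q, size=(L, L))

        def site_energy(s, i, j, value):
            """Potts energy of one site: -1 for each equal nearest neighbour."""
            nbrs = (s[(i + 1) % L, j], s[(i - 1) % L, j], s[i, (j + 1) % L], s[i, (j - 1) % L])
            return -sum(int(value == nb) for nb in nbrs)

        for sweep in range(100):
            for _ in range(L * L):
                i, j = rng.integers(L), rng.integers(L)
                new = rng.integers(q)
                dE = site_energy(spins, i, j, new) - site_energy(spins, i, j, spins[i, j])
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i, j] = new      # Metropolis acceptance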

  5. Compressible generalized hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Fang, Youhan; Sanz-Serna, J. M.; Skeel, Robert D.

    2014-05-01

    One of the most demanding calculations is to generate random samples from a specified probability distribution (usually with an unknown normalizing prefactor) in a high-dimensional configuration space. One often has to resort to using a Markov chain Monte Carlo method, which converges only in the limit to the prescribed distribution. Such methods typically inch through configuration space step by step, with acceptance of a step based on a Metropolis(-Hastings) criterion. An acceptance rate of 100% is possible in principle by embedding configuration space in a higher dimensional phase space and using ordinary differential equations. In practice, numerical integrators must be used, lowering the acceptance rate. This is the essence of hybrid Monte Carlo methods. Presented is a general framework for constructing such methods under relaxed conditions: the only geometric property needed is (weakened) reversibility; volume preservation is not needed. The possibilities are illustrated by deriving a couple of explicit hybrid Monte Carlo methods, one based on barrier-lowering variable-metric dynamics and another based on isokinetic dynamics.
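
    A minimal sketch of a conventional hybrid Monte Carlo step with a volume-preserving leapfrog integrator, the baseline that the paper's relaxed framework generalizes (the Gaussian target and step sizes are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(6)

        def U(x):     return 0.5 * x @ x      # -log target density: standard Gaussian
        def gradU(x): return x

        def hmc_step(x, eps=0.1, n_leap=20):
            p = rng.standard_normal(x.shape)          # fresh momenta
            x_new = x.copy()
            p_new = p - 0.5 * eps * gradU(x_new)      # initial half kick
            for _ in range(n_leap):
                x_new = x_new + eps * p_new           # full drift
                p_new = p_new - eps * gradU(x_new)    # full kick
            p_new = p_new + 0.5 * eps * gradU(x_new)  # trim the last kick back to a half
            dH = U(x_new) - U(x) + 0.5 * (p_new @ p_new - p @ p)
            return x_new if rng.random() < np.exp(-dH) else x   # Metropolis test

        x, samples = np.zeros(3), []
        for _ in range(2_000):
            x = hmc_step(x)
            samples.append(x)
        print("per-dimension variance:", np.var(samples, axis=0))  # close to 1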

  6. Extension of the fully coupled Monte Carlo/S_N response matrix method to problems including upscatter and fission

    SciTech Connect

    Baker, R.S.; Filippone, W.F.; Alcouffe, R.E.

    1991-01-01

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new method of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S_N calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems.

  7. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-"coupled"- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB
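
    For context, the coupling baseline the authors compare against can be sketched on a toy birth-death model: a centered finite difference of two Gillespie runs that share a random stream (Common Random Numbers), which already reduces the estimator variance relative to independent samples (the model and all rates are illustrative assumptions, not the paper's KMC examples):

        import numpy as np

        def ssa_population(k_birth, k_death=1.0, t_end=5.0, seed=0):
            """Gillespie run of X -> X+1 (rate k_birth) and X -> X-1 (rate k_death*X)."""
            rng = np.random.default_rng(seed)   # fixed seed: Common Random Numbers
            t, x = 0.0, 10
            while True:
                birth, death = k_birth, k_death * x
                total = birth + death
                t += rng.exponential(1.0 / total)
                if t > t_end:
                    return x
                x += 1 if rng.uniform(0.0, total) < birth else -1

        k, h = 5.0, 0.5
        diffs = [(ssa_population(k + h, seed=s) - ssa_population(k - h, seed=s)) / (2 * h)
                 for s in range(2_000)]
        print("d E[X(t_end)] / d k_birth ≈", np.mean(diffs))  # exact: 1 - exp(-t_end) ≈ 0.99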

  8. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-01

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-"coupled"- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB

  9. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    SciTech Connect

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples the proposed algorithm reduces the variance of the estimator by developing a strongly correlated-“coupled”- stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary

  10. Multilevel Monte Carlo for Two Phase Flow and Transport in a Subsurface Reservoir with Random Permeability

    NASA Astrophysics Data System (ADS)

    Müller, Florian; Jenny, Patrick; Meyer, Daniel

    2014-05-01

    To a large extent, the flow and transport behaviour within a subsurface reservoir is governed by its permeability. Typically, permeability measurements of a subsurface reservoir are affordable at only a few spatial locations. Due to this lack of information, permeability fields are preferably described by stochastic models rather than deterministically. A stochastic method is needed to assess how the input uncertainty in permeability propagates through the system of partial differential equations describing flow and transport to the output quantity of interest. Monte Carlo (MC) is an established method for quantifying uncertainty arising in subsurface flow and transport problems. Although robust and easy to implement, MC suffers from slow statistical convergence. To reduce the computational cost of MC, the multilevel Monte Carlo (MLMC) method was introduced. Instead of sampling a random output quantity of interest on the finest affordable grid, as in the case of MC, MLMC operates on a hierarchy of grids. If parts of the sampling process are successfully delegated to coarser grids where sampling is inexpensive, MLMC can dramatically outperform MC. MLMC has proven to accelerate MC for several applications including integration problems, stochastic ordinary differential equations in finance, as well as stochastic elliptic and hyperbolic partial differential equations. In this study, MLMC is combined with a reservoir simulator to assess uncertain two-phase (water/oil) flow and transport within a random permeability field. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. It is found that MLMC yields significant speed-ups with respect to MC while providing results of essentially equal accuracy. This finding holds true not only for one specific Gaussian logarithmic permeability model but for a range of correlation lengths and variances.
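
    A minimal sketch of the MLMC idea on a toy problem (geometric Brownian motion with Euler-Maruyama rather than the reservoir equations; all parameters are assumptions): level l uses 2^l time steps, and the coarse and fine paths on each level share the same Brownian increments:

        import numpy as np

        rng = np.random.default_rng(7)
        mu, sigma, T, x0 = 0.05, 0.2, 1.0, 1.0

        def euler(dW, dt):
            """Euler-Maruyama path of dX = mu*X dt + sigma*X dW; returns X_T."""
            x = x0
            for w in dW:
                x += mu * x * dt + sigma * x * w
            return x

        def level_mean(l, n_samples):
            """Mean of P_l - P_{l-1} over coupled fine/coarse paths."""
            nf = 2 ** l
            dt_f = T / nf
            diffs = []
            for _ in range(n_samples):
                dW_f = np.sqrt(dt_f) * rng.standard_normal(nf)
                fine = euler(dW_f, dt_f)
                if l == 0:
                    diffs.append(fine)
                else:                 # coarse path reuses the fine increments, pairwise summed
                    dW_c = dW_f.reshape(-1, 2).sum(axis=1)
                    diffs.append(fine - euler(dW_c, 2 * dt_f))
            return np.mean(diffs)

        estimate = sum(level_mean(l, 100 + 4_000 // 2 ** l) for l in range(6))
        print("MLMC estimate of E[X_T]:", estimate)   # exact: exp(mu*T) ≈ 1.0513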

  11. Stochastic inflation and nonlinear gravity

    NASA Astrophysics Data System (ADS)

    Salopek, D. S.; Bond, J. R.

    1991-02-01

    We show how nonlinear effects of the metric and scalar fields may be included in stochastic inflation. Our formalism can be applied to non-Gaussian fluctuation models for galaxy formation. Fluctuations with wavelengths larger than the horizon length are governed by a network of Langevin equations for the physical fields. Stochastic noise terms arise from quantum fluctuations that are assumed to become classical at horizon crossing and that then contribute to the background. Using Hamilton-Jacobi methods, we solve the Arnowitt-Deser-Misner constraint equations which allows us to separate the growing modes from the decaying ones in the drift phase following each stochastic impulse. We argue that the most reasonable choice of time hypersurfaces for the Langevin system during inflation is T=ln(Ha), where H and a are the local values of the Hubble parameter and the scale factor, since T is the natural time for evolving the short-wavelength scalar field fluctuations in an inhomogeneous background. We derive a Fokker-Planck equation which describes how the probability distribution of scalar field values at a given spatial point evolves in T. Analytic Green's-function solutions obtained for a single scalar field self-interacting through an exponential potential are used to demonstrate (1) if the initial condition of the Hubble parameter is chosen to be consistent with microwave-background limits, H(φ₀)/m_P ≲ 10⁻⁴, then the fluctuations obey Gaussian statistics to a high precision, independent of the time hypersurface choice and operator-ordering ambiguities in the Fokker-Planck equation, and (2) for scales much larger than our present observable patch of the Universe, the distribution is non-Gaussian, with a tail extending to large energy densities; although there are no observable manifestations, it does show eternal inflation. Lattice simulations of our Langevin network for the exponential potential demonstrate how spatial correlations are incorporated. An initially

  12. Stochastic models of neuronal dynamics

    PubMed Central

    Harrison, L.M; David, O; Friston, K.J

    2005-01-01

    Cortical activity is the product of interactions among neuronal populations. Macroscopic electrophysiological phenomena are generated by these interactions. In principle, the mechanisms of these interactions afford constraints on biologically plausible models of electrophysiological responses. In other words, the macroscopic features of cortical activity can be modelled in terms of the microscopic behaviour of neurons. An evoked response potential (ERP) is the mean electrical potential measured from an electrode on the scalp, in response to some event. The purpose of this paper is to outline a population density approach to modelling ERPs. We propose a biologically plausible model of neuronal activity that enables the estimation of physiologically meaningful parameters from electrophysiological data. The model encompasses four basic characteristics of neuronal activity and organization: (i) neurons are dynamic units, (ii) driven by stochastic forces, (iii) organized into populations with similar biophysical properties and response characteristics and (iv) multiple populations interact to form functional networks. This leads to a formulation of population dynamics in terms of the Fokker–Planck equation. The solution of this equation is the temporal evolution of a probability density over state-space, representing the distribution of an ensemble of trajectories. Each trajectory corresponds to the changing state of a neuron. Measurements can be modelled by taking expectations over this density, e.g. mean membrane potential, firing rate or energy consumption per neuron. The key motivation behind our approach is that ERPs represent an average response over many neurons. This means it is sufficient to model the probability density over neurons, because this implicitly models their average state. Although the dynamics of each neuron can be highly stochastic, the dynamics of the density is not. This means we can use Bayesian inference and estimation tools that have

  13. Kalman filter parameter estimation for a nonlinear diffusion model of epithelial cell migration using stochastic collocation and the Karhunen-Loeve expansion.

    PubMed

    Barber, Jared; Tanase, Roxana; Yotov, Ivan

    2016-06-01

    Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise, parameterized by the Karhunen-Loeve (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data with and without noise, as well as data from a laboratory experiment. While it is observed that all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, it is illustrated that the algorithms that employ SC and KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. In the case of SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling. In the case of KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and KL expansion. PMID:27085426
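
    A minimal sketch of the ensemble Kalman filter analysis step with an augmented state (a scalar toy state plus one uncertain parameter, not the epithelial migration model; the observation value and all spreads are assumptions):

        import numpy as np

        rng = np.random.default_rng(8)
        n_ens = 100
        obs, obs_var = 1.2, 0.05**2            # single noisy measurement of the state

        # ensemble columns: [state u, uncertain parameter theta]
        ens = np.column_stack([rng.normal(1.0, 0.3, n_ens),
                               rng.normal(0.0, 0.5, n_ens)])

        y_pred = ens[:, 0]                     # observation operator: measure the state only
        anom = ens - ens.mean(axis=0)
        cov_xy = anom.T @ (y_pred - y_pred.mean()) / (n_ens - 1)   # cross-covariance
        gain = cov_xy / (y_pred.var(ddof=1) + obs_var)             # Kalman gain
        perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens) # perturbed observations
        ens = ens + np.outer(perturbed - y_pred, gain)             # analysis update

        print("posterior parameter mean:", ens[:, 1].mean())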

  14. Time-quantified Monte Carlo algorithm for interacting spin array micromagnetic dynamics

    NASA Astrophysics Data System (ADS)

    Cheng, X. Z.; Jalil, M. B. A.; Lee, Hwee Kuan

    2006-06-01

    In this paper, we reexamine the validity of using time-quantified Monte Carlo (TQMC) method [Phys. Rev. Lett. 84, 163 (2000); 96, 067208 (2006)] in simulating the stochastic dynamics of interacting magnetic nanoparticles. The Fokker-Planck coefficients corresponding to both TQMC and the Langevin dynamical equation (Landau-Lifshitz-Gilbert, LLG) are derived and compared in the presence of interparticle interactions. The time quantification factor is obtained and justified. Numerical verification is shown by using TQMC and Langevin methods in analyzing spin-wave dispersion in a linear array of magnetic nanoparticles.

  15. New approach to dynamical Monte Carlo methods: application to an epidemic model

    NASA Astrophysics Data System (ADS)

    Aiello, O. E.; da Silva, M. A. A.

    2003-09-01

    In this work we introduce a new approach to dynamical Monte Carlo methods for simulating Markovian processes. We apply this approach to formulate and study a generalized SIRS epidemic model. The results are in excellent agreement with the fourth-order Runge-Kutta method in the region of deterministic solution. We also show that purely local interactions reproduce a Poissonian-like process at the mesoscopic level. The simulations for this case are checked self-consistently using a stochastic version of the Euler method.

  16. Analysis and Monte Carlo simulation of near-terminal aircraft flight paths

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1982-01-01

    The flight paths of arriving and departing aircraft at an airport are stochastically represented. Radar data of the aircraft movements are used to decompose the flight paths into linear and curvilinear segments. Variables which describe the segments are derived, and the best fitting probability distributions of the variables, based on a sample of flight paths, are found. Conversely, given information on the probability distribution of the variables, generation of a random sample of flight paths in a Monte Carlo simulation is discussed. Actual flight paths at Dulles International Airport are analyzed and simulated.

  17. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces the computation time wasted on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the

  18. Probabilistic solutions of some multi-degree-of-freedom nonlinear stochastic dynamical systems excited by filtered Gaussian white noise

    NASA Astrophysics Data System (ADS)

    Er, Guo-Kang

    2014-04-01

    In this paper, the state-space-split method is extended for the dimension reduction of some high-dimensional Fokker-Planck-Kolmogorov equations, or equivalently of the nonlinear stochastic dynamical systems in high dimensions, subject to external excitation in the form of filtered Gaussian white noise governed by a second-order stochastic differential equation. The selection of the sub-state variables and the subsequent dimension-reduction procedure are given for a class of nonlinear stochastic dynamical systems under such excitation. A stretched Euler-Bernoulli beam with hinge supports at two ends and point-spring supports, excited by a uniformly distributed load that is filtered Gaussian white noise governed by a second-order stochastic differential equation, is analyzed, and numerical results are presented. The results obtained with the presented procedure are compared with those obtained with Monte Carlo simulation and the equivalent linearization method to show the effectiveness and advantage of the state-space-split method and the exponential polynomial closure method in analyzing the stationary probabilistic solutions of multi-degree-of-freedom nonlinear stochastic dynamical systems excited by filtered Gaussian white noise.

  19. A coupled stochastic inverse-management framework for dealing with nonpoint agriculture pollution under groundwater parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.

    2014-04-01

    In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm to solve the coupled flow and mass transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty, thus obtaining more reliable policies on groundwater nitrate pollution control from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, as those given by water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology allows providing the trade-off between higher economic returns and reliability in meeting the environmental standards. Therefore, this new technology can help stakeholders in the decision-making process under an uncertainty environment. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.

  20. Stochastic differential equations: singularity of coefficients, regression models, and stochastic approximation

    NASA Astrophysics Data System (ADS)

    Mel'nikov, A. V.

    1996-10-01

    Contents:
    Introduction
    Chapter I. Basic notions and results from contemporary martingale theory
      §1.1. General notions of the martingale theory
      §1.2. Convergence (a.s.) of semimartingales. The strong law of large numbers and the law of the iterated logarithm
    Chapter II. Stochastic differential equations driven by semimartingales
      §2.1. Basic notions and results of the theory of stochastic differential equations driven by semimartingales
      §2.2. The method of monotone approximations. Existence of strong solutions of stochastic equations with non-smooth coefficients
      §2.3. Linear stochastic equations. Properties of stochastic exponentials
      §2.4. Linear stochastic equations. Applications to models of the financial market
    Chapter III. Procedures of stochastic approximation as solutions of stochastic differential equations driven by semimartingales
      §3.1. Formulation of the problem. A general model and its relation to the classical one
      §3.2. A general description of the approach to the procedures of stochastic approximation. Convergence (a.s.) and asymptotic normality
      §3.3. The Gaussian model of stochastic approximation. Averaged procedures and their effectiveness
    Chapter IV. Statistical estimation in regression models with martingale noises
      §4.1. The formulation of the problem and classical regression models
      §4.2. Asymptotic properties of MLS-estimators. Strong consistency, asymptotic normality, the law of the iterated logarithm
      §4.3. Regression models with deterministic regressors
      §4.4. Sequential MLS-estimators with guaranteed accuracy and sequential statistical inferences
    Bibliography

  1. Stochastic dynamics of dengue epidemics

    NASA Astrophysics Data System (ADS)

    de Souza, David R.; Tomé, Tânia; Pinho, Suani T. R.; Barreto, Florisneide R.; de Oliveira, Mário J.

    2013-01-01

    We use a stochastic Markovian dynamics approach to describe the spreading of vector-transmitted diseases, such as dengue, and the threshold of the disease. The coexistence space is composed of two structures representing the human and mosquito populations. The human population follows a susceptible-infected-recovered (SIR) type dynamics and the mosquito population follows a susceptible-infected-susceptible (SIS) type dynamics. The human infection is caused by infected mosquitoes and vice versa, so that the SIS and SIR dynamics are interconnected. We develop a truncation scheme to solve the evolution equations from which we get the threshold of the disease and the reproductive ratio. The threshold of the disease is also obtained by performing numerical simulations. We found that for certain values of the infection rates the spreading of the disease is impossible, for any death rate of infected mosquitoes.

  2. Thermodynamics of stochastic Turing machines

    NASA Astrophysics Data System (ADS)

    Strasberg, Philipp; Cerrillo, Javier; Schaller, Gernot; Brandes, Tobias

    2015-10-01

    In analogy to Brownian computers, we explicitly show how to construct stochastic models which mimic the behavior of a general-purpose computer (a Turing machine). Our models are discrete-state systems obeying a Markovian master equation, which are logically reversible and have a well-defined and consistent thermodynamic interpretation. The resulting master equation, which describes a simple one-step process on an enormously large state space, allows us to thoroughly investigate the thermodynamics of computation for this situation. In the stationary regime, especially, the master equation can be well approximated by a simple Fokker-Planck equation in one dimension. We then show that the entropy production rate at steady state can be made arbitrarily small, but the total (integrated) entropy production is finite and grows logarithmically with the number of computational steps.

  3. Stochastic thermodynamics for active matter

    NASA Astrophysics Data System (ADS)

    Speck, Thomas

    2016-05-01

    The theoretical understanding of active matter, which is driven out of equilibrium by directed motion, is still fragmentary and model-oriented. Stochastic thermodynamics, on the other hand, is a comprehensive theoretical framework for driven systems that makes it possible to define fluctuating work and heat. We apply these definitions to active matter, assuming that dissipation can be modelled by effective non-conservative forces. We show that, through the work, conjugate extensive and intensive observables can be defined even in non-equilibrium steady states lacking a free energy. As an illustration, we derive the expressions for the pressure and interfacial tension of active Brownian particles. The latter becomes negative despite the observed stable phase separation. We discuss this apparent contradiction, highlighting the role of fluctuations, and we offer a tentative explanation.

  4. Stochastic sensing through covalent interactions

    SciTech Connect

    Bayley, Hagan; Shin, Seong-Ho; Luchian, Tudor; Cheley, Stephen

    2013-03-26

    A system and method for stochastic sensing in which the analyte covalently bonds to the sensor element or an adaptor element. If such bonding is irreversible, the bond may be broken by a chemical reagent. The sensor element may be a protein, such as the engineered P_SH type or αHL protein pore. The analyte may be any reactive analyte, including chemical weapons, environmental toxins and pharmaceuticals. The analyte covalently bonds to the sensor element to produce a detectable signal. Possible signals include change in electrical current, change in force, and change in fluorescence. Detection of the signal allows identification of the analyte and determination of its concentration in a sample solution. Multiple analytes present in the same solution may be detected.

  5. Thermodynamics of stochastic Turing machines.

    PubMed

    Strasberg, Philipp; Cerrillo, Javier; Schaller, Gernot; Brandes, Tobias

    2015-10-01

    In analogy to Brownian computers, we explicitly show how to construct stochastic models which mimic the behavior of a general-purpose computer (a Turing machine). Our models are discrete-state systems obeying a Markovian master equation, which are logically reversible and have a well-defined and consistent thermodynamic interpretation. The resulting master equation, which describes a simple one-step process on an enormously large state space, allows us to thoroughly investigate the thermodynamics of computation for this situation. In the stationary regime, especially, the master equation can be well approximated by a simple Fokker-Planck equation in one dimension. We then show that the entropy production rate at steady state can be made arbitrarily small, but the total (integrated) entropy production is finite and grows logarithmically with the number of computational steps. PMID:26565165

  6. Multiscale Stochastic Simulation and Modeling

    SciTech Connect

    James Glimm; Xiaolin Li

    2006-01-10

    Acceleration driven instabilities of fluid mixing layers include the classical cases of Rayleigh-Taylor instability, driven by a steady acceleration, and Richtmyer-Meshkov instability, driven by an impulsive acceleration. Our program starts with high-resolution methods for the numerical simulation of two (or more) distinct fluids, continues with analytic analysis of these solutions, and the derivation of averaged equations. A striking achievement has been the systematic agreement we obtained between simulation and experiment by using a high-resolution numerical method and improved physical modeling, with surface tension. Our study is accompanied by analysis using stochastic modeling and averaged equations for the multiphase problem. We have quantified the error and uncertainty using statistical modeling methods.

  7. Heuristic-biased stochastic sampling

    SciTech Connect

    Bresina, J.L.

    1996-12-31

    This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering to a search heuristic often does not yield the best solution and, therefore, exploration off the heuristic path can prove fruitful. Within the HBSS approach, the balance between heuristic adherence and exploration can be controlled according to the confidence one has in the heuristic. By varying this balance, encoded as a bias function, the HBSS approach encompasses a family of search algorithms of which greedy search and completely random search are extreme members. We present empirical results from an application of HBSS to the real-world problem of observation scheduling. These results show that with the proper bias function, it can be easy to outperform greedy search.
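
    A minimal sketch of one HBSS decision point (the exponential bias and the slack-time heuristic are illustrative assumptions, not the paper's scheduling domain): candidates are ranked by the heuristic, a bias function converts rank to weight, and the choice is drawn from the normalized weights. A bias that puts all weight on rank 0 recovers greedy search, while a constant bias recovers completely random search.

        import numpy as np

        rng = np.random.default_rng(9)

        def hbss_choose(candidates, heuristic, bias=lambda rank: np.exp(-rank)):
            """Pick one candidate: rank by the heuristic, then sample by biased rank weights."""
            order = sorted(candidates, key=heuristic)     # rank 0 = heuristic's first choice
            weights = np.array([bias(r) for r in range(len(order))])
            return order[rng.choice(len(order), p=weights / weights.sum())]

        # e.g. choose among observation tasks scored by a (hypothetical) slack-time heuristic
        tasks = [("obs-a", 3.0), ("obs-b", 1.5), ("obs-c", 4.2)]
        print(hbss_choose(tasks, heuristic=lambda task: task[1]))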

  8. Extinction of metastable stochastic populations.

    PubMed

    Assaf, Michael; Meerson, Baruch

    2010-02-01

    We investigate the phenomenon of extinction of a long-lived self-regulating stochastic population, caused by intrinsic (demographic) noise. Extinction typically occurs via one of two scenarios depending on whether the absorbing state n=0 is a repelling (scenario A) or attracting (scenario B) point of the deterministic rate equation. In scenario A the metastable stochastic population resides in the vicinity of an attracting fixed point next to the repelling point n=0. In scenario B there is an intermediate repelling point n=n₁ between the attracting point n=0 and another attracting point n=n₂ in the vicinity of which the metastable population resides. The crux of the theory is a dissipative variant of the WKB (Wentzel-Kramers-Brillouin) approximation which assumes that the typical population size in the metastable state is large. Starting from the master equation, we calculate the quasistationary probability distribution of the population sizes and the (exponentially long) mean time to extinction for each of the two scenarios. When necessary, the WKB approximation is complemented (i) by a recursive solution of the quasistationary master equation at small n and (ii) by the van Kampen system-size expansion, valid near the fixed points of the deterministic rate equation. The theory yields both entropic barriers to extinction and pre-exponential factors, and holds for a general set of multistep processes when detailed balance is broken. The results simplify considerably for single-step processes and near the characteristic bifurcations of scenarios A and B. PMID:20365539

  9. Stochastic dynamics of cancer initiation

    NASA Astrophysics Data System (ADS)

    Foo, Jasmine; Leder, Kevin; Michor, Franziska

    2011-02-01

    Most human cancer types result from the accumulation of multiple genetic and epigenetic alterations in a single cell. Once the first change (or changes) have arisen, tumorigenesis is initiated and the subsequent emergence of additional alterations drives progression to more aggressive and ultimately invasive phenotypes. Elucidation of the dynamics of cancer initiation is of importance for an understanding of tumor evolution and cancer incidence data. In this paper, we develop a novel mathematical framework to study the processes of cancer initiation. Cells at risk of accumulating oncogenic mutations are organized into small compartments of cells and proliferate according to a stochastic process. During each cell division, an (epi)genetic alteration may arise which leads to a random fitness change, drawn from a probability distribution. Cancer is initiated when a cell gains a fitness sufficiently high to escape from the homeostatic mechanisms of the cell compartment. To investigate cancer initiation during a human lifetime, a 'race' between this fitness process and the aging process of the patient is considered; the latter is modeled as a second stochastic Markov process in an aging dimension. This model allows us to investigate the dynamics of cancer initiation and its dependence on the mutational fitness distribution. Our framework also provides a methodology to assess the effects of different life expectancy distributions on lifetime cancer incidence. We apply this methodology to colorectal tumorigenesis while considering life expectancy data of the US population to inform the dynamics of the aging process. We study how the probability of cancer initiation prior to death, the time until cancer initiation, and the mutational profile of the cancer-initiating cell depends on the shape of the mutational fitness distribution and life expectancy of the population.

  10. Stochastic inversion by ray continuation

    SciTech Connect

    Haas, A.; Viallix

    1989-05-01

    The conventional tomographic inversion consists of minimizing residuals between measured and modelled traveltimes. The process tends to be unstable, and some additional constraints are required to stabilize it. The stochastic formulation generalizes the technique and sets it on a firmer theoretical basis. The Stochastic Inversion by Ray Continuation (SIRC) is a probabilistic approach, which takes a priori geological information into account and uses probability distributions to characterize data correlations and errors. It makes it possible to tie uncertainties to the results. The estimated parameters are interval velocities and B-spline coefficients used to represent smoothed interfaces. Ray tracing is done by a continuation technique between source and receivers. The ray coordinates are computed from one path to the next by solving a linear system derived from Fermat's principle. The main advantages are fast computations, accurate traveltimes and derivatives. The seismic traces are gathered in CMPs. For a particular CMP, several reflecting elements are characterized by their time gradient measured on the stacked section, and related to a mean emergence direction. The program capabilities are tested on a synthetic example as well as on a field example. The strategy consists in inverting the parameters for one layer, then for the next one down. An inversion step is divided into two parts. First the parameters for the layer concerned are inverted, while the parameters for the upper layers remain fixed. Then all the parameters are reinverted. The velocity-depth section computed by the program, together with the corresponding errors, can be used directly for interpretation, as an initial model for depth migration, or for the complete inversion program under development.

  11. Multiple Stochastic Point Processes in Gene Expression

    NASA Astrophysics Data System (ADS)

    Murugan, Rajamanickam

    2008-04-01

    We generalize the idea of multiple-stochasticity in chemical reaction systems to gene expression. Using the Chemical Langevin Equation approach we investigate how this multiple-stochasticity can influence the overall molecular number fluctuations. We show that the main sources of this multiple-stochasticity in gene expression could be the randomness in transcription and translation initiation times, which in turn originates from the underlying bio-macromolecular recognition processes such as site-specific DNA-protein interactions, and can therefore be internally regulated by supra-molecular structural factors such as the condensation/super-coiling of DNA. Our theory predicts that (1) in the case of a gene expression system, the variance (φ) introduced by the randomness in transcription and translation initiation times approximately scales with the degree of condensation (s) of DNA or mRNA as φ ∝ s⁻⁶. From the theoretical analysis of the Fano factor as well as the coefficient of variation associated with the protein number fluctuations we predict that (2) unlike the singly-stochastic case, where the Fano factor has been shown to be a monotonous function of the translation rate, in the case of multiple-stochastic gene expression the Fano factor is a turnover function with a definite minimum. This in turn suggests that the multiple-stochastic processes can also be tuned to behave like a singly-stochastic point process by adjusting the rate parameters.
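
    The Chemical Langevin Equation itself is straightforward to integrate numerically. The sketch below applies Euler-Maruyama to the standard two-stage (mRNA-protein) expression model and estimates a Fano factor from the trajectory; the rates are assumed, and the extra initiation-time randomness analyzed in the paper is not modeled.

    ```python
    # Euler-Maruyama integration of the Chemical Langevin Equation for the
    # two-stage expression model mRNA -> protein, with one independent Wiener
    # increment per elementary reaction channel.  Rates are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    k_m, g_m = 2.0, 1.0     # mRNA synthesis / degradation rates (assumed)
    k_p, g_p = 10.0, 0.1    # translation / protein degradation rates (assumed)
    dt, T, burn_in = 1e-3, 200.0, 20.0

    m, p = k_m / g_m, k_m * k_p / (g_m * g_p)  # start at the deterministic fixed point
    samples = []
    for step in range(int(T / dt)):
        a = np.array([k_m, g_m * m, k_p * m, g_p * p])    # reaction propensities
        dW = rng.normal(0.0, np.sqrt(dt), 4)
        m += (a[0] - a[1]) * dt + np.sqrt(a[0]) * dW[0] - np.sqrt(a[1]) * dW[1]
        p += (a[2] - a[3]) * dt + np.sqrt(a[2]) * dW[2] - np.sqrt(a[3]) * dW[3]
        m, p = max(m, 0.0), max(p, 0.0)                   # CLE can dip slightly negative
        if step * dt > burn_in:
            samples.append(p)

    samples = np.array(samples)
    print(f"protein Fano factor ~ {samples.var() / samples.mean():.2f}")
    ```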

  12. LETTER TO THE EDITOR: Transfer-matrix density-matrix renormalization group for stochastic models: the Domany-Kinzel cellular automaton

    NASA Astrophysics Data System (ADS)

    Kemper, A.; Schadschneider, A.; Zittartz, J.

    2001-05-01

    We apply the transfer-matrix density-matrix renormalization group (TMRG) to a stochastic model, the Domany-Kinzel cellular automaton, which exhibits a non-equilibrium phase transition in the directed percolation universality class. Estimates for the stochastic time evolution, phase boundaries and critical exponents can be obtained with high precision. This is possible using only modest numerical effort since the thermodynamic limit can be taken analytically in our approach. We also point out further advantages of the TMRG over other numerical approaches, such as classical DMRG or Monte Carlo simulations.

  13. Solving stochastic epidemiological models using computer algebra

    NASA Astrophysics Data System (ADS)

    Hincapie, Doracelly; Ospina, Juan

    2011-06-01

    Mathematical modeling in epidemiology is an important tool for understanding the ways in which diseases are transmitted and controlled. Mathematical modeling can be implemented via deterministic or stochastic models. Deterministic models are based on small systems of non-linear ordinary differential equations, whereas stochastic models are based on very large systems of linear differential equations. Deterministic models admit complete, rigorous and automatic analysis of stability, both local and global, from which it is possible to derive algebraic expressions for the basic reproductive number and the corresponding epidemic thresholds using computer algebra software. Stochastic models are more difficult to treat, and the analysis of their properties requires complicated considerations in statistical mathematics. In this work we propose to use computer algebra software to solve stochastic epidemic models such as the SIR model and the carrier-borne model. Specifically, we use Maple to solve these stochastic models in the case of small groups, and we obtain results that do not appear in standard textbooks or in recent books on stochastic models in epidemiology. From our results we derive expressions which coincide with those obtained in the classical texts using advanced procedures in mathematical statistics. Our algorithms can be extended to other stochastic models in epidemiology, which shows the power of computer algebra software not only for the analysis of deterministic models but also for the analysis of stochastic models. We also perform numerical simulations with our algebraic results and obtain estimates for basic parameters such as the basic reproductive rate and the stochastic threshold. We claim that our algorithms and results are important tools for controlling diseases in a globalized world.
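
    A numerical companion to the computer-algebra approach: for a small group, the stochastic SIR model is a finite continuous-time Markov chain, so the master equation can be solved exactly by matrix exponentiation. Group size and rates below are illustrative assumptions.

    ```python
    # Exact solution of the stochastic SIR master equation for a small group:
    # enumerate the states (S, I), build the CTMC generator, and exponentiate.
    import numpy as np
    from scipy.linalg import expm

    beta, gamma, N = 1.5, 1.0, 5      # infection rate, recovery rate, group size (assumed)

    # Enumerate states (S, I) with S + I <= N; R = N - S - I is implicit.
    states = [(s, i) for s in range(N + 1) for i in range(N + 1 - s)]
    index = {st: k for k, st in enumerate(states)}

    Q = np.zeros((len(states), len(states)))
    for (s, i), k in index.items():
        if s > 0 and i > 0:                        # infection: (S, I) -> (S-1, I+1)
            rate = beta * s * i / N
            Q[k, index[(s - 1, i + 1)]] += rate
            Q[k, k] -= rate
        if i > 0:                                  # recovery: (S, I) -> (S, I-1)
            Q[k, index[(s, i - 1)]] += gamma * i
            Q[k, k] -= gamma * i

    p0 = np.zeros(len(states))
    p0[index[(N - 1, 1)]] = 1.0                    # one initial infective
    for t in (1.0, 5.0, 50.0):
        pt = p0 @ expm(Q * t)                      # p(t) = p(0) exp(Q t)
        p_over = sum(pt[index[(s, 0)]] for s in range(N + 1))
        print(f"t = {t:5.1f}:  P(epidemic over) = {p_over:.4f}")
    ```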

  14. Stochastic analysis of immiscible displacement of the fluids with arbitrary viscosities and its dependence on support scale of hydrological data

    NASA Astrophysics Data System (ADS)

    Tartakovsky, Alexandre M.; Meakin, Paul; Huang, Hai

    2004-12-01

    Stochastic analysis is commonly used to address uncertainty in the modeling of flow and transport in porous media. In the stochastic approach, the properties of porous media are treated as random functions with statistics obtained from field measurements. Several studies indicate that hydrological properties depend on the scale of measurement, or support scale, but most stochastic analyses do not address the effects of support scale on stochastic predictions of subsurface processes. In this work we propose a new approach to study the scale dependence of stochastic predictions. We present a stochastic analysis of immiscible fluid-fluid displacement in randomly heterogeneous porous media. While existing solutions are applicable only to systems in which the viscosity of one phase is negligible compared with the viscosity of the other (water-air systems, for example), our solutions can be applied to the immiscible displacement of fluids having arbitrary viscosities, such as NAPL-water and water-oil systems. Treating intrinsic permeability as a random field with statistics dependent on the permeability support scale (scale of measurement), we obtained, for one-dimensional systems, analytical solutions for the first moments characterizing unbiased predictions (estimates) of system variables, such as the pressure and fluid-fluid interface position, and we also obtained second moments, which characterize the uncertainties associated with such predictions. Next we obtained an empirically scale-dependent exponential correlation function of the intrinsic permeability that allowed us to study solutions of the stochastic equations as a function of the support scale. We found that the first and second moments converge to asymptotic values as the support scale decreases. In our examples, the statistical moments reached asymptotic values for support scales that were approximately 1/10000 of the flow domain size. We show that the analytical moment solutions compare well with the results of Monte Carlo simulations.

  15. Intrinsic noise analyzer: a software package for the exploration of stochastic biochemical kinetics using the system size expansion.

    PubMed

    Thomas, Philipp; Matuschek, Hannes; Grima, Ramon

    2012-01-01

    The accepted stochastic descriptions of biochemical dynamics under well-mixed conditions are given by the Chemical Master Equation and the Stochastic Simulation Algorithm, which are equivalent. The latter is a Monte Carlo method, which, despite enjoying broad availability in a large number of existing software packages, is computationally expensive due to the huge amount of ensemble averaging required for obtaining accurate statistical information. The former is a set of coupled differential-difference equations for the probability of the system being in any one of the possible mesoscopic states; these equations are typically computationally intractable because of the inherently large state space. Here we introduce the software package intrinsic Noise Analyzer (iNA), which allows for systematic analysis of stochastic biochemical kinetics by means of van Kampen's system size expansion of the Chemical Master Equation. iNA is platform independent and supports the popular SBML format natively. The present implementation is the first to adopt a complementary approach that combines state-of-the-art analysis tools using the computer algebra system Ginac with traditional methods of stochastic simulation. iNA integrates two approximation methods based on the system size expansion, the Linear Noise Approximation and effective mesoscopic rate equations, which to date have not been available to non-expert users, into an easy-to-use graphical user interface. In particular, the present methods allow for quick approximate analysis of time-dependent mean concentrations, variances, covariances and correlation coefficients, which typically outperforms stochastic simulations. These analytical tools are complemented by automated multi-core stochastic simulations with direct statistical evaluation and visualization. We showcase iNA's performance by using it to explore the stochastic properties of cooperative and non-cooperative enzyme kinetics and a gene network associated with
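
    A hand-rolled version of the Linear Noise Approximation that iNA automates, for the two-stage gene expression model: integrate the macroscopic rate equations together with the Lyapunov equation for the covariance. This is a generic sketch with assumed rates, not iNA's implementation or API.

    ```python
    # Linear Noise Approximation for mRNA -> protein expression:
    # d(mean)/dt = f, dS/dt = J S + S J^T + D, integrated jointly.
    import numpy as np
    from scipy.integrate import solve_ivp

    k_m, g_m, k_p, g_p = 2.0, 1.0, 10.0, 0.1   # illustrative rates (assumed)

    def rhs(t, y):
        m, p = y[:2]
        S = y[2:].reshape(2, 2)                 # covariance matrix
        f = np.array([k_m - g_m * m, k_p * m - g_p * p])     # macroscopic rates
        J = np.array([[-g_m, 0.0], [k_p, -g_p]])             # Jacobian of f
        D = np.diag([k_m + g_m * m, k_p * m + g_p * p])      # diffusion matrix
        dS = J @ S + S @ J.T + D                             # Lyapunov equation
        return np.concatenate([f, dS.ravel()])

    y0 = np.zeros(6)                            # empty cell, zero covariance
    sol = solve_ivp(rhs, (0.0, 100.0), y0, rtol=1e-8)
    p = sol.y[1, -1]
    S = sol.y[2:, -1].reshape(2, 2)
    print(f"protein: mean = {p:.1f}, variance = {S[1, 1]:.1f}, "
          f"Fano = {S[1, 1] / p:.2f}")
    ```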

  16. On the Value Function of Weakly Coercive Problems in Nonlinear Stochastic Control

    SciTech Connect

    Motta, Monica; Sartori, Caterina

    2011-08-15

    In this paper we investigate via a dynamic programming approach some nonlinear stochastic control problems where the control set is unbounded and a classical coercivity hypothesis is replaced by some weaker assumptions. We prove that these problems can be approximated by finite fuel problems; show the continuity of the relative value functions and characterize them as unique viscosity solutions of a quasi-variational inequality with suitable boundary conditions.

  17. Analysis of system drought for Manitoba Hydro using stochastic methods

    NASA Astrophysics Data System (ADS)

    Akintug, Bertug

    Stochastic time series models are commonly used in the analysis of large-scale water resources systems. In the stochastic approach, synthetic flow scenarios are generated and used for the analysis of complex events such as multi-year droughts. Conclusions drawn from such analyses are only plausible to the extent that the underlying time series model realistically represents the natural variability of flows. Traditionally, hydrologists have favoured autoregressive moving average (ARMA) models to describe annual flows. In this research project, a class of models called Markov-Switching (MS) models (also referred to as Hidden Markov models) is presented as an alternative to conventional ARMA models. The basic assumption underlying this model is that a limited number of flow regimes exists and that each flow year can be classified as belonging to one of these regimes. The persistence of, and switching between, regimes is described by a Markov chain. Within each regime, it is assumed that annual flows follow a normal distribution with mean and variance that depend on the regime. The simplicity of this model makes it possible to derive a number of model characteristics analytically, such as moments, autocorrelation, and cross-correlation. Model estimation is possible with the maximum likelihood method implemented using the Expectation Maximization (EM) algorithm. The uncertainty in the model parameters can be assessed through Bayesian inference using Markov Chain Monte Carlo (MCMC) methods. A Markov-Switching disaggregation (MSD) model is also proposed in this research project to disaggregate higher-level flows generated using the MS model into lower-level flows. The MSD model preserves the additivity property because for a given year both the higher-level and lower-level variables are generated from normal distributions. The 2-state MS and MSD models are applied to Manitoba Hydro's system along with more conventional first-order autoregressive and disaggregation models and
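
    A minimal simulation of the 2-state MS model described above: a hidden Markov regime with normal annual flows in each regime. Transition probabilities and regime statistics are placeholders, not fitted Manitoba Hydro values.

    ```python
    # 2-state Markov-Switching (hidden Markov) annual flow model: regimes
    # follow a Markov chain; flows within a regime are normal.  A longest-run
    # statistic serves as a crude multi-year drought diagnostic.
    import numpy as np

    rng = np.random.default_rng(42)
    P = np.array([[0.9, 0.1],        # P[i, j] = P(regime j next year | regime i)
                  [0.2, 0.8]])
    mu = np.array([100.0, 60.0])     # regime means (wet, dry), arbitrary units
    sd = np.array([15.0, 10.0])      # regime standard deviations

    years, regime = 1000, 0
    flows = np.empty(years)
    for t in range(years):
        regime = rng.choice(2, p=P[regime])            # regime switching
        flows[t] = rng.normal(mu[regime], sd[regime])  # flow within regime

    # Longest run of consecutive years below a drought threshold.
    longest, run = 0, 0
    for low in (flows < 75.0):
        run = run + 1 if low else 0
        longest = max(longest, run)
    print(f"longest simulated drought: {longest} consecutive years below threshold")
    ```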

  18. Second Cancers After Fractionated Radiotherapy: Stochastic Population Dynamics Effects

    NASA Technical Reports Server (NTRS)

    Sachs, Rainer K.; Shuryak, Igor; Brenner, David; Fakir, Hatim; Hahnfeldt, Philip

    2007-01-01

    When ionizing radiation is used in cancer therapy it can induce second cancers in nearby organs. Mainly due to longer patient survival times, these second cancers have become of increasing concern. Estimating the risk of solid second cancers involves modeling: because of long latency times, available data are usually for older, obsolescent treatment regimens. Moreover, modeling second cancers gives unique insights into human carcinogenesis, since the therapy involves administering well characterized doses of a well studied carcinogen, followed by long-term monitoring. In addition to putative radiation initiation that produces pre-malignant cells, inactivation (i.e. cell killing) and subsequent cell repopulation by proliferation can be important at the doses relevant to second cancer situations. A recent initiation/inactivation/proliferation (IIP) model characterized quantitatively the observed occurrence of second breast and lung cancers, using a deterministic cell population dynamics approach. To analyze if radiation-initiated pre-malignant clones become extinct before full repopulation can occur, we here give a stochastic version of this IIP model. Combining Monte Carlo simulations with standard solutions for time-inhomogeneous birth-death equations, we show that repeated cycles of inactivation and repopulation, as occur during fractionated radiation therapy, can lead to distributions of pre-malignant cells per patient with variance >> mean, even when pre-malignant clones are Poisson-distributed. Thus fewer patients would be affected, but with a higher probability, than a deterministic model tracking average pre-malignant cell numbers would predict. Our results are applied to data on breast cancers after radiotherapy for Hodgkin disease. The stochastic IIP analysis, unlike the deterministic one, indicates: a) initiated, pre-malignant cells can have a growth advantage during repopulation, not just during the longer tumor latency period that follows; b) weekend

  19. A one-time truncate and encode multiresolution stochastic framework

    SciTech Connect

    Abgrall, R.; Congedo, P.M.; Geraci, G.

    2014-01-15

    In this work a novel adaptive strategy for stochastic problems, inspired by the classical Harten framework, is presented. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme, and handles a large class of problems, from unsteady to discontinuous solutions. Its formulation permits recovering the same results concerning the interpolation theory of the classical multiresolution approach, but with an extension to uncertainty quantification problems. The present strategy permits building numerical schemes with a higher accuracy with respect to other classical uncertainty quantification techniques, but with a strong reduction of numerical cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time-varying, without introducing further complications in the algorithm. The advantages of the present strategy are demonstrated on several numerical problems where different forms of uncertainty distributions are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan-Orszag problem are reported in terms of accuracy and convergence. Finally, a two degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model allows recovering some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. The injection of an uncertainty is chosen in order to obtain a complete parameterization of the mass matrix. All the numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.

  20. Detection methods for non-Gaussian gravitational wave stochastic backgrounds

    NASA Astrophysics Data System (ADS)

    Drasco, Steve; Flanagan, Éanna É.

    2003-04-01

    A gravitational wave stochastic background can be produced by a collection of independent gravitational wave events. There are two classes of such backgrounds, one for which the ratio of the average time between events to the average duration of an event is small (i.e., many events are on at once), and one for which the ratio is large. In the first case the signal is continuous, sounds something like a constant hiss, and has a Gaussian probability distribution. In the second case, the discontinuous or intermittent signal sounds something like popcorn popping, and is described by a non-Gaussian probability distribution. In this paper we address the issue of finding an optimal detection method for such a non-Gaussian background. As a first step, we examine the idealized situation in which the event durations are short compared to the detector sampling time, so that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. For this situation we derive an appropriate version of the maximum likelihood detection statistic. We compare the performance of this statistic to that of the standard cross-correlation statistic both analytically and with Monte Carlo simulations. In general the maximum likelihood statistic performs better than the cross-correlation statistic when the stochastic background is sufficiently non-Gaussian, resulting in a gain factor in the minimum gravitational-wave energy density necessary for detection. This gain factor ranges roughly between 1 and 3, depending on the duty cycle of the background, for realistic observing times and signal strengths for both ground- and space-based detectors. The computational cost of the statistic, although significantly greater than that of the cross-correlation statistic, is not unreasonable. Before the statistic can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned detectors.
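
    A toy Monte Carlo of the setting, useful for seeing why an intermittent ('popcorn') background still produces a nonzero cross-correlation between detectors. Only the standard cross-correlation statistic is shown; the maximum-likelihood statistic derived in the paper is more involved. Duty cycle, burst amplitude, and noise model are assumptions.

    ```python
    # Two collocated detectors see the same intermittent burst background plus
    # independent unit-variance Gaussian noise; the sample cross-correlation
    # estimates the (non-Gaussian) signal variance.
    import numpy as np

    rng = np.random.default_rng(4)
    n, duty, amp = 200_000, 0.01, 2.0   # samples, burst duty cycle, burst amplitude

    burst = np.where(rng.random(n) < duty,
                     rng.normal(0.0, amp, n), 0.0)   # non-Gaussian common signal
    h1 = burst + rng.normal(size=n)                  # detector 1 output
    h2 = burst + rng.normal(size=n)                  # detector 2 output

    cc = (h1 * h2).mean()   # cross-correlation statistic, estimates Var(burst)
    print(f"cross-correlation statistic: {cc:.4f}   "
          f"(signal variance duty*amp^2 = {duty * amp**2:.4f})")
    ```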

  1. Immigration-extinction dynamics of stochastic populations

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch; Ovaskainen, Otso

    2013-07-01

    How high should the rate of immigration into a stochastic population be in order to significantly reduce the probability of observing the population become extinct? Is there any relation between the population size distributions with and without immigration? Under what conditions can one justify the simple patch occupancy models, which ignore the population distribution and its dynamics in a patch, and treat a patch simply as either occupied or empty? We answer these questions by exactly solving a simple stochastic model obtained by adding steady immigration to a variant of the Verhulst model: a prototypical model of an isolated stochastic population.
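
    With immigration, n = 0 is no longer absorbing, so the single-step chain has a genuine stationary distribution obtainable from the detailed-balance recursion P(n+1) = P(n)·λ(n)/μ(n+1). The Verhulst-plus-immigration rates below are assumptions, not the paper's exact model.

    ```python
    # Stationary distribution of a birth-death chain with steady immigration,
    # via the detailed-balance recursion; P(0) plays the role of the
    # 'empty patch' probability in patch-occupancy language.
    import numpy as np

    N, R0, q = 100, 2.0, 0.5                     # cap, reproduction number, immigration
    lam = lambda n: q + R0 * n * (1.0 - n / N)   # birth + immigration rate (assumed)
    mu = lambda n: float(n)                      # death rate (assumed)

    P = np.empty(N + 1)
    P[0] = 1.0
    for n in range(N):
        P[n + 1] = P[n] * lam(n) / mu(n + 1)
    P /= P.sum()

    print(f"P(empty patch) = {P[0]:.3e}")
    print(f"P(occupied)    = {1.0 - P[0]:.6f}")
    ```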

  2. Stochastic system identification in structural dynamics

    USGS Publications Warehouse

    Safak, Erdal

    1988-01-01

    Recently, new identification methods have been developed by using the concept of optimal-recursive filtering and stochastic approximation. These methods, known as stochastic identification, are based on the statistical properties of the signal and noise, and do not require the assumptions of current methods. The criterion for stochastic system identification is that the difference between the recorded output and the output from the identified system (i.e., the residual of the identification) should be equal to white noise. In this paper, first a brief review of the theory is given. Then, an application of the method is presented by using ambient vibration data from a nine-story building.

  3. Topological charge conservation in stochastic optical fields

    NASA Astrophysics Data System (ADS)

    Roux, Filippus S.

    2016-05-01

    The fact that phase singularities in scalar stochastic optical fields are topologically conserved implies the existence of an associated conserved current, which can be expressed in terms of local correlation functions of the optical field and its transverse derivatives. Here, we derive the topological charge current for scalar stochastic optical fields and show that it obeys a conservation equation. We use the expression for the topological charge current to investigate the topological charge flow in inhomogeneous stochastic optical fields with a one-dimensional topological charge density.

  4. Stochastic deformation of a thermodynamic symplectic structure

    NASA Astrophysics Data System (ADS)

    Kazinski, P. O.

    2009-01-01

    A stochastic deformation of a thermodynamic symplectic structure is studied. The stochastic deformation is analogous to the deformation of an algebra of observables such as deformation quantization, but for an imaginary deformation parameter (the Planck constant). Gauge symmetries of thermodynamics and corresponding stochastic mechanics, which describes fluctuations of a thermodynamic system, are revealed and gauge fields are introduced. A physical interpretation to the gauge transformations and gauge fields is given. An application of the formalism to a description of systems with distributed parameters in a local thermodynamic equilibrium is considered.

  5. Stochastic deformation of a thermodynamic symplectic structure.

    PubMed

    Kazinski, P O

    2009-01-01

    A stochastic deformation of a thermodynamic symplectic structure is studied. The stochastic deformation is analogous to the deformation of an algebra of observables such as deformation quantization, but for an imaginary deformation parameter (the Planck constant). Gauge symmetries of thermodynamics and corresponding stochastic mechanics, which describes fluctuations of a thermodynamic system, are revealed and gauge fields are introduced. A physical interpretation to the gauge transformations and gauge fields is given. An application of the formalism to a description of systems with distributed parameters in a local thermodynamic equilibrium is considered. PMID:19256999

  6. Stochastic string models with continuous semimartingales

    NASA Astrophysics Data System (ADS)

    Bueno-Guerrero, Alberto; Moreno, Manuel; Navas, Javier F.

    2015-09-01

    This paper reformulates the stochastic string model of Santa-Clara and Sornette using stochastic calculus with continuous semimartingales. We present some new results, such as: (a) the dynamics of the short-term interest rate, (b) the PDE that must be satisfied by the bond price, and (c) an analytic expression for the price of a European bond call option. Additionally, we clarify some important features of the stochastic string model, show its usefulness for pricing derivatives, and establish its equivalence with an infinite-dimensional HJM model for pricing European options.

  7. Coupling Deterministic and Monte Carlo Transport Methods for the Simulation of Gamma-Ray Spectroscopy Scenarios

    SciTech Connect

    Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

    2008-10-31

    Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

  8. Stochastic Stability and Performance Robustness of Linear Multivariable Systems

    NASA Technical Reports Server (NTRS)

    Ryan, Laurie E.; Stengel, Robert F.

    1990-01-01

    Stochastic robustness, a simple technique used to estimate the robustness of linear, time-invariant systems, is applied to a single-link robot arm control system. Concepts behind stochastic stability robustness are extended to systems with estimators and to stochastic performance robustness. Stochastic performance robustness measures based on classical design specifications are introduced, and the relationship between stochastic robustness measures and control system design parameters is discussed. The application of stochastic performance robustness and the relationship between performance objectives and design parameters are demonstrated by means of an example. The results show stochastic robustness to be a good overall robustness analysis method that can relate robustness characteristics to control system design parameters.
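
    The essence of stochastic robustness analysis is a Monte Carlo estimate of the probability of instability under random parameter variations. The second-order closed loop and uncertainty model below are assumed for illustration; they are not the robot-arm system of the paper.

    ```python
    # Monte Carlo stochastic robustness: sample random parameter variations
    # and estimate the probability that the closed loop loses stability.
    import numpy as np

    rng = np.random.default_rng(8)

    def closed_loop_stable(gain_err, mass_err):
        m = 1.0 * (1.0 + mass_err)                               # uncertain inertia
        kp, kd = 4.0 * (1.0 + gain_err), 2.0 * (1.0 + gain_err)  # uncertain PD gains
        roots = np.roots([m, kd, kp])            # characteristic poly m s^2 + kd s + kp
        return np.all(roots.real < 0.0)

    n = 20_000
    errs = rng.normal(0.0, 0.5, size=(n, 2))     # 50% parameter scatter (assumed)
    p_unstable = np.mean([not closed_loop_stable(g, mm) for g, mm in errs])
    print(f"estimated probability of instability: {p_unstable:.4f}")
    ```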

  9. Stochastic pump effect and geometric phases in dissipative and stochastic systems

    SciTech Connect

    Sinitsyn, Nikolai

    2008-01-01

    The success of Berry phases in quantum mechanics stimulated the study of similar phenomena in other areas of physics, including the theory of living cell locomotion and motion of patterns in nonlinear media. More recently, geometric phases have been applied to systems operating in a strongly stochastic environment, such as molecular motors. We discuss such geometric effects in purely classical dissipative stochastic systems and their role in the theory of the stochastic pump effect (SPE).

  10. Stochastic resonance during a polymer translocation process

    NASA Astrophysics Data System (ADS)

    Mondal, Debasish; Muthukumar, Murugappan

    We study the translocation of a flexible polymer in a confined geometry subjected to a time-periodic external drive in order to explore stochastic resonance. We treat the equilibrium translocation process within a Fokker-Planck description and use a discrete two-state model to describe the effect of the external driving force on the translocation dynamics. We observe that no stochastic resonance is possible if the associated free-energy barrier is purely entropic in nature. The polymer chain experiences a stochastic resonance effect only in the presence of an energy threshold in terms of the polymer-pore interaction. Once stochastic resonance is feasible, the chain entropy controls the optimal synchronization conditions significantly.

  11. Stochastic differential equation model to Prendiville processes

    SciTech Connect

    Granita; Bahar, Arifah

    2015-10-22

    The Prendiville process is a variation of the logistic model which assumes a linearly decreasing population growth rate. It is a continuous time Markov chain (CTMC) taking integer values in a finite interval. The continuous time Markov chain can be approximated by a stochastic differential equation (SDE). This paper discusses the stochastic differential equation of the Prendiville process. The work starts from the forward Kolmogorov equation of the continuous time Markov chain of the Prendiville process, which is then formulated as a central-difference approximation. The approximation is then used in the Fokker-Planck equation to obtain the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process is obtained from the stochastic differential equation, from which the mean and variance functions of the Prendiville process can easily be found.
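
    A quick Euler-Maruyama check of the SDE described above, using the Prendiville rates λ(N − x) for births and μx for deaths; parameter values are illustrative.

    ```python
    # Euler-Maruyama for the diffusion approximation of the Prendiville
    # process: drift lam*(N - x) - mu*x, squared diffusion lam*(N - x) + mu*x.
    import numpy as np

    rng = np.random.default_rng(7)
    lam, mu, N = 0.5, 0.3, 50.0       # birth/death rate constants, cap (assumed)
    dt, T = 1e-3, 20.0
    steps = int(T / dt)

    x = 10.0
    path = np.empty(steps)
    for k in range(steps):
        drift = lam * (N - x) - mu * x
        diff2 = lam * (N - x) + mu * x           # sum of jump rates
        x += drift * dt + np.sqrt(max(diff2, 0.0) * dt) * rng.normal()
        x = min(max(x, 0.0), N)                  # keep the path in [0, N]
        path[k] = x

    # The explicit solution gives the stationary mean lam*N/(lam + mu).
    print(f"simulated long-run mean:  {path[steps // 2:].mean():.2f}")
    print(f"analytic stationary mean: {lam * N / (lam + mu):.2f}")
    ```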

  12. Quadratic Stochastic Operators with Countable State Space

    NASA Astrophysics Data System (ADS)

    Ganikhodjaev, Nasir

    2016-03-01

    In this paper, we provide the classes of Poisson and Geometric quadratic stochastic operators with countable state space, study the dynamics of these operators and discuss their application to economics.

  13. Stochasticity in plant cellular growth and patterning

    PubMed Central

    Meyer, Heather M.; Roeder, Adrienne H. K.

    2014-01-01

    Plants, along with other multicellular organisms, have evolved specialized regulatory mechanisms to achieve proper tissue growth and morphogenesis. During development, growing tissues generate specialized cell types and complex patterns necessary for establishing the function of the organ. Tissue growth is a tightly regulated process that yields highly reproducible outcomes. Nevertheless, the underlying cellular and molecular behaviors are often stochastic. Thus, how does stochasticity, together with strict genetic regulation, give rise to reproducible tissue development? This review draws examples from plants as well as other systems to explore stochasticity in plant cell division, growth, and patterning. We conclude that stochasticity is often needed to create small differences between identical cells, which are amplified and stabilized by genetic and mechanical feedback loops to begin cell differentiation. These first few differentiating cells initiate traditional patterning mechanisms to ensure regular development. PMID:25250034

  14. Extending Stochastic Network Calculus to Loss Analysis

    PubMed Central

    Yu, Li; Zheng, Jun

    2013-01-01

    Loss is an important parameter of Quality of Service (QoS). Though stochastic network calculus is a very useful tool for performance evaluation of computer networks, existing studies on stochastic service guarantees mainly focused on the delay and backlog. Some efforts have been made to analyse loss by deterministic network calculus, but there are few results to extend stochastic network calculus for loss analysis. In this paper, we introduce a new parameter named loss factor into stochastic network calculus and then derive the loss bound through the existing arrival curve and service curve via this parameter. We then prove that our result is suitable for the networks with multiple input flows. Simulations show the impact of buffer size, arrival traffic, and service on the loss factor. PMID:24228019

  15. Synchronization of noisy systems by stochastic signals

    SciTech Connect

    Neiman, A.; Schimansky-Geier, L.; Moss, F.; Schimansky-Geier, L.; Shulgin, B.; Collins, J.J.

    1999-07-01

    We study, in terms of synchronization, the nonlinear response of noisy bistable systems to a stochastic external signal, represented by Markovian dichotomic noise. We propose a general kinetic model which allows us to conduct a full analytical study of the nonlinear response, including the calculation of cross-correlation measures, the mean switching frequency, and synchronization regions. Theoretical results are compared with numerical simulations of a noisy overdamped bistable oscillator. We show that dichotomic noise can instantaneously synchronize the switching process of the system. We also show that synchronization is most pronounced at an optimal noise level; this effect connects this phenomenon with aperiodic stochastic resonance. Similar synchronization effects are observed for a stochastic neuron model stimulated by a stochastic spike train. © 1999 The American Physical Society
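
    A bare-bones numerical analogue of the simulations mentioned above: an overdamped bistable oscillator driven by Markovian dichotomous (telegraph) noise plus thermal noise, with the mean switching frequency as the output. All parameter values are assumptions.

    ```python
    # Overdamped bistable oscillator x' = x - x^3 + A*s(t) + noise, where
    # s = +/-1 is a telegraph signal flipping at rate nu.  Zero crossings of
    # x serve as a crude count of well-to-well switches.
    import numpy as np

    rng = np.random.default_rng(9)
    A, nu, D = 0.25, 0.05, 0.05   # drive amplitude, flip rate, noise intensity (assumed)
    dt, T = 1e-3, 500.0

    x, s, switches = -1.0, 1.0, 0
    for _ in range(int(T / dt)):
        if rng.random() < nu * dt:              # dichotomous noise flips at rate nu
            s = -s
        x_new = x + (x - x**3 + A * s) * dt + np.sqrt(2.0 * D * dt) * rng.normal()
        if x * x_new < 0.0:                     # zero crossing ~ well switch
            switches += 1
        x = x_new

    print(f"mean switching frequency ~ {switches / T:.3f} per unit time")
    ```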

  16. Stochastic structure formation in random media

    NASA Astrophysics Data System (ADS)

    Klyatskin, V. I.

    2016-01-01

    Stochastic structure formation in random media is considered using examples of elementary dynamical systems related to the two-dimensional geophysical fluid dynamics (Gaussian random fields) and to stochastically excited dynamical systems described by partial differential equations (lognormal random fields). In the latter case, spatial structures (clusters) may form with a probability of one in almost every system realization due to rare events happening with vanishing probability. Problems involving stochastic parametric excitation occur in fluid dynamics, magnetohydrodynamics, plasma physics, astrophysics, and radiophysics. A more complicated stochastic problem dealing with anomalous structures on the sea surface (rogue waves) is also considered, where the random Gaussian generation of sea surface roughness is accompanied by parametric excitation.

  17. Bootstrap performance profiles in stochastic algorithms assessment

    SciTech Connect

    Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro

    2015-03-10

    Optimization with stochastic algorithms has become a relevant research field. Due to their stochastic nature, the assessment of such algorithms is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic, even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.

  18. Communication: Embedded fragment stochastic density functional theory

    SciTech Connect

    Neuhauser, Daniel; Baer, Roi; Rabani, Eran

    2014-07-28

    We develop a method in which the electronic densities of small fragments determined by Kohn-Sham density functional theory (DFT) are embedded using stochastic DFT to form the exact density of the full system. The new method preserves the scaling and the simplicity of the stochastic DFT but cures the slow convergence that occurs when weakly coupled subsystems are treated. It overcomes the spurious charge fluctuations that impair the applications of the original stochastic DFT approach. We demonstrate the new approach on a fullerene dimer and on clusters of water molecules and show that the density of states and the total energy can be accurately described with a relatively small number of stochastic orbitals.

  19. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    SciTech Connect

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  20. Stochastic description of quantum Brownian dynamics

    NASA Astrophysics Data System (ADS)

    Yan, Yun-An; Shao, Jiushu

    2016-08-01

    Classical Brownian motion has been well investigated since the pioneering work of Einstein, which inspired mathematicians to lay the theoretical foundation of stochastic processes. A stochastic formulation for the quantum dynamics of dissipative systems described by the system-plus-bath model has been developed and has found many applications in chemical dynamics, spectroscopy, quantum transport, and other fields. This article provides a tutorial review of the stochastic formulation for quantum dissipative dynamics. The key idea is to decouple the interaction between the system and the bath by virtue of the Hubbard-Stratonovich transformation or Itô calculus, so that the system and the bath are not directly entangled during evolution; rather, they are correlated through the complex white noises introduced. The influence of the bath on the system is thereby defined by an induced stochastic field, which leads to the stochastic Liouville equation for the system. The exact reduced density matrix can be calculated as the stochastic average in the presence of bath-induced fields. In general, the plain implementation of the stochastic formulation is only useful for short-time dynamics, and is inefficient for long-time dynamics because the statistical errors grow rapidly. For linear and other specific systems, the stochastic Liouville equation is a good starting point to derive the master equation. For general systems with decomposable bath-induced processes, the hierarchical approach, in the form of a set of deterministic equations of motion, is derived based on the stochastic formulation and provides an effective means of simulating the dissipative dynamics. A combination of the stochastic simulation and the hierarchical approach is suggested to solve the zero-temperature dynamics of the spin-boson model. This scheme correctly describes the coherent-incoherent transition (Toulouse limit) at moderate dissipation and predicts a rate dynamics in the overdamped regime. Challenging problems

  1. An efficient distribution method for nonlinear transport problems in stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, F.; Tchelepi, H.; Meyer, D. W.

    2015-12-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient for exploring possible scenarios and assessing risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational cost (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, information that is rarely available from other approaches yet crucial, such as the probability of rare events and saturation quantiles (e.g. P10, P50 and P90), can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.

  2. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    SciTech Connect

    Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.

    2015-09-15

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
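
    The multilevel idea in its simplest two-level form: estimate the expectation of a high-fidelity output as the expectation of a cheap low-fidelity output plus a correction computed from far fewer samples. The toy functions below merely stand in for the reduced-basis and HDG solvers; they are not from the paper.

    ```python
    # Schematic two-level variance reduction using E[H] = E[L] + E[H - L]:
    # many cheap samples of L, few expensive samples of the correction H - L.
    import numpy as np

    rng = np.random.default_rng(3)

    def high_fidelity(z):            # stand-in for the expensive HDG solve
        return np.sin(z) + 0.05 * z**2

    def low_fidelity(z):             # stand-in for the cheap reduced-basis solve
        return np.sin(z)

    n_cheap, n_corr = 100_000, 500
    z_cheap = rng.normal(size=n_cheap)
    z_corr = rng.normal(size=n_corr)

    # Estimate each term of E[L] + E[H - L] at its own sample size.
    est = low_fidelity(z_cheap).mean() + \
          (high_fidelity(z_corr) - low_fidelity(z_corr)).mean()

    ref = high_fidelity(rng.normal(size=1_000_000)).mean()  # brute-force check
    print(f"two-level estimate: {est:.5f}   reference: {ref:.5f}")
    ```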

  3. Monte Carlo surface flux tallies

    SciTech Connect

    Favorite, Jeffrey A

    2010-11-19

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
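
    The cutoff treatment is easy to exercise in isolation. In the sketch below, crossings with |μ| above the cutoff score 1/|μ| and grazing crossings score the constant 2/μc (division by half the cutoff); an isotropic angular flux is assumed so the exact answer is known. This is a toy tally, not a transport calculation.

    ```python
    # Toy check of the cosine-cutoff surface-flux tally.
    import numpy as np

    rng = np.random.default_rng(11)
    mu_c, n = 0.1, 1_000_000   # cosine cutoff (user-chosen), number of crossings

    # For an isotropic angular flux the crossing density is proportional to
    # mu, so crossing cosines can be sampled as mu = sqrt(U), U ~ U(0, 1).
    mu = np.sqrt(rng.random(n))

    score = np.where(mu > mu_c, 1.0 / mu, 2.0 / mu_c)  # standard cutoff treatment
    print(f"tally estimate: {score.mean():.4f}  (exact value in these units: 2.0)")
    ```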

  4. Quantifying the Effect of Undersampling in Monte Carlo Simulations Using SCALE

    SciTech Connect

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This study explores the effect of undersampling in Monte Carlo calculations on tally estimates and tally variance estimates for burnup credit applications. Steady-state Monte Carlo simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity and the impact of undersampling on eigenvalue and flux estimates was examined. Using an inadequate number of particle histories in each generation was found to produce an approximately 100 pcm bias in the eigenvalue estimates, and biases that exceeded 10% in fuel pin flux estimates.

  5. Automated-biasing approach to Monte Carlo shipping-cask calculations

    SciTech Connect

    Hoffman, T.J.; Tang, J.S.; Parks, C.V.; Childs, R.L.

    1982-01-01

    Computer Sciences at Oak Ridge National Laboratory, under a contract with the Nuclear Regulatory Commission, has developed the SCALE system for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems. During the early phase of shielding development in SCALE, it was established that Monte Carlo calculations of radiation levels exterior to a spent fuel shipping cask would be extremely expensive. This cost can be substantially reduced by proper biasing of the Monte Carlo histories. The purpose of this study is to develop and test an automated biasing procedure for the MORSE-SGC/S module of the SCALE system.

  6. Stochastic Optimal Control for Series Hybrid Electric Vehicles

    SciTech Connect

    Malikopoulos, Andreas

    2013-01-01

    Increasing demand for improving fuel economy and reducing emissions has stimulated significant research and investment in hybrid propulsion systems. In this paper, we address the problem of optimizing online the supervisory control in a series hybrid configuration by modeling its operation as a controlled Markov chain using the average cost criterion. We treat the stochastic optimal control problem as a dual constrained optimization problem. We show that the control policy that assigns higher probability to the states with low cost and lower probability to the states with high cost is an optimal control policy, defined as an equilibrium control policy. We demonstrate the effectiveness and efficiency of the proposed controller in a series hybrid configuration and compare it with a thermostat-type controller.

  7. Assessing predictability of a hydrological stochastic-dynamical system

    NASA Astrophysics Data System (ADS)

    Gelfan, Alexander

    2014-05-01

    to those of the corresponding series of actual data measured at the station. Beginning from the initial conditions and forced by Monte Carlo-generated synthetic meteorological series, the model simulated diverging trajectories of soil moisture characteristics (water content of the soil column, moisture of different soil layers, etc.). The limit of predictability of a specific characteristic was determined through the time at which the variance of the characteristic between the trajectories stabilizes as they move away from the initial state. Numerical experiments were carried out with the stochastic-dynamical model to analyze the sensitivity of the soil moisture predictability assessments to uncertainty in the initial conditions, and to determine the effects of soil hydraulic properties and soil freezing on the predictability. It was found, in particular, that soil water content predictability is sensitive to errors in the initial conditions and strongly depends on the hydraulic properties of the soil under both unfrozen and frozen conditions. Even if the initial conditions are well established, the assessed predictability of the water content of unfrozen soil does not exceed 30-40 days, while for frozen conditions it may be as long as 3-4 months. The latter creates an opportunity for utilizing the autumn water content of soil as a predictor of spring snowmelt runoff in the region under consideration.

  8. Fuel pin

    DOEpatents

    Christiansen, D.W.; Karnesky, R.A.; Leggett, R.D.; Baker, R.B.

    1987-11-24

    A fuel pin for a liquid metal nuclear reactor is provided. The fuel pin includes a generally cylindrical cladding member with metallic fuel material disposed therein. At least a portion of the fuel material extends radially outwardly to the inner diameter of the cladding member to promote efficient transfer of heat to the reactor coolant system. The fuel material defines at least one void space therein to facilitate swelling of the fuel material during fission.

  9. Structural model uncertainty in stochastic simulation

    SciTech Connect

    McKay, M.D.; Morrison, J.D.

    1997-09-01

    Prediction uncertainty in stochastic simulation models can be described by a hierarchy of components: stochastic variability at the lowest level, input and parameter uncertainty at a higher level, and structural model uncertainty at the top. It is argued that a usual paradigm for analysis of input uncertainty is not suitable for application to structural model uncertainty. An approach more likely to produce an acceptable methodology for analyzing structural model uncertainty is one that uses characteristics specific to the particular family of models.

  10. Complexity and synchronization in stochastic chaotic systems

    NASA Astrophysics Data System (ADS)

    Son Dang, Thai; Palit, Sanjay Kumar; Mukherjee, Sayan; Hoang, Thang Manh; Banerjee, Santo

    2016-02-01

    We investigate the complexity of a hyperchaotic dynamical system perturbed by noise and various nonlinear speech and music signals. The complexity is measured by the weighted recurrence entropy of the hyperchaotic and stochastic systems. The synchronization phenomenon between two stochastic systems with complex coupling is also investigated. These criteria are tested on chaotic and perturbed systems by mean conditional recurrence and normalized synchronization error. Numerical results, including surface plots, normalized synchronization errors, and complexity variations, show the effectiveness of the proposed analysis.

  11. Desynchronization of stochastically synchronized chemical oscillators

    SciTech Connect

    Snari, Razan; Tinsley, Mark R.; Faramarzi, Sadegh; Showalter, Kenneth; Wilson, Dan; Moehlis, Jeff; Netoff, Theoden Ivan

    2015-12-15

    Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.

  12. Sequential decision analysis for nonstationary stochastic processes

    NASA Technical Reports Server (NTRS)

    Schaefer, B.

    1974-01-01

    A formulation of the problem of making decisions concerning the state of nonstationary stochastic processes is given. An optimal decision rule, for the case in which the stochastic process is independent of the decisions made, is derived. It is shown that this rule is a generalization of the Bayesian likelihood ratio test, and an analog to Wald's sequential likelihood ratio test is given, in which the optimal thresholds may vary with time.
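
    For reference, a minimal implementation of the stationary special case that the rule above generalizes: Wald's sequential probability ratio test for two simple Gaussian hypotheses (an assumed example), sampling until the log-likelihood ratio exits [log B, log A].

    ```python
    # Wald SPRT for H0: mean mu0 vs H1: mean mu1 with known sigma.
    import numpy as np

    rng = np.random.default_rng(5)
    alpha, beta = 0.05, 0.05                  # target error probabilities
    logA = np.log((1 - beta) / alpha)         # upper threshold
    logB = np.log(beta / (1 - alpha))         # lower threshold
    mu0, mu1, sigma = 0.0, 0.5, 1.0           # hypotheses (assumed)

    def sprt(true_mu):
        llr, n = 0.0, 0
        while logB < llr < logA:
            x = rng.normal(true_mu, sigma)
            n += 1
            llr += ((x - mu0)**2 - (x - mu1)**2) / (2 * sigma**2)  # log f1/f0
        return llr >= logA, n

    results = [sprt(mu1) for _ in range(1000)]      # data generated under H1
    power = np.mean([accept for accept, _ in results])
    asn = np.mean([n for _, n in results])
    print(f"power ~ {power:.3f}, average sample number ~ {asn:.1f}")
    ```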

  13. Stability of Stochastic Neutral Cellular Neural Networks

    NASA Astrophysics Data System (ADS)

    Chen, Ling; Zhao, Hongyong

    In this paper, we study a class of stochastic neutral cellular neural networks. By constructing a suitable Lyapunov functional and employing the nonnegative semi-martingale convergence theorem, we give some sufficient conditions ensuring the almost sure exponential stability of the networks. The results obtained are helpful for designing networks that remain stable when stochastic noise is taken into consideration. Finally, two examples are provided to show the correctness of our analysis.

  14. Desynchronization of stochastically synchronized chemical oscillators.

    PubMed

    Snari, Razan; Tinsley, Mark R; Wilson, Dan; Faramarzi, Sadegh; Netoff, Theoden Ivan; Moehlis, Jeff; Showalter, Kenneth

    2015-12-01

    Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed. PMID:26723155

  15. Desynchronization of stochastically synchronized chemical oscillators

    NASA Astrophysics Data System (ADS)

    Snari, Razan; Tinsley, Mark R.; Wilson, Dan; Faramarzi, Sadegh; Netoff, Theoden Ivan; Moehlis, Jeff; Showalter, Kenneth

    2015-12-01

    Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.

  16. Stochastic resonance during a polymer translocation process.

    PubMed

    Mondal, Debasish; Muthukumar, M

    2016-04-14

    We have studied the occurrence of stochastic resonance when a flexible polymer chain undergoes a single-file translocation through a nano-pore separating two spherical cavities, under a time-periodic external driving force. The translocation of the chain is controlled by a free energy barrier determined by chain length, pore length, pore-polymer interaction, and confinement inside the donor and receiver cavities. The external driving force is characterized by a frequency and amplitude. By combining the Fokker-Planck formalism for polymer translocation and a two-state model for stochastic resonance, we have derived analytical formulas for criteria for emergence of stochastic resonance during polymer translocation. We show that no stochastic resonance is possible if the free energy barrier for polymer translocation is purely entropic in nature. The polymer chain exhibits stochastic resonance only in the presence of an energy threshold in terms of polymer-pore interactions. Once stochastic resonance is feasible, the chain entropy controls the optimal synchronization conditions significantly. PMID:27083746

  17. Stochastic resonance during a polymer translocation process

    NASA Astrophysics Data System (ADS)

    Mondal, Debasish; Muthukumar, M.

    2016-04-01

    We have studied the occurrence of stochastic resonance when a flexible polymer chain undergoes a single-file translocation through a nano-pore separating two spherical cavities, under a time-periodic external driving force. The translocation of the chain is controlled by a free energy barrier determined by chain length, pore length, pore-polymer interaction, and confinement inside the donor and receiver cavities. The external driving force is characterized by a frequency and amplitude. By combining the Fokker-Planck formalism for polymer translocation and a two-state model for stochastic resonance, we have derived analytical formulas for criteria for emergence of stochastic resonance during polymer translocation. We show that no stochastic resonance is possible if the free energy barrier for polymer translocation is purely entropic in nature. The polymer chain exhibits stochastic resonance only in the presence of an energy threshold in terms of polymer-pore interactions. Once stochastic resonance is feasible, the chain entropy controls the optimal synchronization conditions significantly.

  18. Automated Flight Routing Using Stochastic Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Morando, Alex; Grabbe, Shon

    2010-01-01

    Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm, based on stochastic dynamic programming, that reroutes flights in the presence of winds, enroute convective weather, and congested airspace. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed to calculate current and future airspace demand. The optimal routes minimize the total expected cost of travel time, weather incursion, and induced congestion. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both have similar total flight distances. The stochastic rerouting algorithm accounts for all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight times, and both spend about 1% of their travel time crossing congested enroute sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
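
    As a hedged illustration of the stochastic dynamic programming idea (not the paper's algorithm, which works on real trajectory and weather data), the sketch below runs backward induction on a small hypothetical airspace grid in which each cell carries a probability of convective weather; the expected incursion penalty enters the stage cost, and the optimal policy is then rolled forward to produce a route. The grid size, penalty, and weather probabilities are invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    n_stages, n_rows = 12, 9                               # hypothetical grid
    p_wx = rng.uniform(0.0, 0.6, size=(n_stages, n_rows))  # P(weather in cell)
    penalty, step_cost = 10.0, 1.0                         # assumed cost units

    # Backward induction: J[k, r] = minimal expected cost-to-go from row r
    # at stage k; moves are climb/hold/descend one row per stage.
    J = np.zeros((n_stages, n_rows))
    J[-1] = step_cost + penalty * p_wx[-1]
    policy = np.zeros((n_stages, n_rows), dtype=int)
    for k in range(n_stages - 2, -1, -1):
        for r in range(n_rows):
            best = np.inf
            for dr in (-1, 0, 1):
                nxt = r + dr
                if 0 <= nxt < n_rows and J[k + 1, nxt] < best:
                    best, policy[k, r] = J[k + 1, nxt], dr
            J[k, r] = step_cost + penalty * p_wx[k, r] + best

    # Roll the optimal policy forward from the middle row.
    r = n_rows // 2
    route = [r]
    for k in range(n_stages - 1):
        r += policy[k, r]
        route.append(r)
    print("expected cost:", round(J[0, route[0]], 2), "route rows:", route)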

  19. Stochastic models of intracellular transport

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.; Newby, Jay M.

    2013-01-01

    The interior of a living cell is a crowded, heterogeneous, fluctuating environment. Hence, a major challenge in modeling intracellular transport is to analyze stochastic processes within complex environments. Broadly speaking, there are two basic mechanisms for intracellular transport: passive diffusion and motor-driven active transport. Diffusive transport can be formulated in terms of the motion of an overdamped Brownian particle. On the other hand, active transport requires chemical energy, usually in the form of adenosine triphosphate hydrolysis, and can be direction specific, allowing biomolecules to be transported long distances; this is particularly important in neurons due to their complex geometry. In this review a wide range of analytical methods and models of intracellular transport is presented. In the case of diffusive transport, narrow escape problems, diffusion to a small target, confined and single-file diffusion, homogenization theory, and fractional diffusion are considered. In the case of active transport, Brownian ratchets, random walk models, exclusion processes, random intermittent search processes, quasi-steady-state reduction methods, and mean-field approximations are considered. Applications include receptor trafficking, axonal transport, membrane diffusion, nuclear transport, protein-DNA interactions, virus trafficking, and the self-organization of subcellular structures.
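
    The overdamped Brownian particle formulation of diffusive transport invites a quick numerical check: integrating dX = sqrt(2D) dW with the Euler-Maruyama scheme for free diffusion, the mean-squared displacement should grow as 2Dt. The diffusion coefficient, time step, and ensemble size below are illustrative, not taken from the review.

    import numpy as np

    rng = np.random.default_rng(2)
    D, dt, n_steps, n_particles = 0.5, 1e-3, 5000, 2000  # illustrative values

    # Euler-Maruyama for dX = sqrt(2 D) dW (no drift, free diffusion).
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)

    t = n_steps * dt
    print(f"measured MSD: {np.mean(x**2):.4f}   theory 2*D*t: {2 * D * t:.4f}")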

  20. Stochastic slowdown in evolutionary processes.

    PubMed

    Altrock, Philipp M; Gokhale, Chaitanya S; Traulsen, Arne

    2010-07-01

    We examine birth-death processes with state-dependent transition probabilities and at least one absorbing boundary. In evolution, this describes selection acting on two different types in a finite population where reproductive events occur successively. If the two types have equal fitness, the system performs a random walk. If one type has a fitness advantage, it is favored by selection, which introduces a bias (asymmetry) in the transition probabilities. How long does it take until advantageous mutants have invaded and taken over? Surprisingly, we find that the average time of such a process can increase, even if the mutant type always has a fitness advantage. We discuss this finding for the Moran process and develop a simplified model which allows a more intuitive understanding. We show that this effect can occur for weak but nonvanishing bias (selection) in the state-dependent transition rates and infer the scaling with system size. We also address the Wright-Fisher model commonly used in population genetics, showing that this stochastic slowdown is not restricted to birth-death processes. PMID:20866666
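
    A direct way to probe the slowdown is to simulate the Moran process and measure the mean number of elementary birth-death events until a single mutant with relative fitness r fixes, conditioned on fixation. The sketch below does this for a neutral mutant and a weakly advantageous one; the population size, fitness values, and run counts are illustrative, and with these small samples the weak-selection excess over the neutral time may sit within sampling error.

    import numpy as np

    rng = np.random.default_rng(3)

    def conditional_fixation_time(N, r, runs=500):
        """Mean elementary Moran steps until one mutant of fitness r fixes,
        averaged over runs in which the mutant actually fixes."""
        times = []
        while len(times) < runs:
            i, t = 1, 0
            while 0 < i < N:
                t_plus = (r * i / (r * i + N - i)) * (N - i) / N  # mutants +1
                t_minus = ((N - i) / (r * i + N - i)) * i / N     # mutants -1
                u = rng.random()
                if u < t_plus:
                    i += 1
                elif u < t_plus + t_minus:
                    i -= 1
                t += 1
            if i == N:              # condition on fixation, discard extinctions
                times.append(t)
        return np.mean(times)

    N = 30
    print("neutral   r = 1.00:", conditional_fixation_time(N, 1.00))
    print("weak adv. r = 1.02:", conditional_fixation_time(N, 1.02))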