Science.gov

Sample records for fuel stochastic monte

  1. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2). PMID:27415383

  2. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).
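
    As an illustration of the flat-histogram machinery behind SAMC, the sketch below (Python) runs the one-dimensional update on a toy periodic Ising chain; the chain, the bin layout and the gain sequence t0/max(t0, t) are illustrative assumptions, not the authors' multidimensional implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 32                                   # toy 1D Ising chain (hypothetical test system)
    spins = rng.choice([-1, 1], size=N)

    def energy(s):
        return -np.sum(s * np.roll(s, 1))    # periodic chain; E changes in steps of 4

    bins = np.arange(-N, N + 1, 4)           # energy bins for the density of states
    theta = np.zeros(len(bins))              # running estimate of log g(E)
    t0 = 1000.0
    E = energy(spins)
    for t in range(1, 200000):
        gamma = t0 / max(t0, t)              # decreasing SAMC gain sequence
        i = rng.integers(N)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        b_old = np.digitize(E, bins) - 1
        b_new = np.digitize(E + dE, bins) - 1
        # flat-histogram acceptance: favor rarely visited energy bins
        if np.log(rng.random()) < theta[b_old] - theta[b_new]:
            spins[i] *= -1
            E += dE
        theta[np.digitize(E, bins) - 1] += gamma   # stochastic-approximation update
        theta -= theta.mean()                # fix the arbitrary additive constant
    ```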

  3. Optimization of Monte Carlo transport simulations in stochastic media

    SciTech Connect

    Liang, C.; Ji, W.

    2012-07-01

    This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, such as Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
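
    The RSA idea with a cell-list nearest-neighbor search can be sketched as follows (Python, mono-sized spheres only; the cell width, box size and function names are assumptions for illustration, not the paper's optimized poly-sized algorithm).

    ```python
    import numpy as np

    def rsa_pack(box, radius, n_target, rng, max_tries=200000):
        """Random Sequential Addition of equal spheres; a cell list keeps the
        overlap test local so each insertion checks only 27 neighboring cells."""
        cell = 2 * radius                    # cell width = one sphere diameter
        grid = {}                            # (i,j,k) cell -> list of centers
        centers = []
        for _ in range(max_tries):
            if len(centers) >= n_target:
                break
            p = rng.uniform(radius, box - radius, 3)
            idx = tuple((p // cell).astype(int))
            ok = True
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    for dk in (-1, 0, 1):
                        for q in grid.get((idx[0] + di, idx[1] + dj, idx[2] + dk), []):
                            if np.sum((p - q) ** 2) < (2 * radius) ** 2:
                                ok = False   # overlap: reject this candidate
            if ok:
                centers.append(p)
                grid.setdefault(idx, []).append(p)
        return np.array(centers)

    pebbles = rsa_pack(box=10.0, radius=0.3, n_target=500, rng=np.random.default_rng(1))
    ```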

  4. Stabilized multilevel Monte Carlo method for stiff stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Abdulle, Assyr; Blumenthal, Adrian

    2013-10-01

    A multilevel Monte Carlo (MLMC) method for mean square stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit methods deteriorates because the time step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while keeping the computational algorithm fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
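
    To make the MLMC structure concrete, here is a minimal sketch of the level-l coupling and the telescoping estimator for a scalar SDE (Python; the Euler-Maruyama integrator and the geometric-Brownian-motion coefficients are placeholders, and the paper's contribution is precisely to replace this classical explicit integrator with a stabilized one).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mlmc_level(l, M, T=1.0, x0=1.0, mu=-1.0, sigma=0.5):
        """Samples of P_l - P_{l-1} for payoff P = X(T), dX = mu*X dt + sigma*X dW.
        Fine path: 2**l Euler steps; the coarse path reuses the summed increments."""
        nf, dt = 2 ** l, T / 2 ** l
        xf = np.full(M, x0)
        xc = np.full(M, x0)
        dW_prev = np.zeros(M)
        for n in range(nf):
            dW = rng.normal(0.0, np.sqrt(dt), M)
            xf += mu * xf * dt + sigma * xf * dW
            if l > 0 and n % 2 == 1:         # one coarse step per two fine steps
                xc += mu * xc * (2 * dt) + sigma * xc * (dW_prev + dW)
            dW_prev = dW
        return xf - xc if l > 0 else xf      # level 0: plain estimate, no coarse path

    # telescoping estimator: E[P_L] ~ sum over levels of the mean corrections
    levels = 6
    est = sum(mlmc_level(l, M=40000 >> l).mean() for l in range(levels))
    ```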

  5. Semi-stochastic full configuration interaction quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Holmes, Adam; Petruzielo, Frank; Khadilkar, Mihir; Changlani, Hitesh; Nightingale, M. P.; Umrigar, C. J.

    2012-02-01

    In the recently proposed full configuration interaction quantum Monte Carlo (FCIQMC) [1,2], the ground state is projected out stochastically, using a population of walkers each of which represents a basis state in the Hilbert space spanned by Slater determinants. The infamous fermion sign problem manifests itself in the fact that walkers of either sign can be spawned on a given determinant. We propose an improvement on this method in the form of a hybrid stochastic/deterministic technique, which we expect will improve the efficiency of the algorithm by ameliorating the sign problem. We test the method on atoms and molecules, e.g., carbon, the carbon dimer, the N2 molecule, and stretched N2. [1] George Booth, Alex Thom, and Ali Alavi. Fermion Monte Carlo without fixed nodes: a game of Life, death and annihilation in Slater determinant space. J. Chem. Phys. 131, 054106 (2009). [2] Deidre Cleland, George Booth, and Ali Alavi. Survival of the fittest: Accelerating convergence in full configuration-interaction quantum Monte Carlo. J. Chem. Phys. 132, 041103 (2010).

  6. Monte Carlo solution for uncertainty propagation in particle transport with a stochastic Galerkin method

    SciTech Connect

    Franke, B. C.; Prinja, A. K.

    2013-07-01

    The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)

  7. Stochastic Kinetic Monte Carlo algorithms for long-range Hamiltonians

    SciTech Connect

    Mason, D R; Rudd, R E; Sutton, A P

    2003-10-13

    We present a higher order kinetic Monte Carlo methodology suitable to model the evolution of systems in which the transition rates are non-trivial to calculate or in which Monte Carlo moves are likely to be non-productive flicker events. The second order residence time algorithm first introduced by Athenes et al. [1] is rederived from the n-fold way algorithm of Bortz et al. [2] as a fully stochastic algorithm. The second order algorithm can be dynamically called when necessary to eliminate unproductive flickering between a metastable state and its neighbors. An algorithm combining elements of the first order and second order methods is shown to be more efficient, in terms of the number of rate calculations, than the first order or second order methods alone while remaining statistically identical. This efficiency is of prime importance when dealing with computationally expensive rate functions such as those arising from long-range Hamiltonians. Our algorithm has been developed for use when considering simulations of vacancy diffusion under the influence of elastic stress fields. We demonstrate the improved efficiency of the method over that of the n-fold way in simulations of vacancy diffusion in alloys. Our algorithm is seen to be an order of magnitude more efficient than the n-fold way in these simulations. We show that when magnesium is added to an Al-2at.%Cu alloy, this has the effect of trapping vacancies. When trapping occurs, we see that our algorithm performs thousands of events for each rate calculation performed.
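
    For reference, the first-order n-fold way (residence-time) step that this work builds on is sketched below (Python; the rate table is a placeholder, and the second-order, flicker-eliminating variant is not reproduced here).

    ```python
    import numpy as np

    def kmc_step(rates, rng):
        """One n-fold way (BKL) residence-time step: pick an event with
        probability proportional to its rate, then advance the clock by an
        exponentially distributed waiting time with mean 1/total_rate."""
        cum = np.cumsum(rates)
        total = cum[-1]
        event = np.searchsorted(cum, rng.random() * total)
        dt = -np.log(rng.random()) / total
        return event, dt

    rng = np.random.default_rng(3)
    t = 0.0
    for _ in range(10):
        rates = np.array([0.1, 2.0, 0.5, 0.02])   # hypothetical transition rates
        event, dt = kmc_step(rates, rng)
        t += dt
    ```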

  8. Stochastic resonance phenomenon in Monte Carlo simulations of silver adsorbed on gold

    NASA Astrophysics Data System (ADS)

    Gimenez, María Cecilia

    2016-03-01

    The possibility of observing the stochastic resonance phenomenon was analyzed by means of Monte Carlo simulations of silver adsorbed on (100) gold surfaces. The coverage degree was studied as a function of the periodic variation of the chemical potential. The signal-to-noise ratio was studied as a function of the amplitude and frequency of the chemical potential and of the temperature. When the signal-to-noise ratio is plotted as a function of temperature, a maximum is found, indicating the possible presence of stochastic resonance.

  9. Monte Carlo Hybrid Applied to Binary Stochastic Mixtures

    Energy Science and Technology Software Center (ESTSC)

    2008-08-11

    The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.
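
    The weight bounding itself typically amounts to the generic split/roulette rules sketched here (Python; w_low and w_high stand for the window bounds derived from the deterministic flux, and the survival-weight choice is an assumption, not necessarily this package's exact logic).

    ```python
    import numpy as np

    def apply_weight_window(weight, w_low, w_high, rng):
        """Weight-window check applied after a Monte Carlo particle event:
        split particles that are too heavy, roulette those that are too light.
        Returns the list of surviving particle weights (possibly empty)."""
        if weight > w_high:                          # split into n lighter copies
            n = int(np.ceil(weight / w_high))
            return [weight / n] * n
        if weight < w_low:                           # Russian roulette
            w_survive = 0.5 * (w_low + w_high)       # assumed survival weight
            if rng.random() < weight / w_survive:
                return [w_survive]
            return []
        return [weight]                              # inside the window: unchanged
    ```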

  10. Semi-stochastic full configuration interaction quantum Monte Carlo: Developments and application.

    PubMed

    Blunt, N S; Smart, Simon D; Kersten, J A F; Spencer, J S; Booth, George H; Alavi, Ali

    2015-05-14

    We expand upon the recent semi-stochastic adaptation to full configuration interaction quantum Monte Carlo (FCIQMC). We present an alternate method for generating the deterministic space without a priori knowledge of the wave function and present stochastic efficiencies for a variety of both molecular and lattice systems. The algorithmic details of an efficient semi-stochastic implementation are presented, with particular consideration given to the effect that the adaptation has on parallel performance in FCIQMC. We further demonstrate the benefit for calculation of reduced density matrices in FCIQMC through replica sampling, where the semi-stochastic adaptation seems to have even larger efficiency gains. We then combine these ideas to produce explicitly correlated corrected FCIQMC energies for the beryllium dimer, for which stochastic errors on the order of wavenumber accuracy are achievable. PMID:25978883

  11. Semi-stochastic full configuration interaction quantum Monte Carlo: Developments and application

    SciTech Connect

    Blunt, N. S. Kersten, J. A. F.; Smart, Simon D.; Spencer, J. S.; Booth, George H.; Alavi, Ali

    2015-05-14

    We expand upon the recent semi-stochastic adaptation to full configuration interaction quantum Monte Carlo (FCIQMC). We present an alternate method for generating the deterministic space without a priori knowledge of the wave function and present stochastic efficiencies for a variety of both molecular and lattice systems. The algorithmic details of an efficient semi-stochastic implementation are presented, with particular consideration given to the effect that the adaptation has on parallel performance in FCIQMC. We further demonstrate the benefit for calculation of reduced density matrices in FCIQMC through replica sampling, where the semi-stochastic adaptation seems to have even larger efficiency gains. We then combine these ideas to produce explicitly correlated corrected FCIQMC energies for the beryllium dimer, for which stochastic errors on the order of wavenumber accuracy are achievable.

  12. Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2013-04-01

    The stochastic volatility (SV) model is one of the volatility models that infer the latent volatility of asset returns. Bayesian inference of the SV model is performed by the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling the volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. Then we calculate the accuracy of the volatility measurement using the realized volatility as a proxy of the true volatility and compare the SV model with the GARCH model, another common volatility model. Using the accuracy calculated with the realized volatility, we find that empirically the SV model performs better than the GARCH model.
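
    The HMC update at the core of such a study can be sketched as follows (Python; logp and grad stand in for the SV-model log-posterior over the latent log-volatilities and its gradient, and the step size and trajectory length are illustrative).

    ```python
    import numpy as np

    def hmc_sample(x, logp, grad, eps=0.01, n_leap=50, rng=None):
        """One hybrid/Hamiltonian Monte Carlo update for a target log-density
        logp with gradient grad: refresh momenta, integrate Hamiltonian
        dynamics with the leapfrog scheme, then apply a Metropolis test."""
        rng = rng or np.random.default_rng()
        p = rng.normal(size=x.shape)                 # refresh momenta
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * eps * grad(x_new)             # initial half kick
        for _ in range(n_leap - 1):
            x_new += eps * p_new                     # drift
            p_new += eps * grad(x_new)               # full kick
        x_new += eps * p_new
        p_new += 0.5 * eps * grad(x_new)             # final half kick
        # log acceptance ratio = -(H_new - H_old), H = -logp + |p|^2 / 2
        dH = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
        return x_new if np.log(rng.random()) < dH else x
    ```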

  13. A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

    SciTech Connect

    Keady, K P; Brantley, P

    2010-03-04

    Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model

  14. Stochastic Monte-Carlo Markov Chain Inversions on Models Regionalized Using Receiver Functions

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Maceira, M.; Kato, Y.; Bodin, T.; Calo, M.; Romanowicz, B. A.; Chai, C.; Ammon, C. J.

    2014-12-01

    There is currently a strong interest in stochastic approaches to seismic modeling - versus deterministic methods such as gradient methods - due to the ability of these methods to better deal with highly non-linear problems. Another advantage of stochastic methods is that they allow estimation of the a posteriori probability distribution of the derived parameters, i.e., the Bayesian inversion envisioned by Tarantola, allowing quantification of the solution error. The price stochastic methods pay is that they require testing thousands of variations of each unknown parameter and their associated weights to ensure reliable probabilistic inferences. Even with the best High-Performance Computing resources available, 3D stochastic full waveform modeling at the regional scale still remains out of reach. We are exploring regionalization as one way to reduce the dimension of the parameter space, allowing the identification of areas in the models that can be treated as one block in a subsequent stochastic inversion. Regionalization is classically performed through the identification of tectonic or structural elements. Lekic & Romanowicz (2011) proposed a new approach with a cluster analysis of tomographic velocity models instead. Here we present the results of a clustering analysis on the P-wave receiver functions used in the subsequent inversion. Different clustering algorithms and clustering qualities are tested for different datasets of North America and China. Preliminary results with the k-means clustering algorithm show that an interpolated receiver function wavefield (Chai et al., GRL, in review) improves the agreement with the geological and tectonic regions of North America compared to the traditional approach of stacked receiver functions. After regionalization, a 1D profile for each region is stochastically inferred using a parallelized code based on Monte-Carlo Markov Chains (MCMC), modeling surface-wave dispersion and receiver functions.

  15. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters have been selected by implementing the F-test evaluation and design of experiments, and then an incomplete fourth-order polynomial response surface model (RSM) has been developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.

  16. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation

    PubMed Central

    Müller, Eike H.; Scheichl, Rob; Shardlow, Tony

    2015-01-01

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy.

  17. Fuel temperature reactivity coefficient calculation by Monte Carlo perturbation techniques

    SciTech Connect

    Shim, H. J.; Kim, C. H.

    2013-07-01

    We present an efficient method to estimate the fuel temperature reactivity coefficient (FTC) by the Monte Carlo adjoint-weighted correlated sampling method. In this method, a fuel temperature change is regarded as variations of the microscopic cross sections and the temperature in the free gas model which is adopted to correct the asymptotic double differential scattering kernel. The effectiveness of the new method is examined through the continuous energy MC neutronics calculations for PWR pin cell problems. The isotope-wise and reaction-type-wise contributions to the FTCs are investigated for two free gas models - the constant scattering cross section model and the exact model. It is shown that the proposed method can efficiently predict the reactivity change due to the fuel temperature variation. (authors)

  18. Stochastic sensitivity analysis of the biosphere model for Canadian nuclear fuel waste management

    SciTech Connect

    Reid, J.A.K.; Corbett, B.J. (Whiteshell Labs.)

    1993-01-01

    The biosphere model, BIOTRAC, was constructed to assess Canada's concept for nuclear fuel waste disposal in a vault deep in crystalline rock at some as yet undetermined location in the Canadian Shield. The model is therefore very general and based on the shield as a whole. BIOTRAC is made up of four linked submodels for surface water, soil, atmosphere, and food chain and dose. The model simulates physical conditions and radionuclide flows from the discharge of a hypothetical nuclear fuel waste disposal vault through groundwater, a well, a lake, air, soil, and plants to a critical group of individuals, i.e., those who are most exposed and therefore receive the highest dose. This critical group is totally self-sufficient and is represented by the International Commission on Radiological Protection reference man for dose prediction. BIOTRAC is a dynamic model that assumes steady-state physical conditions for each simulation, and deals with variation and uncertainty through Monte Carlo simulation techniques. This paper describes SENSYV, a technique for analyzing pathway and parameter sensitivities for the BIOTRAC code run in stochastic mode. Results are presented for 129I from the disposal of used fuel, and they confirm the importance of doses via the soil/plant/man and the air/plant/man ingestion pathways. The results also indicate that the lake/well water use switch, the aquatic iodine mass loading parameter, the iodine soil evasion rate, and the iodine plant/soil concentration ratio are important parameters.

  19. A stochastic model updating method for parameter variability quantification based on response surface models and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Ren, Wei-Xin; Perera, Ricardo

    2012-11-01

    Stochastic model updating must be considered for quantifying uncertainties inherently existing in real-world engineering structures. By this means the statistical properties of structural parameters, rather than deterministic values, can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in the aspects of theoretical complexity and low computational efficiency. This study attempts to propose a simple and cost-efficient method by decomposing a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting the deterministic values of the parameters. Then the parameter means and variances can be statistically estimated from all the parameter predictions obtained by running all the samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of parameter variability. The proposed method has been demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method presents similar accuracy while its primary merits consist in its simple implementation and cost efficiency in response computation and inverse optimization.
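
    A minimal sketch of the decomposition into deterministic inverse problems (Python; the quadratic response surface, the measured response statistics and the starting point are invented placeholders standing in for the fitted RSM and test data).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)

    # Hypothetical quadratic response surface, fitted beforehand to FE runs:
    # maps two stiffness parameters to two predicted modal frequencies.
    def rsm(theta):
        k1, k2 = theta
        return np.array([3.0 * k1 + 0.4 * k2 + 0.05 * k1 * k2,
                         1.2 * k1 + 2.5 * k2 + 0.02 * k2 ** 2])

    # Monte Carlo samples drawn from the measured response distribution
    mu_meas, sigma_meas = np.array([35.0, 28.0]), np.array([0.5, 0.4])
    samples = rng.normal(mu_meas, sigma_meas, size=(500, 2))

    # one deterministic inverse problem per sample (Nelder-Mead simplex)
    thetas = np.array([minimize(lambda th, y=y: np.sum((rsm(th) - y) ** 2),
                                x0=[8.0, 8.0], method="Nelder-Mead").x
                       for y in samples])

    # statistics of the parameter predictions over all samples
    theta_mean, theta_cov = thetas.mean(axis=0), np.cov(thetas.T)
    ```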

  20. Chaotic versus nonchaotic stochastic dynamics in Monte Carlo simulations: a route for accurate energy differences in N-body systems.

    PubMed

    Assaraf, Roland; Caffarel, Michel; Kollias, A C

    2011-04-15

    We present a method to efficiently evaluate small energy differences of two close N-body systems by employing stochastic processes having a stability versus chaos property. By using the same random noise, energy differences are computed from close trajectories without reweighting procedures. The approach is presented for quantum systems but can be applied to classical N-body systems as well. It is exemplified with diffusion Monte Carlo simulations for long chains of hydrogen atoms and molecules for which it is shown that the long-standing problem of computing energy derivatives is solved. PMID:21568537

  1. Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations

    NASA Astrophysics Data System (ADS)

    Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.

    2014-06-01

    Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.

  2. Monte Carlo method based radiative transfer simulation of stochastic open forest generated by circle packing application

    NASA Astrophysics Data System (ADS)

    Jin, Shengye; Tamura, Masayuki

    2013-10-01

    Monte Carlo Ray Tracing (MCRT) is a versatile method for simulating the radiative transfer regime of the Solar - Atmosphere - Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, for example a forest area. Due to its robustness to alterations in the complexity of the 3-D scene, the MCRT method is also employed for simulating the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is the set-up of the canopy scene. 3-D scanning can be used to represent canopy structure as accurately as possible, but it is time consuming. A botanical growth function can be used to model single-tree growth, but cannot express the interaction among trees. The L-System is also a functionally controlled tree-growth simulation model, but it requires a large amount of computing memory; additionally, it only models the current tree pattern rather than tree growth while the radiative transfer regime is simulated. Therefore, it is much more constructive to use regular solids such as ellipsoids, cones, and cylinders to represent single canopies. Considering the allelopathy phenomenon in some open-forest optical images, each tree repels other trees within its own `domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the tree count (N) of the 3-D scene are declared first, similar to a random open-forest image. Accordingly, we randomly generate each canopy radius (rc). Then we set the circle center coordinates on the XY-plane while keeping circles separate from each other via the circle packing algorithm. To model the individual trees, we employ Ishikawa's regressive tree-growth model to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
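
    The scene-generation step can be sketched as follows (Python; the radius range, domain size and rescale-to-coverage step are assumptions for illustration, not the authors' exact recipe).

    ```python
    import numpy as np

    def pack_canopies(coverage, n_trees, box=100.0, rng=None, max_tries=100000):
        """Stochastic circle packing sketch: draw random canopy radii, rescale
        them to the requested coverage, then place non-overlapping circle
        centers on the XY-plane (largest canopies first)."""
        rng = rng or np.random.default_rng()
        radii = rng.uniform(2.0, 5.0, n_trees)
        radii *= np.sqrt(coverage * box ** 2 / np.sum(np.pi * radii ** 2))
        centers = []
        for r in sorted(radii, reverse=True):
            for _ in range(max_tries):
                c = rng.uniform(r, box - r, 2)
                # keep every pair of canopies disjoint ("domain" repulsion)
                if all(np.hypot(*(c - c2)) >= r + r2 for c2, r2 in centers):
                    centers.append((c, r))
                    break
        return centers

    scene = pack_canopies(coverage=0.3, n_trees=40, rng=np.random.default_rng(5))
    ```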

  3. Comparing three stochastic search algorithms for computational protein design: Monte Carlo, replica exchange Monte Carlo, and a multistart, steepest-descent heuristic.

    PubMed

    Mignon, David; Simonson, Thomas

    2016-07-15

    Computational protein design depends on an energy function and an algorithm to search the sequence/conformation space. We compare three stochastic search algorithms: a heuristic, Monte Carlo (MC), and a Replica Exchange Monte Carlo method (REMC). The heuristic performs a steepest-descent minimization starting from thousands of random starting points. The methods are applied to nine test proteins from three structural families, with a fixed backbone structure, a molecular mechanics energy function, and with 1, 5, 10, 20, 30, or all amino acids allowed to mutate. Results are compared to an exact, "Cost Function Network" method that identifies the global minimum energy conformation (GMEC) in favorable cases. The designed sequences accurately reproduce experimental sequences in the hydrophobic core. The heuristic and REMC agree closely and reproduce the GMEC when it is known, with a few exceptions. Plain MC performs well for most cases, occasionally departing from the GMEC by 3-4 kcal/mol. With REMC, the diversity of the sequences sampled agrees with exact enumeration where the latter is possible: up to 2 kcal/mol above the GMEC. Beyond that, room-temperature replicas sample sequences up to 10 kcal/mol above the GMEC, providing thermal averages and a solution to the inverse protein folding problem. © 2016 Wiley Periodicals, Inc. PMID:27197555
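
    The ingredient that distinguishes REMC from plain MC is the swap move between neighboring temperature levels, sketched here (Python; beta denotes inverse temperature and the data layout is hypothetical).

    ```python
    import numpy as np

    def try_swap(replicas, energies, betas, i, rng):
        """Replica-exchange move between neighboring temperature levels i and
        i+1: accept with probability min(1, exp(dbeta * dE)), which preserves
        the Boltzmann distribution at every temperature."""
        dlog = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if np.log(rng.random()) < dlog:
            replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
    ```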

  4. Nano-structural analysis of effective transport paths in fuel-cell catalyst layers by using stochastic material network methods

    NASA Astrophysics Data System (ADS)

    Shin, Seungho; Kim, Ah-Reum; Um, Sukkee

    2016-02-01

    A two-dimensional material network model has been developed to visualize the nano-structures of fuel-cell catalysts and to search for effective transport paths for the optimal performance of fuel cells in randomly-disordered composite catalysts. Stochastic random modeling based on the Monte Carlo method is developed using random number generation processes over a catalyst layer domain at a 95% confidence level. After the post-determination process of the effective connectivity, particularly for mass transport, the effective catalyst utilization factors are introduced to determine the extent of catalyst utilization in the fuel cells. The results show that the superficial pore volume fractions of 600 trials approximate a normal distribution curve with a mean of 0.5. In contrast, the estimated volume fraction of effectively inter-connected void clusters ranges from 0.097 to 0.420, which is much smaller than the superficial porosity of 0.5 before the percolation process. Furthermore, the effective catalyst utilization factor is determined to be linearly proportional to the effective porosity. More importantly, this study reveals that the average catalyst utilization is less affected by the variations of the catalyst's particle size and the absolute catalyst loading at a fixed volume fraction of void spaces.
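
    The post-determination of effective connectivity is essentially a percolation test; one way to do it on a 2-D void/solid grid with breadth-first search is sketched below (Python; taking the top and bottom rows as inlet and outlet is an assumption for illustration).

    ```python
    import numpy as np
    from collections import deque

    def effective_porosity(grid):
        """Fraction of cells in void clusters (True cells) that connect the
        inlet (top row) to the outlet (bottom row), i.e. the effectively
        inter-connected void fraction of the material network."""
        n, m = grid.shape
        seen = np.zeros_like(grid, dtype=bool)
        effective = 0
        for j0 in range(m):
            if not grid[0, j0] or seen[0, j0]:
                continue
            queue, cluster, reaches_outlet = deque([(0, j0)]), 0, False
            seen[0, j0] = True
            while queue:                      # BFS over one void cluster
                i, j = queue.popleft()
                cluster += 1
                reaches_outlet |= (i == n - 1)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < n and 0 <= b < m and grid[a, b] and not seen[a, b]:
                        seen[a, b] = True
                        queue.append((a, b))
            if reaches_outlet:
                effective += cluster
        return effective / grid.size

    grid = np.random.default_rng(6).random((100, 100)) < 0.5   # superficial porosity 0.5
    print(effective_porosity(grid))          # effective fraction is smaller
    ```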

  5. Comparison of Ensemble Kalman Filter groundwater-data assimilation methods based on stochastic moment equations and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.

    2014-04-01

    Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
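
    For orientation, the MC-based side of the comparison, the stochastic EnKF analysis step with perturbed observations, can be sketched as follows (Python; a linear observation operator is assumed, and the moment-equation alternative is not shown).

    ```python
    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """Stochastic EnKF analysis step. X is (n_state, n_ens), y the new
        observation vector, H the linear observation operator and R the
        observation-error covariance."""
        n = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
        C = A @ A.T / (n - 1)                         # sample covariance
        K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)  # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(
            np.zeros(len(y)), R, size=n).T            # perturbed observations
        return X + K @ (Y - H @ X)                    # updated ensemble
    ```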

  6. Stochastic Inversion of Electrical Resistivity Changes Using a Markov Chain, Monte Carlo Approach

    SciTech Connect

    Ramirez, A; Nitao, J; Hanley, W; Aines, R; Glaser, R; Sengupta, S; Dyer, K; Hickling, T; Daily, W

    2004-09-21

    We describe a stochastic inversion method for mapping subsurface regions where the electrical resistivity is changing. The technique combines prior information, electrical resistance data and forward models to produce subsurface resistivity models that are most consistent with all available data. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. Attractive features include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate and, (2) allow alternative model estimates to be identified, compared and ranked. Methods that monitor convergence and summarize important trends of the posterior distribution are introduced. Results from a physical model test and a field experiment were used to assess performance. The stochastic inversions presented provide useful estimates of the most probable location, shape, and volume of the changing region, and the most likely resistivity change. The proposed method is computationally expensive, requiring the use of extensive computational resources to make its application practical.
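
    A sketch of the Metropolis core of such a stochastic inversion (Python; logpost is an abstract callable combining the prior with the forward-model data misfit, and the Gaussian random-walk proposal is an assumption).

    ```python
    import numpy as np

    def metropolis_invert(logpost, m0, steps, step_size, rng):
        """Metropolis sampler over resistivity-change models: the returned
        chain approximates the posterior, from which uncertainty measures
        and ranked alternative models can be extracted."""
        m, lp = m0.copy(), logpost(m0)
        chain = []
        for _ in range(steps):
            m_prop = m + rng.normal(0.0, step_size, size=m.shape)  # proposal
            lp_prop = logpost(m_prop)
            if np.log(rng.random()) < lp_prop - lp:                # accept/reject
                m, lp = m_prop, lp_prop
            chain.append(m.copy())
        return np.array(chain)
    ```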

  7. Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations

    SciTech Connect

    Van Siclen, Clinton D

    2007-02-01

    A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.

  8. Stochastic theory of interfacial enzyme kinetics: A kinetic Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Das, Biswajit; Gangopadhyay, Gautam

    2012-01-01

    In the spirit of Gillespie's stochastic approach, we have formulated a theory to explore the advancement of interfacial enzyme kinetics at the single-enzyme level, which is ultimately utilized to obtain the ensemble-averaged macroscopic feature, lag-burst kinetics. We provide a theory of the transition from lag-phase to burst-phase kinetics by considering the gradual development of electrostatic interaction between the positively charged enzyme and the negatively charged product molecules deposited on the phospholipid surface. It is shown that the different diffusion time scales of the enzyme over the fluid and product regions are responsible for the memory effect in the correlation of successive turnover events of the hopping mode in the single-trajectory analysis, which in turn is reflected in the non-Gaussian distribution of turnover times in the macroscopic kinetics of the lag phase, unlike the burst-phase kinetics.

  9. Stochastic modeling of polarized light scattering using a Monte Carlo based stencil method.

    PubMed

    Sormaz, Milos; Stamm, Tobias; Jenny, Patrick

    2010-05-01

    This paper deals with an efficient and accurate simulation algorithm to solve the vector Boltzmann equation for polarized light transport in scattering media. The approach is based on a stencil method, which was previously developed for unpolarized light scattering and proved to be much more efficient (speedup factors of up to 10 were reported) than classical Monte Carlo while being equally accurate. To validate the new stencil method, a substrate composed of spherical non-absorbing particles embedded in a non-absorbing medium was considered. The corresponding single-scattering Mueller matrix, which is required to model scattering of polarized light, was determined from Lorenz-Mie theory. From simulations of a reflected polarized laser beam, the Mueller matrix of the substrate was computed and compared with an established reference. The agreement is excellent, and it could be demonstrated that a significant speedup of the simulations is achieved by the stencil approach compared with classical Monte Carlo. PMID:20448777

  10. A Monte Carlo based spent fuel analysis safeguards strategy assessment

    SciTech Connect

    Fensin, Michael L; Tobin, Stephen J; Swinhoe, Martyn T; Menlove, Howard O; Sandoval, Nathan P

    2009-01-01

    Safeguarding nuclear material involves the detection of diversions of significant quantities of nuclear materials, and the deterrence of such diversions by the risk of early detection. There are a variety of motivations for quantifying plutonium in spent fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the International Atomic Energy Agency's ability to safeguard nuclear facilities, shipper/receiver difference, input accountability at reprocessing facilities, and burnup credit at repositories. Many NDA techniques exist for measuring signatures from spent fuel; however, no single NDA technique can, in isolation, quantify elemental plutonium and other actinides of interest in spent fuel. A study has been undertaken to determine the best integrated combination of cost-effective techniques for quantifying plutonium mass in spent fuel for nuclear safeguards. A standardized assessment process was developed to compare the merits and faults of 12 different detection techniques in order to integrate a few techniques and to down-select among them in preparation for experiments. The process involves generating a burnup/enrichment/cooling-time-dependent spent fuel assembly library, creating diversion scenarios, developing detector models and quantifying the capability of each NDA technique. Because hundreds of input and output files must be managed in coupling the data transitions between the different facets of the assessment process, a graphical user interface (GUI) was developed that automates the process. This GUI allows users to visually create diversion scenarios with varied replacement materials and to generate a MCNPX fixed-source detector assessment input file. The end result of the assembly library assessment is to select a set of common source terms and diversion scenarios for quantifying the capability of each of the 12 NDA techniques. We present here the generalized

  11. a Monte Carlo Assisted Simulation of Stochastic Molecular Dynamics for Folding of the Protein Crambin in a Viscous Environment

    NASA Astrophysics Data System (ADS)

    Taneri, Sencer

    We investigate the folding dynamics of the plant-seed protein Crambin in a liquid environment, which is usually water with a certain viscosity. Taking the viscosity into account necessitates a stochastic approach. This can be summarized by a 2D Langevin equation, even though the simulation is still carried out in 3D. Solution of the Langevin equation is the basic task in order to proceed with a Molecular Dynamics simulation, which is accompanied by a delicate Monte Carlo technique. The potential wells, used to engineer the energy space assuming the interaction of the monomers constituting the protein chain, are simply modeled by a combination of two parabolas. This combination approximates the real physical interactions, which are given by the well-known Lennard-Jones potential. Contributions to the total potential from torsion, bending and distance-dependent potentials are good to the fourth-nearest neighbor. The final image is in very good geometric agreement with the real shape of the protein chain, which can be obtained from the protein data bank. The quantitative measure of this agreement is the similarity parameter with the native structure, which is found to be 0.91 < 1 for the best sample. The folding time can be determined from the Debye relaxation process. We apply two regimes and calculate the folding time corresponding to the elastic domain mode, which yields 5.2 ps for the same sample.

  12. Solution of deterministic-stochastic epidemic models by dynamical Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Aièllo, O. E.; Haas, V. J.; daSilva, M. A. A.; Caliri, A.

    2000-07-01

    This work is concerned with the dynamical Monte Carlo (MC) method and its application to models originally formulated in a continuous-deterministic approach. Specifically, a susceptible-infected-removed-susceptible (SIRS) model is used in order to analyze aspects of the dynamical MC algorithm and to apply it in epidemic contexts. We first examine two known approaches to the dynamical interpretation of the MC method and follow with the application of one of them in the SIRS model. The working method chosen is based on the Poisson process, where a hierarchy of events, properly calculated waiting times between events, and independence of the simulated events are the basic requirements. To verify the consistency of the method, some preliminary MC results are compared against exact steady-state solutions and other general numerical results (provided by the Runge-Kutta method): good agreement is found. Finally, a space-dependent extension of the SIRS model is introduced and treated by MC. The results are interpreted under and in accordance with aspects of the herd-immunity concept.
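
    The Poisson-process requirements listed above are those of a Gillespie-type direct method; a minimal SIRS realization is sketched below (Python; the rate constants are illustrative).

    ```python
    import numpy as np

    def gillespie_sirs(S, I, R, beta, gamma, xi, t_end, rng):
        """Dynamical MC (Gillespie) run of the SIRS model: exponential waiting
        times between events, events chosen proportional to their rates."""
        N, t, out = S + I + R, 0.0, []
        while t < t_end and I > 0:
            rates = np.array([beta * S * I / N,   # infection        S -> I
                              gamma * I,          # removal          I -> R
                              xi * R])            # loss of immunity R -> S
            total = rates.sum()
            t += -np.log(rng.random()) / total    # waiting time ~ Exp(total)
            e = np.searchsorted(np.cumsum(rates), rng.random() * total)
            if e == 0:   S, I = S - 1, I + 1
            elif e == 1: I, R = I - 1, R + 1
            else:        R, S = R - 1, S + 1
            out.append((t, S, I, R))
        return out

    trace = gillespie_sirs(990, 10, 0, beta=0.3, gamma=0.1, xi=0.05,
                           t_end=200.0, rng=np.random.default_rng(7))
    ```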

  13. A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling

    NASA Astrophysics Data System (ADS)

    Aslam, Kamran

    This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single-elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability along with a realistic, fair and mathematically sound platform for ranking them.
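
    The building block of such a simulation is the point-by-point game loop under the iid assumption, as in this sketch (Python; the server's point-winning probability is set arbitrarily, and the closed-form comparison refers to the Newton-Keller theory cited above).

    ```python
    import numpy as np

    def play_game(p, rng):
        """Simulate one tennis game for a server who wins each point
        independently with probability p (the iid assumption that the
        dissertation then relaxes)."""
        a = b = 0
        while True:
            if rng.random() < p: a += 1
            else:                b += 1
            if a >= 4 and a - b >= 2: return True     # server holds
            if b >= 4 and b - a >= 2: return False    # server is broken

    rng = np.random.default_rng(8)
    p_game = np.mean([play_game(0.62, rng) for _ in range(100000)])
    # Newton & Keller give a closed form for P(win game | p) to check against
    ```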

  14. Monte Carlo Simulation of the TRIGA Mark II Benchmark Experiment with Burned Fuel

    SciTech Connect

    Jeraj, Robert; Zagar, Tomaz; Ravnik, Matjaz

    2002-03-15

    Monte Carlo calculations of a criticality experiment with burned fuel on the TRIGA Mark II research reactor are presented. The main objective was to incorporate burned fuel composition calculated with the WIMSD4 deterministic code into the MCNP4B Monte Carlo code and compare the calculated k_eff with the measurements. The criticality experiment was performed in 1998 at the ''Jozef Stefan'' Institute TRIGA Mark II reactor in Ljubljana, Slovenia, with the same fuel elements and loading pattern as in the TRIGA criticality benchmark experiment with fresh fuel performed in 1991. The only difference was that in 1998, the fuel elements had on average burnup of approximately 3%, corresponding to 1.3-MWd energy produced in the core in the period between 1991 and 1998. The fuel element burnup accumulated during 1991-1998 was calculated with the TRIGLAV in-house-developed fuel management two-dimensional multigroup diffusion code. The burned fuel isotopic composition was calculated with the WIMSD4 code and compared to the ORIGEN2 calculations. Extensive comparison of burned fuel material composition was performed for both codes for burnups up to 20% burned 235U, and the differences were evaluated in terms of reactivity. The WIMSD4 and ORIGEN2 results agreed well for all isotopes important in reactivity calculations, giving increased confidence in the WIMSD4 calculation of the burned fuel material composition. The k_eff calculated with the combined WIMSD4 and MCNP4B calculations showed good agreement with the experimental values. This shows that linking of WIMSD4 with MCNP4B for criticality calculations with burned fuel is feasible and gives reliable results.

  15. Stochastic geometrical model and Monte Carlo optimization methods for building reconstruction from InSAR data

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan

    2015-10-01

    Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas with high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most methods proposed before, we adopt a generative model and carry out the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. The reason for this strategy is that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of intensity, interferometric phase and coherence of each region are explored respectively and included as region terms. Roofs are not directly considered, as in most cases they are mixed with walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms, together with the edge term related to the contours of layover and corner line, are taken into consideration. In the optimization step, in order to achieve convergent reconstruction outputs and avoid local extrema, special transition kernels are designed. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.

  16. Developments in Stochastic Fuel Efficient Cruise Control and Constrained Control with Applications to Aircraft

    NASA Astrophysics Data System (ADS)

    McDonough, Kevin K.

    The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler implementation strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed that include the use of the Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first, and computational procedures for such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures. Examples of

  17. Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations

    SciTech Connect

    Tippayakul, C.; Ivanov, K.; Misu, S.

    2006-07-01

    This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between the Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, the online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version instead of using the burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents the result comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of k_inf, fission rate distributions and isotopic contents. (authors)

  18. Properties of Solar Thermal Fuels by Accurate Quantum Monte Carlo Calculations

    NASA Astrophysics Data System (ADS)

    Saritas, Kayahan; Ataca, Can; Grossman, Jeffrey C.

    2014-03-01

    Efficient utilization of the sun as a renewable and clean energy source is one of the major goals of this century due to increasing energy demand and environmental impact. Solar thermal fuels are materials that capture and store the sun's energy in the form of chemical bonds, which can then be released as heat on demand and charged again. Previous work on solar thermal fuels faced challenges related to the cyclability of the fuel over time, as well as the need for higher energy densities. Recently, it was shown that by templating photoswitches onto carbon nanostructures, both high energy density as well as high stability can be achieved. In this work, we explore alternative molecules to azobenzene in such a nano-templated system. We employ the highly accurate quantum Monte Carlo (QMC) method to predict the energy storage potential for each molecule. Our calculations show that in many cases the level of accuracy provided by density functional theory (DFT) is sufficient. However, in some cases, such as dihydroazulene, the drastic change in conjugation upon light absorption causes the DFT predictions to be inconsistent and incorrect. For this case, we compare our QMC results for the geometric structure, band gap and reaction enthalpy with different DFT functionals.

  19. Monte carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles

    SciTech Connect

    Paul P.H. Wilson

    2005-07-30

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods

  20. A Stochastic Method for Estimating the Effect of Isotopic Uncertainties in Spent Nuclear Fuel

    SciTech Connect

    DeHart, M.D.

    2001-08-24

    This report describes a novel approach developed at the Oak Ridge National Laboratory (ORNL) for the estimation of the uncertainty in the prediction of the neutron multiplication factor for spent nuclear fuel. This technique focuses on burnup credit, where credit is taken in criticality safety analysis for the reduced reactivity of fuel irradiated in and discharged from a reactor. Validation methods for burnup credit have attempted to separate the uncertainty associated with isotopic prediction methods from that of criticality eigenvalue calculations. Biases and uncertainties obtained in each step are combined additively. This approach, while conservative, can be excessive because of the physical assumptions employed. This report describes a statistical approach based on Monte Carlo sampling to directly estimate the total uncertainty in eigenvalue calculations resulting from uncertainties in isotopic predictions. The results can also be used to demonstrate the relative conservatism and statistical confidence associated with the method of additively combining uncertainties. This report does not make definitive conclusions on the magnitude of biases and uncertainties associated with isotopic predictions in a burnup credit analysis. These terms will vary depending on system design and the set of isotopic measurements used as a basis for estimating isotopic variances. Instead, the report describes a method that can be applied with a given design and set of isotopic data for estimating design-specific biases and uncertainties.

  1. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: case study.

    PubMed

    Bieda, Bogusław

    2014-05-15

    The purpose of the paper is to present the results of the application of a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the CrystalBall® (CB) software, which operates on a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and that it can be applied throughout the steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management. PMID:24290145
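    The mechanics of such a simulation can be reproduced outside the CB/Excel environment as sketched below. The production means, standard deviations, and emission factors are hypothetical (the actual 2005 MSP figures are not reproduced here); only the normal sampling and the 10,000-trial structure follow the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # trials, matching the study's simulation size

# Hypothetical means and standard deviations (tonnes) for main products.
products = {
    "steel":  (5.0e6, 2.5e5),
    "coke":   (1.2e6, 6.0e4),
    "sinter": (4.0e6, 2.0e5),
}

# Hypothetical emission factors (t CO2 per t product) that turn the
# sampled inventory into a single LCI indicator.
ef = {"steel": 1.8, "coke": 0.5, "sinter": 0.2}

totals = np.zeros(N)
for name, (mu, sigma) in products.items():
    totals += ef[name] * rng.normal(mu, sigma, size=N)

print(f"CO2 burden: mean {totals.mean():.3e} t, "
      f"95% CI [{np.percentile(totals, 2.5):.3e}, "
      f"{np.percentile(totals, 97.5):.3e}] t")
```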

  2. A Monte Carlo simulation method for assessing biotransformation effects on groundwater fuel hydrocarbon plume lengths

    NASA Astrophysics Data System (ADS)

    McNab, Walt W.

    2001-02-01

    Biotransformation of dissolved groundwater hydrocarbon plumes emanating from leaking underground fuel tanks should, in principle, result in plume length stabilization over relatively short distances, thus diminishing the environmental risk. However, because the behavior of hydrocarbon plumes is poorly constrained at most leaking underground fuel tank sites in terms of release history, groundwater velocity, dispersion, and biotransformation rate, demonstrating such a limitation in plume length is problematic. Biotransformation signatures in the aquifer geochemistry, most notably elevated bicarbonate, may offer a means of constraining the relationship between plume length and the mean biotransformation rate. In this study, modeled plume lengths and spatial bicarbonate differences among a population of synthetic hydrocarbon plumes, generated through Monte Carlo simulation of an analytical solute transport model, are compared to field observations from six underground storage tank (UST) sites at military bases in California. Simulation results indicate that the relationship between plume length and the distribution of bicarbonate is best explained by biotransformation rates that are consistent with ranges commonly reported in the literature. This finding suggests that bicarbonate can indeed provide an independent means for evaluating limitations in hydrocarbon plume length resulting from biotransformation.

  3. Gamma-ray spectrometry analysis of pebble bed reactor fuel using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Chen, Jianwei; Hawari, Ayman I.; Zhao, Zhongxiang; Su, Bingjing

    2003-06-01

    Monte Carlo simulations were used to study the gamma-ray spectra of pebble bed reactor fuel at various levels of burnup. A fuel depletion calculation was performed using the ORIGEN2.1 code, which yielded the gamma-ray source term that was introduced into the input of an MCNP4C simulation. The simulation assumed the use of a 100% efficient high-purity coaxial germanium (HPGe) detector and a pebble placed at a distance of 100 cm from the detector, and accounted for Gaussian broadening of the gamma-ray peaks. Previously, it was shown that 137Cs, 60Co (introduced as a dopant), and 134Cs are the relevant burnup indicators. The results show that the 662 keV line of 137Cs lies in close proximity to the intense 658 keV line of 97Nb, which results in spectral interference between the lines. However, the 1333 keV line of 60Co and selected 134Cs lines (e.g., at 605 keV) are free from spectral interference, which enhances the possibility of their utilization as relative burnup indicators.
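    The spectral-interference argument is easy to reproduce numerically: broaden each line with a Gaussian of energy-dependent FWHM and inspect the 662 keV window. The resolution model and relative line intensities below are assumptions, not the paper's values.

```python
import numpy as np

# Illustrative lines (keV) and relative intensities: the 137Cs line at
# 662 keV overlaps the 97Nb line at 658 keV, while 60Co at 1333 keV and
# 134Cs at 605 keV sit in cleaner regions of the spectrum.
lines = {"134Cs 605": (605.0, 0.8), "97Nb 658": (658.0, 0.6),
         "137Cs 662": (661.7, 1.0), "60Co 1333": (1332.5, 0.4)}

def fwhm(e_kev):
    # Simple HPGe resolution model (assumed, not from the paper):
    # ~1.9 keV FWHM at 1332 keV, scaling roughly with sqrt(E).
    return 1.9 * np.sqrt(e_kev / 1332.5)

energy = np.arange(580.0, 1400.0, 0.1)
spectrum = np.zeros_like(energy)
for e0, intensity in lines.values():
    sigma = fwhm(e0) / 2.355  # convert FWHM to Gaussian sigma
    spectrum += intensity * np.exp(-0.5 * ((energy - e0) / sigma) ** 2)

# Interference check: counts falling inside a +/-2 sigma window
# around the 662 keV peak (includes leakage from the 658 keV line).
s662 = fwhm(661.7) / 2.355
win = (energy > 661.7 - 2 * s662) & (energy < 661.7 + 2 * s662)
print(f"counts in 662 keV window: {spectrum[win].sum():.1f}")
```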

  4. An extended stochastic reconstruction method for catalyst layers in proton exchange membrane fuel cells

    NASA Astrophysics Data System (ADS)

    Kang, Jinfen; Moriyama, Koji; Kim, Seung Hyun

    2016-09-01

    This paper presents an extended stochastic reconstruction method for catalyst layers (CLs) of Proton Exchange Membrane Fuel Cells (PEMFCs). The focus is placed on the reconstruction of customized, low-platinum (Pt) loading CLs, where the microstructure can substantially influence the performance. The sphere-based simulated annealing (SSA) method is extended to generate CL microstructures with specified and controllable structural properties for agglomerates, ionomer, and Pt catalysts. In the present method, the agglomerate structures are controlled by employing a trial two-point correlation function in the simulated annealing process. An offset method is proposed to generate more realistic ionomer structures. Variations of the ionomer structure under different humidity conditions are considered to mimic swelling effects. A method to control Pt loading, distribution, and utilization is presented. The extension of the method to consider heterogeneity in structural properties, as can be found in manufactured CL samples, is also presented. Various reconstructed CLs are generated to demonstrate the capability of the proposed method. Proton transport properties of the reconstructed CLs are calculated and validated against experimental data.

  5. SENSITIVITY STUDIES FOR AN IN-SITU PARTIAL DEFECT DETECTOR (PDET) IN SPENT FUEL USING MONTE CARLO TECHNIQUES

    SciTech Connect

    Sitaraman, S; Ham, Y S

    2008-04-28

    This study presents results from Monte Carlo radiation transport calculations aimed at characterizing a novel methodology being developed to detect partial defects in Pressurized Water Reactor (PWR) spent fuel assemblies (SFAs). The methodology uses a combination of measured neutron and gamma fields inside a spent fuel assembly in an in-situ condition where no movement of the fuel assembly is required. Previous studies performed on single isolated assemblies identified a unique base signature that would change when some of the fuel in the assembly is replaced with dummy fuel. These studies indicate that this signature is still valid in the in-situ condition, enhancing the prospect of building a practical tool, the Partial Defect Detector (PDET), which can be used in the field for partial defect detection.

  6. Effects of fuel cetane number on the structure of diesel spray combustion: An accelerated Eulerian stochastic fields method

    NASA Astrophysics Data System (ADS)

    Jangi, Mehdi; Lucchini, Tommaso; Gong, Cheng; Bai, Xue-Song

    2015-09-01

    An Eulerian stochastic fields (ESF) method accelerated with the chemistry coordinate mapping (CCM) approach for modelling spray combustion is formulated, and applied to model diesel combustion in a constant volume vessel. In ESF-CCM, the thermodynamic states of the discretised stochastic fields are mapped into a low-dimensional phase space. Integration of the stiff chemical ODEs is performed in the phase space and the results are mapped back to the physical domain. After validating the ESF-CCM, the method is used to investigate the effects of fuel cetane number on the structure of diesel spray combustion. It is shown that, depending on the fuel cetane number, the liftoff length varies, which can lead to a change in combustion mode from classical diesel spray combustion to fuel-lean premixed combustion. Spray combustion with a shorter liftoff length exhibits the characteristics of the classical conceptual diesel combustion model proposed by Dec in 1997 (http://dx.doi.org/10.4271/970873), whereas in a case with a lower cetane number the liftoff length is much larger and the spray combustion probably occurs in a fuel-lean premixed mode of combustion. Nevertheless, the transport budget at the liftoff location shows that stabilisation at all cetane numbers is governed primarily by the auto-ignition process.

  7. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Program Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, the sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 × 500 km² area.
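    The Monte Carlo experiment can be miniaturized as follows: a temporally correlated synthetic rain-rate series stands in for the space-time stochastic model, and intermittent snapshots mimic the satellite revisit. The decorrelation time, revisit interval, and rain transformation are assumptions, not GATE-tuned values.

```python
import numpy as np

rng = np.random.default_rng(1)

def sampling_error_std(revisit_h=12.0, dt_h=0.5, n_months=500):
    """Compare the true monthly-mean rain rate with the mean of
    intermittent satellite snapshots, over many synthetic months."""
    n_steps = int(30 * 24 / dt_h)
    phi = np.exp(-dt_h / 6.0)    # AR(1) coefficient, ~6 h decorrelation (assumed)
    step = int(revisit_h / dt_h)
    errors = []
    for _ in range(n_months):
        x = np.empty(n_steps)
        x[0] = rng.normal()
        for i in range(1, n_steps):   # correlated latent process
            x[i] = phi * x[i - 1] + np.sqrt(1.0 - phi**2) * rng.normal()
        rain = np.maximum(x - 0.5, 0.0)   # skewed, intermittent rain rate
        true_mean = rain.mean()
        if true_mean > 0.0:
            errors.append((rain[::step].mean() - true_mean) / true_mean)
    return np.std(errors)

print(f"relative sampling error ~ {100.0 * sampling_error_std():.1f}%")
```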

  8. Sampling errors for satellite-derived tropical rainfall: Monte Carlo study using a space-time stochastic model

    SciTech Connect

    Bell, T.L.; Abdullah, A.; Martin, R.L.; North, G.R.

    1990-02-28

    Estimates of monthly average rainfall based on satellite observations from a low Earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The authors estimate the size of this error for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). They first examine in detail the statistical description of rainfall on scales from 1 to 10³ km, based on rainfall data from the Global Atmospheric Research Program Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10% of the mean for rainfall averaged over a 500 × 500 km² area.

  9. Predicting fissile content of spent nuclear fuel assemblies with the passive neutron Albedo reactivity technique and Monte Carlo code emulation

    SciTech Connect

    Conlin, Jeremy Lloyd; Tobin, Stephen J

    2010-10-13

    There is a great need in the safeguards community to be able to nondestructively quantify the plutonium mass of a spent nuclear fuel assembly. As part of the Next Generation Safeguards Initiative, we are investigating several techniques, or detector systems, which, when integrated, will be capable of quantifying the plutonium mass of a spent fuel assembly without dismantling the assembly. This paper reports on the simulation of one of these techniques, the Passive Neutron Albedo Reactivity with Fission Chambers (PNAR-FC) system. The response of this system over a wide range of spent fuel assemblies with different burnup, initial enrichment, and cooling time characteristics is shown. A Monte Carlo method that uses these modeled results to estimate the fissile content of a spent fuel assembly has been developed. A few numerical demonstrations of this method are shown. Finally, additional developments still needed and being worked on are discussed.

  10. Stochastic Convection Parameterizations

    NASA Technical Reports Server (NTRS)

    Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios

    2012-01-01

    computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts

  11. A Comparison of Three Stochastic Approaches for Parameter Estimation and Prediction of Steady-State Groundwater Flow: Nonlocal Moment Equations and Monte Carlo Method Coupled with Ensemble Kalman Filter and Geostatistical Stochastic Inversion.

    NASA Astrophysics Data System (ADS)

    Morales-Casique, E.; Briseño-Ruiz, J. V.; Hernández, A. F.; Herrera, G. S.; Escolero-Fuentes, O.

    2014-12-01

    We present a comparison of three stochastic approaches for estimating log hydraulic conductivity (Y) and predicting steady-state groundwater flow. Two of the approaches are based on the data assimilation technique known as the ensemble Kalman filter (EnKF) and differ in the way prior statistical moment estimates (PSME) (required to build the Kalman gain matrix) are obtained. In the first approach, the Monte Carlo method is employed to compute PSME of the variables and parameters; we denote this approach by EnKFMC. In the second approach, PSME are computed through the direct solution of approximate nonlocal (integrodifferential) equations that govern the spatial conditional ensemble means (statistical expectations) and covariances of hydraulic head (h) and fluxes; we denote this approach by EnKFME. The third approach consists of geostatistical stochastic inversion of the same nonlocal moment equations; we denote this approach by IME. In addition to testing the EnKFMC and EnKFME methods in the traditional manner that estimates Y over the entire grid, we propose novel corresponding algorithms that estimate Y at a few selected locations and then interpolate over all grid elements via kriging, as done in the IME method. We tested these methods by estimating Y and h in steady-state groundwater flow in a synthetic two-dimensional domain with a well pumping at a constant rate, located at the center of the domain. In addition, to evaluate the performance of the estimation methods, we generated four different unconditional realizations that served as "true" fields. The results of our numerical experiments indicate that the three methods were effective in estimating h, reaching at least 80% predictive coverage, although both EnKF approaches were superior to the IME method. With respect to estimating Y, the three methods reached similar accuracy in terms of the mean absolute error. Coupling the EnKF methods with kriging to estimate Y reduces to one fourth the CPU time required for data
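    For orientation, a single EnKF analysis step of the kind both EnKFMC and EnKFME perform is sketched below; the two approaches differ only in how the prior moments are obtained, which is outside this snippet. The toy forward model, ensemble size, and dimensions are invented.

```python
import numpy as np

def enkf_update(Y_ens, h_ens, h_obs, obs_err_var):
    """One EnKF analysis step with perturbed observations.
    Y_ens: (n_ens, n_param) log-conductivity ensemble
    h_ens: (n_ens, n_obs) simulated heads at observation points
    h_obs: (n_obs,) observed heads"""
    Yp = Y_ens - Y_ens.mean(axis=0)
    Hp = h_ens - h_ens.mean(axis=0)
    n = Y_ens.shape[0]
    C_yh = Yp.T @ Hp / (n - 1)                              # cross-covariance
    C_hh = Hp.T @ Hp / (n - 1) + obs_err_var * np.eye(h_obs.size)
    K = C_yh @ np.linalg.solve(C_hh, np.eye(h_obs.size))    # Kalman gain
    rng = np.random.default_rng(3)
    perturbed = h_obs + rng.normal(0.0, np.sqrt(obs_err_var), h_ens.shape)
    return Y_ens + (perturbed - h_ens) @ K.T

# Tiny synthetic demo: 50-member ensemble, 4 parameters, 2 observations.
rng = np.random.default_rng(2)
Y = rng.normal(size=(50, 4))
H = Y[:, :2] + 0.1 * rng.normal(size=(50, 2))  # stand-in forward model
print(enkf_update(Y, H, np.array([0.5, -0.2]), 0.01).shape)
```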

  12. Development of a practical fuel management system for PSBR based on advanced three-dimensional Monte Carlo coupled depletion methodology

    NASA Astrophysics Data System (ADS)

    Tippayakul, Chanatip

    The main objective of this research is to develop a practical fuel management system for the Pennsylvania State University Breazeale research reactor (PSBR) based on several advanced Monte Carlo coupled depletion methodologies. Primarily, this research involved two major activities: model and method developments, and analyses and validations of the developed models and methods. The starting point of this research was the utilization of the earlier developed fuel management tool, TRIGSIM, to create the Monte Carlo model of core loading 51 (end of the core loading). When comparing the normalized power results of the Monte Carlo model to those of the current fuel management system (using HELIOS/ADMARC-H), it was found that they agreed reasonably well (within 2%--3% differences on average). Moreover, the reactivity of some fuel elements was calculated by the Monte Carlo model and compared with measured data. The fuel element reactivity results of the Monte Carlo model were also found to be in good agreement with the measured data. However, the subsequent task of analyzing the conversion from core loading 51 to core loading 52 using TRIGSIM showed quite significant differences in individual control rod worths between the Monte Carlo model and the current methodology model. The differences were mainly caused by inconsistent absorber atomic number densities between the two models. Hence, the model of the first operating core (core loading 2) was revised in light of new information about the absorber atomic densities to validate the Monte Carlo model against the measured data. With the revised Monte Carlo model, the results agreed better with the measured data. Although TRIGSIM showed good modeling capabilities, the accuracy of TRIGSIM could be further improved by adopting more advanced algorithms. Therefore, an upgrade of TRIGSIM was planned. The first task of upgrading TRIGSIM involved the improvement of the temperature modeling capability. The new TRIGSIM was

  13. Stochastic simulation of fission product activity in primary coolant due to fuel rod failures in typical PWRs under power transients

    NASA Astrophysics Data System (ADS)

    Javed Iqbal, M.; Mirza, Nasir M.; Mirza, Sikander M.

    2008-01-01

    During normal operation of PWRs, routine fuel rod failures result in the release of radioactive fission products (RFPs) into the primary coolant. In this work, a stochastic model has been developed for the simulation of failure time sequences and release rates for the estimation of fission product activity in the primary coolant of a typical PWR under power perturbations. In the first part, a stochastic approach is developed, based on the generation of fuel failure event sequences by sampling time-dependent intensity functions. Then a three-stage, model-based deterministic methodology of the FPCART code has been extended to include failure sequences and random release rates in a computer code FPCART-ST, which uses the state-of-the-art LEOPARD and ODMUG codes as its subroutines. The value of the 131I activity in the primary coolant predicted by the FPCART-ST code has been found to be in good agreement with the corresponding values measured at the ANGRA-1 nuclear power plant. The predictions of the FPCART-ST code with the constant release option have also been found to be in good agreement with corresponding experimental values for time-dependent 135I, 135Xe and 89Kr concentrations in the primary coolant measured during the EDITHMOX-1 experiments.
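    A standard way to generate failure event sequences from a time-dependent intensity function is Lewis-Shedler thinning; the sketch below uses an invented transient intensity, not the FPCART-ST model.

```python
import random

def failure_times(intensity, lam_max, t_end):
    """Sample fuel-rod failure event times from a time-dependent
    intensity function via Poisson thinning (Lewis-Shedler):
    propose candidates at the bounding rate lam_max, then accept
    each with probability intensity(t) / lam_max."""
    t, events = 0.0, []
    while True:
        t += random.expovariate(lam_max)   # candidate inter-event time
        if t > t_end:
            return events
        if random.random() < intensity(t) / lam_max:
            events.append(t)

# Example: failure intensity (per hour) that rises during a power
# transient between t = 100 h and t = 200 h (all numbers assumed).
rate = lambda t: 0.002 + (0.01 if 100.0 < t < 200.0 else 0.0)
print(failure_times(rate, lam_max=0.012, t_end=500.0))
```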

  14. Monte Carlo characterization of PWR spent fuel assemblies to determine the detectability of pin diversion

    NASA Astrophysics Data System (ADS)

    Burdo, James S.

    This research is based on the concept that the diversion of nuclear fuel pins from Light Water Reactor (LWR) spent fuel assemblies can be detected by a careful comparison of spontaneous fission neutron and gamma levels in the guide tube locations of the fuel assemblies. The goal is to be able to determine whether some of the assembly fuel pins are either missing or have been replaced with dummy or fresh fuel pins. It is known that for typical commercial power spent fuel assemblies, the dominant spontaneous neutron emissions come from Cm-242 and Cm-244. Because of the shorter half-life of Cm-242 (0.45 yr) relative to that of Cm-244 (18.1 yr), Cm-244 is practically the only neutron source contributing to the neutron source term after the spent fuel assemblies are more than two years old. Initially, this research focused upon developing MCNP5 models of PWR fuel assemblies, modeling their depletion using the MONTEBURNS code, and carrying out a preliminary depletion of a ¼-model 17×17 assembly from the TAKAHAMA-3 PWR. Later, the depletion and the more accurate isotopic distribution in the pins at discharge were modeled using the TRITON depletion module of the SCALE computer code. Benchmarking comparisons were performed between the MONTEBURNS and TRITON results. Subsequently, the neutron flux in each of the guide tubes of the TAKAHAMA-3 PWR assembly at two years after discharge, as calculated by the MCNP5 computer code, was determined for various scenarios. Cases were considered for all spent fuel pins present and for replacement of a single pin at a position near the center of the assembly (10,9) and at the corner (17,1). Some scenarios were duplicated with a gamma flux calculation for the high energies associated with Cm-244. For each case, the difference between the flux (neutron or gamma) for all spent fuel pins and with a pin removed or replaced is calculated for each guide tube. Different detection criteria were established. The first was whether the relative error of the

  15. Decadal climatic variability and regional weather simulation: stochastic nature of forest fuel moisture and climatic forcing

    NASA Astrophysics Data System (ADS)

    Tsinko, Y.; Johnson, E. A.; Martin, Y. E.

    2014-12-01

    The natural range of variability of forest fire frequency is of great interest due to the currently changing climate and the seeming increase in the number of fires. The variability of the annual area burned in Canada was not stable in the 20th century. Recently, these changes have been linked to large-scale climate cycles, such as Pacific Decadal Oscillation (PDO) phases and the El Nino Southern Oscillation (ENSO). The positive phase of the PDO was associated with an increased probability of hot dry spells leading to drier fuels and increased area burned. However, so far only one historical timeline has been used to assess correlations between the natural climate oscillations and forest fire frequency. To address similar problems, weather generators are extensively used in hydrological and agricultural modeling to extend short instrumental records and to synthesize long sequences of daily weather parameters that are different from, but statistically similar to, historical weather. In the current study, synthetic weather models were used to assess the effects of alternative weather timelines on fuel moisture in Canada by using Canadian Forest Fire Weather Index moisture codes and potential fire frequency. The variability of the fuel moisture codes was found to increase with the increased length of the simulated series, thus indicating that the natural range of variability of forest fire frequency may be larger than that calculated from available short records. This may be viewed as a manifestation of the Hurst effect. Since PDO phases are thought to be caused by diverse mechanisms including overturning oceanic circulation, some of the lower-frequency signals may be attributed to the long-term memory of the oceanic system. Thus, care must be taken when assessing the natural variability of climate-dependent processes without accounting for potential long-term mechanisms.

  16. A stochastic model and Monte Carlo algorithm for fluctuation-induced H2 formation on the surface of interstellar dust grains

    NASA Astrophysics Data System (ADS)

    Sabelfeld, K. K.

    2015-09-01

    A stochastic algorithm for the simulation of fluctuation-induced kinetics of H2 formation on grain surfaces is suggested as a generalization of the technique developed in our recent studies [1], where this method was used to describe the annihilation of spatially separate electrons and holes in a disordered semiconductor. The stochastic model is based on the spatially inhomogeneous, nonlinear integro-differential Smoluchowski equations with a random source term. In this paper we derive the general system of Smoluchowski-type equations for the formation of H2 from two hydrogen atoms on the surface of interstellar dust grains with physisorption and chemisorption sites. We focus in this study on the spatial distribution, and numerically investigate the segregation in the case of a source with continuous generation in time and random distribution in space. The stochastic particle method presented is based on a probabilistic interpretation of the underlying process as a stochastic Markov process of an interacting particle system at discrete but randomly progressing time instants. The segregation is analyzed through correlation analysis of the vector random field of concentrations, which appears to be isotropic in space and stationary in time.

  17. Preliminary TRIGA fuel burn-up evaluation by means of Monte Carlo code and computation based on total energy released during reactor operation

    SciTech Connect

    Borio Di Tigliole, A.; Bruni, J.; Panza, F.; Alloni, D.; Cagnazzo, M.; Magrotti, G.; Manera, S.; Prata, M.; Salvini, A.; Chiesa, D.; Clemenza, M.; Pattavina, L.; Previtali, E.; Sisti, M.; Cammi, A.

    2012-07-01

    The aim of this work was to perform a rough preliminary evaluation of the fuel burn-up of the TRIGA Mark II research reactor of the Applied Nuclear Energy Laboratory (LENA) of the University of Pavia. In order to achieve this goal, a computation of the neutron flux density in each fuel element was performed by means of the Monte Carlo code MCNP (Version 4C). The results of the simulations were used to calculate the effective cross sections (fission and capture) inside the fuel and, finally, to evaluate the burn-up and the uranium consumption in each fuel element. The evaluation showed fair agreement with the fuel burn-up computed from the total energy released during reactor operation. (authors)
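    The energy-based cross-check mentioned last reduces to a one-function estimate: convert total thermal energy to fissioned 235U mass at roughly 200 MeV per fission. The reactor power and operating time below are generic examples, not LENA data, and neutron capture in 235U (which adds to total consumption without producing energy) is neglected.

```python
# Each MW*day of thermal energy corresponds to ~1.05 g of fissioned 235U.
MEV_PER_FISSION = 200.0
J_PER_MEV = 1.602e-13
AVOGADRO = 6.022e23

def grams_u235_fissioned(energy_mwd):
    joules = energy_mwd * 1.0e6 * 86400.0          # MW*d -> J
    fissions = joules / (MEV_PER_FISSION * J_PER_MEV)
    return fissions / AVOGADRO * 235.0             # atoms -> grams

# Example: a 250 kW reactor operated for 400 equivalent full-power days.
energy = 0.25 * 400.0                              # MW*d
print(f"{grams_u235_fissioned(energy):.1f} g of 235U fissioned")
```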

  18. Neutron analysis of spent fuel storage installation using parallel computing and advance discrete ordinates and Monte Carlo techniques.

    PubMed

    Shedlock, Daniel; Haghighat, Alireza

    2005-01-01

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, > 35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system, and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore, requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinate, PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous energy cross sections, and the fact that the CASK library is based on the old ENDF

  19. Stochastic Optimization of Complex Systems

    SciTech Connect

    Birge, John R.

    2014-03-20

    This project focused on methodologies for the solution of stochastic optimization problems based on relaxation and penalty methods, Monte Carlo simulation, parallel processing, and inverse optimization. The main results of the project were the development of a convergent method for the solution of models that include expectation constraints as in equilibrium models, improvement of Monte Carlo convergence through the use of a new method of sample batch optimization, the development of new parallel processing methods for stochastic unit commitment models, and the development of improved methods in combination with parallel processing for incorporating automatic differentiation methods into optimization.

  20. Estimation of water distribution and degradation mechanisms in polymer electrolyte membrane fuel cell gas diffusion layers using a 3D Monte Carlo model

    NASA Astrophysics Data System (ADS)

    Seidenberger, K.; Wilhelm, F.; Schmitt, T.; Lehnert, W.; Scholta, J.

    Understanding of water management in PEM fuel cells, of the degradation mechanisms of the gas diffusion layer (GDL), and of their mutual impact is still incomplete. Different modelling approaches contribute to gaining deeper insight into the processes occurring during fuel cell operation. Considering the GDL, such models can help to obtain information about the distribution of liquid water within the material. In particular, flooded regions can be identified, and the water distribution can be linked to the system geometry. Employed for material development, this information can help to increase the lifetime of the GDL as a fuel cell component and of the fuel cell as the entire system. The Monte Carlo (MC) model presented here helps to simulate and analyse the water household in PEM fuel cell GDLs. This model comprises a three-dimensional, voxel-based representation of the GDL substrate, a section of the flowfield channel and the corresponding rib. Information on the water distribution within the substrate part of the GDL can be estimated.

  1. Stochastic microstructural modeling of fuel cell gas diffusion layers and numerical determination of transport properties in different liquid water saturation levels

    NASA Astrophysics Data System (ADS)

    Tayarani-Yoosefabadi, Z.; Harvey, D.; Bellerive, J.; Kjeang, E.

    2016-01-01

    Gas diffusion layer (GDL) materials in polymer electrolyte membrane fuel cells (PEMFCs) are commonly made hydrophobic to enhance water management by avoiding liquid water blockage of the pores and facilitating reactant gas transport to the adjacent catalyst layer. In this work, a stochastic microstructural modeling approach is developed to simulate the transport properties of a commercial carbon paper based GDL under a range of PTFE loadings and liquid water saturation levels. The proposed novel stochastic method mimics the GDL manufacturing process steps and resolves all relevant phases including fiber, binder, PTFE, liquid water, and gas. After thorough validation of the general microstructure against literature and in-house data, a comprehensive set of anisotropic transport properties is simulated for the reconstructed GDL at different PTFE loadings and liquid water saturation levels and validated through comparison with in-house ex situ experimental data and empirical formulations. In general, the results show good agreement between simulated and measured data. Decreasing trends in porosity, gas diffusivity, and permeability are obtained with increasing PTFE loading and liquid water content, while the thermal conductivity is found to increase with liquid water saturation. Using the validated model, new correlations for saturation-dependent GDL properties are proposed.

  2. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang

    2015-01-01

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining the Markov jump model, the weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range while reducing the number of time loopings as far as possible. Here three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles; meanwhile, the mean time-step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the multiple cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
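    The acceptance-rejection step built on a majorant kernel can be sketched as follows, with an illustrative sum kernel and a crude single-loop majorant; the paper's differentially-weighted bookkeeping is omitted.

```python
import random

def coag_kernel(v1, v2):
    # Illustrative sum kernel; the paper's kernels may differ.
    return v1 + v2

def select_pair(volumes, k_max):
    """Acceptance-rejection with a majorant: propose random pairs and
    accept with probability K(vi, vj) / k_max, so only a single loop
    over all particles (to build k_max) is ever needed."""
    n = len(volumes)
    while True:
        i, j = random.sample(range(n), 2)
        if random.random() < coag_kernel(volumes[i], volumes[j]) / k_max:
            return i, j

vols = [random.uniform(1.0, 2.0) for _ in range(1000)]
# Majorant for the sum kernel from a single pass: K(vi, vj) <= 2 * max(v).
k_max = 2.0 * max(vols)
i, j = select_pair(vols, k_max)
print(f"coagulating particles {i} and {j}")
```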

  3. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    SciTech Connect

    Xu, Zuwei; Zhao, Haibo Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining the Markov jump model, the weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range while reducing the number of time loopings as far as possible. Here three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles; meanwhile, the mean time-step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the multiple cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are

  4. A multilevel stochastic collocation method for SPDEs

    SciTech Connect

    Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton

    2015-03-10

    We present a multilevel stochastic collocation method that, as do multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity are used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages compared to standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.
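    For orientation, the multilevel idea is sketched below in its Monte Carlo form (the baseline the collocation method is compared against): a telescoping sum over levels, with many cheap coarse samples and few expensive fine ones. The toy quantity of interest and sample allocation are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def level_samples(level, n):
    """Y_l = P_l - P_{l-1}: coupled fine/coarse evaluations of a toy
    quantity of interest whose bias and level-difference variance both
    shrink as O(h), standing in for a PDE solve with random inputs."""
    h = 2.0 ** -(level + 2)
    P = lambda h_, z: np.exp(z) * (1.0 + h_ * np.sin(z))
    z = rng.normal(size=n)            # same random input on both grids
    return P(h, z) - (P(2.0 * h, z) if level > 0 else 0.0)

# Telescoping multilevel estimator: many cheap coarse samples, few fine
# ones (the sample allocation below is assumed, not optimized).
n_per_level = [4000, 1000, 250]
estimate = sum(level_samples(l, n).mean() for l, n in enumerate(n_per_level))
print(f"multilevel estimate: {estimate:.4f}  (exact E[exp(Z)] = {np.e**0.5:.4f})")
```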

  5. Stochastic Feedforward Control Technique

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1990-01-01

    Class of commanded trajectories modeled as stochastic process. The Advanced Transport Operating Systems (ATOPS) research and development program conducted by NASA Langley Research Center aimed at developing capabilities for increases in airport capacity; safe and accurate flight in adverse weather conditions, including wind shear; avoidance of wake vortices; and reduced fuel consumption. Advances in techniques for the design of modern controls and the increased capabilities of digital flight computers are coupled with accurate guidance information from the Microwave Landing System (MLS). The stochastic feedforward control technique was developed within the context of the ATOPS program.

  6. Monte Carlo simulations of differential die-away instrument for determination of fissile content in spent fuel assemblies

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Hoon; Menlove, Howard O.; Swinhoe, Martyn T.; Tobin, Stephen J.

    2011-10-01

    The differential die-away (DDA) technique has been simulated by using the MCNPX code to quantify its capability of measuring the fissile content in spent fuel assemblies. For 64 different spent fuel cases of various initial enrichment, burnup and cooling time, the count rates and signal-to-background ratios of the DDA system were obtained, where the neutron background mainly comes from the 244Cm in the spent fuel. To quantify the total fissile mass of spent fuel, the concept of an effective 239Pu mass was introduced by weighting the relative contributions of 235U and 241Pu to the signal against that of 239Pu, and calibration curves of DDA count rate vs. effective 239Pu mass were obtained by using the MCNPX code. With a deuterium-tritium (DT) neutron generator of 10⁹ n/s strength, signal-to-background ratios of sufficient magnitude are acquired for a DDA system with the spent fuel assembly in water.

  7. Stochastic games

    PubMed Central

    Solan, Eilon; Vieille, Nicolas

    2015-01-01

    In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which was the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution. PMID:26556883

  8. Stochastic models: theory and simulation.

    SciTech Connect

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
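    One of the simple sample-generation algorithms the report has in view can be sketched as follows: spectral-representation sampling of a zero-mean stationary Gaussian process. The exponential covariance, truncation frequency, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

def gp_samples(t, corr_len=1.0, n_samples=5, n_modes=256):
    """Spectral-representation (Shinozuka-type) sampling of a zero-mean,
    unit-variance stationary Gaussian process with exponential
    covariance C(tau) = exp(-|tau| / corr_len)."""
    w_max = 20.0 / corr_len                 # spectrum truncation (assumed)
    w = np.linspace(1e-6, w_max, n_modes)
    dw = w[1] - w[0]
    # One-sided spectral density of the exponential covariance.
    G = (2.0 * corr_len / np.pi) / (1.0 + (corr_len * w) ** 2)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_modes))
    amp = np.sqrt(2.0 * G * dw)
    args = np.outer(t, w)[None, :, :] + phases[:, None, :]
    return np.sum(amp * np.cos(args), axis=-1)

t = np.linspace(0.0, 10.0, 500)
x = gp_samples(t)
print(x.shape, x.var())  # sample variance should be close to 1
```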

  9. Monte Carlo Modeling of Fast Sub-critical Assembly with MOX Fuel for Research of Accelerator-Driven Systems

    NASA Astrophysics Data System (ADS)

    Polanski, A.; Barashenkov, V.; Puzynin, I.; Rakhno, I.; Sissakian, A.

    A sub-critical assembly driven by the existing 660 MeV JINR proton accelerator is considered. The assembly consists of a central cylindrical lead target surrounded by mixed-oxide (MOX) fuel (PuO2 + UO2) and a reflector made of beryllium. The dependence of the energy gain on the proton energy, the neutron multiplication coefficient, and the neutron energy spectra have been calculated. It is shown that for a subcritical assembly with mixed-oxide (MOX) BN-600 fuel (28% PuO2 + 72% UO2) with an effective fuel material density of 9 g/cm³, the multiplication coefficient keff is equal to 0.945, the energy gain is equal to 27, and the neutron flux density is 10¹² cm⁻² s⁻¹ for protons with an energy of 660 MeV and an accelerator beam current of 1 μA.

  10. Stochastic kinetic mean field model

    NASA Astrophysics Data System (ADS)

    Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.

    2016-07-01

    This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to the lattice kinetic Monte Carlo (KMC) method. SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on

  11. QB1 - Stochastic Gene Regulation

    SciTech Connect

    Munsky, Brian

    2012-07-23

    Summaries of this presentation are: (1) Stochastic fluctuations or 'noise' are present in the cell - Random motion and competition between reactants, Low copy number, quantization of reactants, Upstream processes; (2) Fluctuations may be very important - Cell-to-cell variability, Cell fate decisions (switches), Signal amplification or damping, stochastic resonances; and (3) Some tools are available to model these - Kinetic Monte Carlo simulations (SSA and variants), Moment approximation methods, Finite State Projection. We will see how modeling these reactions can tell us more about the underlying processes of gene regulation.
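    The kinetic Monte Carlo (SSA) tool named in point (3) is easy to show concretely. Below is a minimal Gillespie simulation of a two-state gene with transcription and first-order mRNA degradation; the reaction scheme and rate constants are illustrative, not from the presentation.

```python
import random

def gillespie_gene(t_end=1000.0, k_on=0.05, k_off=0.2, k_tx=2.0, k_deg=0.1):
    """Gillespie SSA for a two-state gene: the promoter switches on/off,
    mRNA is transcribed while the gene is on and degrades first-order.
    Rate constants (per unit time) are illustrative."""
    t, gene_on, mrna = 0.0, 0, 0
    while t < t_end:
        rates = [k_on * (1 - gene_on), k_off * gene_on,
                 k_tx * gene_on, k_deg * mrna]
        total = sum(rates)
        t += random.expovariate(total)      # time to the next reaction
        u, cum = random.random() * total, 0.0
        for idx, r in enumerate(rates):     # choose which reaction fires
            cum += r
            if u <= cum:
                break
        if idx == 0:
            gene_on = 1
        elif idx == 1:
            gene_on = 0
        elif idx == 2:
            mrna += 1
        else:
            mrna -= 1
    return mrna

print("final mRNA copy number:", gillespie_gene())
```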

  12. Experiments and Theoretical Data for Studying the Impact of Fission Yield Uncertainties on the Nuclear Fuel Cycle with TALYS/GEF and the Total Monte Carlo Method

    SciTech Connect

    Pomp, S.; Al-Adili, A.; Alhassan, E.; Gustavsson, C.; Helgesson, P.; Hellesen, C.; Koning, A.J.; Lantz, M.; Österlund, M.; Rochman, D.; Simutkin, V.; Sjöstrand, H.; Solders, A.

    2015-01-15

    We describe the research program of the nuclear reactions research group at Uppsala University concerning experimental and theoretical efforts to quantify and reduce nuclear data uncertainties relevant for the nuclear fuel cycle. We briefly describe the Total Monte Carlo (TMC) methodology and how it can be used to study fuel cycle and accident scenarios, and summarize our relevant experimental activities. Input from the latter is to be used to guide the nuclear models and constrain parameter space for TMC. The TMC method relies on the availability of good nuclear models. For this we use the TALYS code which is currently being extended to include the GEF model for the fission channel. We present results from TALYS-1.6 using different versions of GEF with both default and randomized input parameters and compare calculations with experimental data for 234U(n,f) in the fast energy range. These preliminary studies reveal some systematic differences between experimental data and calculations but give overall good and promising results.

  13. Experiments and Theoretical Data for Studying the Impact of Fission Yield Uncertainties on the Nuclear Fuel Cycle with TALYS/GEF and the Total Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Pomp, S.; Al-Adili, A.; Alhassan, E.; Gustavsson, C.; Helgesson, P.; Hellesen, C.; Koning, A. J.; Lantz, M.; Österlund, M.; Rochman, D.; Simutkin, V.; Sjöstrand, H.; Solders, A.

    2015-01-01

    We describe the research program of the nuclear reactions research group at Uppsala University concerning experimental and theoretical efforts to quantify and reduce nuclear data uncertainties relevant for the nuclear fuel cycle. We briefly describe the Total Monte Carlo (TMC) methodology and how it can be used to study fuel cycle and accident scenarios, and summarize our relevant experimental activities. Input from the latter is to be used to guide the nuclear models and constrain parameter space for TMC. The TMC method relies on the availability of good nuclear models. For this we use the TALYS code which is currently being extended to include the GEF model for the fission channel. We present results from TALYS-1.6 using different versions of GEF with both default and randomized input parameters and compare calculations with experimental data for 234U(n,f) in the fast energy range. These preliminary studies reveal some systematic differences between experimental data and calculations but give overall good and promising results.

  14. The Effect of Stochastic Perturbation of Fuel Distribution on the Criticality of a One Speed Reactor and the Development of Multi-Material Multinomial Line Statistics

    NASA Technical Reports Server (NTRS)

    Jahshan, S. N.; Singleterry, R. C.

    2001-01-01

    The effect of random fuel redistribution on the eigenvalue of a one-speed reactor is investigated. An ensemble of such reactors that are identical to a homogeneous reference critical reactor except for the fissile isotope density distribution is constructed such that it meets a set of well-posed redistribution requirements. The average eigenvalue is evaluated when the total fissile loading per ensemble element, or realization, is conserved. The perturbation is proven to increase the reactor criticality on average when it is uniformly distributed. The various causes of the change in reactivity and their relative effects are identified and ranked. From this, a path towards identifying the causes and relative effects of reactivity fluctuations for the energy-dependent problem is pointed to. The perturbation method of using multinomial distributions for representing the perturbed reactor is developed. This method has some advantages that can be of use in other stochastic problems. Finally, some of the features of this perturbation problem are related to other techniques that have been used for addressing similar problems.

  15. A refined model of the thyrotropin-releasing hormone (TRH) receptor binding pocket. Novel mixed mode Monte Carlo/stochastic dynamics simulations of the complex between TRH and TRH receptor.

    PubMed

    Laakkonen, L J; Guarnieri, F; Perlman, J H; Gershengorn, M C; Osman, R

    1996-06-18

    Previous mutational and computational studies of the thyrotropin-releasing hormone (TRH) receptor identified several residues in its binding pocket [see accompanying paper, Perlman et al. (1996) Biochemistry 35, 7643-7650]. On the basis of the initial model constructed with standard energy minimization techniques, we have conducted 15 mixed mode Monte Carlo/stochastic dynamics (MC-SD) simulations to allow for extended sampling of the conformational states of the ligand and the receptor in the complex. A simulated annealing protocol was adopted in which the complex was cooled from 600 to 310 K in segments of 30 ps of the MC-SD simulations for each change of 100 K. Analysis of the simulation results demonstrated that the mixed mode MC-SD protocol maintained the desired temperature in the constant temperature simulation segments. The elevated temperature and the repeating simulations allowed for adequate sampling of the torsional space of the complex with successful conservation of the general structure and good helicity of the receptor. For the analysis of the interaction between TRH and the binding pocket, TRH was divided into four groups consisting of pyroGlu, His, ProNH2, and the backbone. The pairwise interaction energies of the four separate portions of TRH with the corresponding residues in the receptor provide a physicochemical basis for the understanding of ligand-receptor complexes. The interaction of pyroGlu with Tyr106 shows a bimodal distribution that represents two populations: one with a H-bond and another without it. Asp195 was shown to compete with pyroGlu for the H-bond to Tyr106. Simulations in which Asp195 was interacting with Arg283, thus removing it from the vicinity of Tyr106, resulted in a stable H-bond to pyroGlu. In all simulations His showed a van der Waals attraction to Tyr282 and a weak electrostatic repulsion from Arg306. The ProNH2 had a strong and frequent H-bonding interaction with Arg306. The backbone carbonyls show a frequent H

  16. Monte Carlo techniques for real-time quantum dynamics

    SciTech Connect

    Dowling, Mark R. . E-mail: dowling@physics.uq.edu.au; Davis, Matthew J.; Drummond, Peter D.; Corney, Joel F.

    2007-01-10

    The stochastic-gauge representation is a method of mapping the equation of motion for the quantum mechanical density operator onto a set of equivalent stochastic differential equations. One of the stochastic variables is termed the 'weight', and its magnitude is related to the importance of the stochastic trajectory. We investigate the use of Monte Carlo algorithms to improve the sampling of the weighted trajectories and thus reduce sampling error in a simulation of quantum dynamics. The method can be applied to calculations in real time, as well as in imaginary time, for which Monte Carlo algorithms are more commonly used. The Monte Carlo algorithms are applicable when the weight is guaranteed to be real, and we demonstrate how to ensure this is the case. Examples are given for the anharmonic oscillator, where large improvements over stochastic sampling are observed.

  17. Stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response

  18. Stochastic cooling

    SciTech Connect

    Bisognano, J.; Leemann, C.

    1982-03-01

    Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker so that an individual particle receives the amplified version of the signal it produced at the pickup. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large number of particles, there remains a residue of the single particle damping which is of practical use in accumulating low phase space density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work directly led to the design and construction of the Antiproton Accumulator at CERN and the beginnings of p-p̄ colliding-beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for an antiproton accumulator for the Tevatron.

  19. An advanced deterministic method for spent fuel criticality safety analysis

    SciTech Connect

    DeHart, M.D.

    1998-01-01

    Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.

  20. Radiation Transport Computation in Stochastic Media: Method and Application

    NASA Astrophysics Data System (ADS)

    Liang, Chao

    Stochastic media, characterized by the stochastic distribution of inclusions in a background medium, are typical radiation transport media encountered in natural or engineering systems. In the community of radiation transport computation, there is always a demand for accurate and efficient methods that can account for the nature of the stochastic distribution. In this dissertation, we focus on methodology development for the radiation transport computation that is applied to neutronic analyses of nuclear reactor designs characterized by the stochastic distribution of particle fuel. Reactor concepts employing a fuel design consisting of a random heterogeneous mixture of fissile material and non-fissile moderator are constantly proposed. Key physical quantities such as core criticality and power distribution, reactivity control design parameters, depletion and fuel burn-up need to be carefully evaluated. In order to meet these practical requirements, we first need to develop accurate and fast computational methods that can effectively account for the stochastic nature of the double heterogeneity configuration. A Monte Carlo based method called Chord Length Sampling (CLS) is considered a promising method for analyzing such TRISO-type fueled reactors. Although the CLS method has been proposed for more than two decades and much research has been conducted to enhance its applicability, further efforts are still needed to address some key research gaps that exist for the CLS method. (1) There is a general lack of thorough investigation of the factors that give rise to the inaccuracy of the CLS method found by many researchers. The accuracy of the CLS method depends on the optical and geometric properties of the system. In some specific scenarios, considerable inaccuracies have been reported. However, no research has provided a clear interpretation of the reasons for the inaccuracy in the reported scenarios. Furthermore, no…
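
    The spirit of Chord Length Sampling can be conveyed in a few lines: instead of building an explicit realization of the mixed geometry, the distance to the next material interface is sampled on the fly from an exponential chord-length distribution. The sketch below, with illustrative mean chords and cross sections (not taken from the dissertation), estimates beam transmission through a 1-D binary Markovian mixture.

      import numpy as np

      rng = np.random.default_rng(2)

      def transmission_cls(L=5.0, lam=(0.8, 0.2), sigma=(0.1, 2.0), histories=20000):
          """Mean transmission exp(-tau) through a binary Markovian slab via CLS."""
          t_sum = 0.0
          for _ in range(histories):
              # starting material sampled by volume fraction lam_m / (lam_0 + lam_1)
              m = 0 if rng.random() < lam[0] / (lam[0] + lam[1]) else 1
              x = tau = 0.0
              while x < L:
                  chord = rng.exponential(lam[m])   # chord sampled on the fly, not stored
                  step = min(chord, L - x)
                  tau += sigma[m] * step            # accumulate optical depth
                  x += step
                  m = 1 - m                         # cross into the other material
              t_sum += np.exp(-tau)
          return t_sum / histories

      print("transmission ~", transmission_cls())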

  1. Stochastic-field cavitation model

    SciTech Connect

    Dumond, J.; Magagnato, F.; Class, A.

    2013-07-15

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method has been proposed, which solves pdf transport with Eulerian fields and eliminates the need to mix Eulerian and Lagrangian techniques or to prescribe pdf assumptions. In the present work, the stochastic-field method is applied for the first time to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  2. Stochastic solution to quantum dynamics

    NASA Technical Reports Server (NTRS)

    John, Sarah; Wilson, John W.

    1994-01-01

    The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends an earlier one-dimensional work. Also, by developing a new algorithm, the application to bound state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.

  3. Stochastic Cooling

    SciTech Connect

    Blaskiewicz, M.

    2011-01-01

    Stochastic Cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and ICE (Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed, and was applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.

  4. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect

    Infanger, G. (Dept. of Operations Research; Technische Univ., Vienna, Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
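
    A minimal sketch of solving a stochastic linear program by sampling, assuming the simplest possible setting (a newsvendor recourse problem and a sample-average approximation rather than the paper's decomposition with importance sampling): the sampled demand scenarios are assembled into one deterministic-equivalent LP, whose first-stage solution should land near the known critical-fractile quantile.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(3)
      c, h, p = 1.0, 0.5, 3.0                            # order, holding, shortage costs
      d = rng.lognormal(mean=3.0, sigma=0.4, size=200)   # sampled demand scenarios
      N = d.size

      # Variables: [x, u_1..u_N (overage), v_1..v_N (shortage)], all nonnegative.
      cost = np.concatenate(([c], np.full(N, h / N), np.full(N, p / N)))
      A = np.zeros((2 * N, 1 + 2 * N))
      b = np.zeros(2 * N)
      for k in range(N):
          A[k, 0], A[k, 1 + k], b[k] = 1.0, -1.0, d[k]                    #  x - u_k <= d_k
          A[N + k, 0], A[N + k, 1 + N + k], b[N + k] = -1.0, -1.0, -d[k]  # -x - v_k <= -d_k
      res = linprog(cost, A_ub=A, b_ub=b, bounds=[(0, None)] * (1 + 2 * N))
      print(res.x[0], np.quantile(d, (p - c) / (p + h)))  # SAA order ~ critical fractile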

  5. Binomial moment equations for stochastic reaction systems.

    PubMed

    Barzel, Baruch; Biham, Ofer

    2011-04-15

    A highly efficient formulation of moment equations for stochastic reaction networks is introduced. It is based on a set of binomial moments that capture the combinatorics of the reaction processes. The resulting set of equations can be easily truncated to include moments up to any desired order. The number of equations is dramatically reduced compared to the master equation. This formulation enables the simulation of complex reaction networks, involving a large number of reactive species much beyond the feasibility limit of any existing method. It provides an equation-based paradigm to the analysis of stochastic networks, complementing the commonly used Monte Carlo simulations. PMID:21568538

  6. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  7. Shell model Monte Carlo methods

    SciTech Connect

    Koonin, S.E.; Dean, D.J.

    1996-10-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of {gamma}-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  8. An advanced deterministic method for spent-fuel criticality safety analysis

    SciTech Connect

    DeHart, M.D.

    1998-09-01

    Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, nonorthogonal configurations of fissile materials, typical of real-world problems. In the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for nonorthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitation of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built on the ESC formalism, is being developed as part of the SCALE code system. This paper demonstrates the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.

  9. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D. Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  10. Efficiency of Health Care Production in Low-Resource Settings: A Monte-Carlo Simulation to Compare the Performance of Data Envelopment Analysis, Stochastic Distance Functions, and an Ensemble Model

    PubMed Central

    Giorgio, Laura Di; Flaxman, Abraham D.; Moses, Mark W.; Fullman, Nancy; Hanlon, Michael; Conner, Ruben O.; Wollum, Alexandra; Murray, Christopher J. L.

    2016-01-01

    Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings. PMID:26812685

  11. Monte Carlo Methods in the Physical Sciences

    SciTech Connect

    Kalos, M H

    2007-06-06

    I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.

  12. Collisionally induced stochastic dynamics of fast ions in solids

    SciTech Connect

    Burgdoerfer, J.

    1989-01-01

    Recent developments in the theory of excited state formation in collisions of fast highly charged ions with solids are reviewed. We discuss a classical transport theory employing Monte-Carlo sampling of solutions of a microscopic Langevin equation. Dynamical screening by the dielectric medium as well as multiple collisions are incorporated through the drift and stochastic forces in the Langevin equation. The close relationship between the extrinsically stochastic dynamics described by the Langevin equation and the intrinsic stochasticity in chaotic nonlinear dynamical systems is stressed. Comparison with experimental data and possible modifications by quantum corrections are discussed. 49 refs., 11 figs.

  13. Primal and Dual Integrated Force Methods Used for Stochastic Analysis

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.

    2005-01-01

    At the NASA Glenn Research Center, the primal and dual integrated force methods are being extended for the stochastic analysis of structures. The stochastic simulation can be used to quantify the consequence of scatter in stress and displacement response because of a specified variation in input parameters such as load (mechanical, thermal, and support settling loads), material properties (strength, modulus, density, etc.), and sizing design variables (depth, thickness, etc.). All the parameters are modeled as random variables with given probability distributions, means, and covariances. The stochastic response is formulated through a quadratic perturbation theory, and it is verified through a Monte Carlo simulation.
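
    The perturbation-versus-Monte-Carlo verification pattern can be shown on a one-parameter toy problem (a hypothetical cantilever tip deflection, not the integrated force method itself): propagate the input scatter through a first-order expansion and check the statistics against direct sampling.

      import numpy as np

      rng = np.random.default_rng(4)
      P, L, I = 1000.0, 2.0, 4e-6                   # load, length, second moment of area
      E_mean, E_cov = 70e9, 0.10                    # random modulus: mean and cov
      sigma_E = E_cov * E_mean

      u = lambda E: P * L**3 / (3 * E * I)          # tip deflection response
      du_dE = -P * L**3 / (3 * E_mean**2 * I)       # analytic sensitivity at the mean

      u_mean_pert, u_std_pert = u(E_mean), abs(du_dE) * sigma_E   # first-order statistics

      u_samples = u(rng.normal(E_mean, sigma_E, 100000))          # Monte Carlo check
      print(u_mean_pert, u_samples.mean())
      print(u_std_pert, u_samples.std())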

  14. A heterogeneous stochastic FEM framework for elliptic PDEs

    SciTech Connect

    Hou, Thomas Y. Liu, Pengfei

    2015-01-15

    We introduce a new concept of sparsity for the stochastic elliptic operator −div(a(x,ω)∇(⋅)), which reflects the compactness of its inverse operator in the stochastic direction and allows for spatially heterogeneous stochastic structure. This new concept of sparsity motivates a heterogeneous stochastic finite element method (HSFEM) framework for linear elliptic equations, which discretizes the equations using the heterogeneous coupling of spatial basis with local stochastic basis to exploit the local stochastic structure of the solution space. We also provide a sampling method to construct the local stochastic basis for this framework using the randomized range finding techniques. The resulting HSFEM involves two stages and suits the multi-query setting: in the offline stage, the local stochastic structure of the solution space is identified; in the online stage, the equation can be efficiently solved for multiple forcing functions. An online error estimation and correction procedure through Monte Carlo sampling is given. Numerical results for several problems with high dimensional stochastic input are presented to demonstrate the efficiency of the HSFEM in the online stage.
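
    The randomized range finding ingredient is compact enough to sketch on a plain matrix (a generic Halko-style range finder, assumed here as a stand-in for the construction of the local stochastic basis): probe the operator with Gaussian test vectors and orthonormalize the result.

      import numpy as np

      def randomized_range(A, rank, oversample=5, seed=5):
          """Orthonormal basis approximating range(A) from Gaussian test vectors."""
          rng = np.random.default_rng(seed)
          Omega = rng.standard_normal((A.shape[1], rank + oversample))
          Q, _ = np.linalg.qr(A @ Omega)          # sample the range, then orthonormalize
          return Q

      rng = np.random.default_rng(0)
      A = np.outer(np.arange(1.0, 101.0), np.arange(1.0, 51.0)) \
          + 1e-3 * rng.standard_normal((100, 50))  # nearly rank-1 test operator
      Q = randomized_range(A, rank=1)
      print(np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A))  # tiny residual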

  15. The theory of hybrid stochastic algorithms

    SciTech Connect

    Kennedy, A.D. (Supercomputer Computations Research Inst.)

    1989-11-21

    These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs.
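
    A minimal sketch of the Hybrid Monte Carlo method the lectures build up to, for the simplest possible "field theory", a single Gaussian degree of freedom: leapfrog Molecular Dynamics trajectories followed by a Metropolis accept/reject on the energy error. Step size and trajectory length are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)

      def grad_U(x):                    # target exp(-U) with U(x) = x**2 / 2
          return x

      def hmc_step(x, eps=0.2, n_leap=10):
          p = rng.normal()                              # refresh the momentum
          x_new, p_new = x, p - 0.5 * eps * grad_U(x)   # initial half momentum step
          for i in range(n_leap):
              x_new = x_new + eps * p_new               # leapfrog position update
              if i < n_leap - 1:
                  p_new = p_new - eps * grad_U(x_new)   # leapfrog momentum update
          p_new = p_new - 0.5 * eps * grad_U(x_new)     # final half momentum step
          dH = 0.5 * (x_new**2 + p_new**2) - 0.5 * (x**2 + p**2)
          return x_new if rng.random() < np.exp(-dH) else x   # Metropolis test

      x, samples = 0.0, []
      for _ in range(5000):
          x = hmc_step(x)
          samples.append(x)
      print(np.mean(samples), np.var(samples))          # should approach 0 and 1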

  16. Stochastic cooling in RHIC

    SciTech Connect

    Brennan,J.M.; Blaskiewicz, M. M.; Severino, F.

    2009-05-04

    After the success of longitudinal stochastic cooling of a bunched heavy ion beam in RHIC, transverse stochastic cooling in the vertical plane of the Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.

  17. Stochastic volatility models and Kelvin waves

    NASA Astrophysics Data System (ADS)

    Lipton, Alex; Sepp, Artur

    2008-08-01

    We use stochastic volatility models to describe the evolution of an asset price, its instantaneous volatility and its realized volatility. In particular, we concentrate on the Stein and Stein model (SSM) (1991) for the stochastic asset volatility and the Heston model (HM) (1993) for the stochastic asset variance. By construction, the volatility is not sign definite in SSM and is non-negative in HM. It is well known that both models produce closed-form expressions for the prices of vanilla options via the Lewis-Lipton formula. However, the numerical pricing of exotic options by means of the finite difference and Monte Carlo methods is much more complex for HM than for SSM. Until now, this complexity was considered to be an acceptable price to pay for ensuring that the asset volatility is non-negative. We argue that having negative stochastic volatility is a psychological rather than financial or mathematical problem, and advocate using SSM rather than HM in most applications. We extend SSM by adding volatility jumps and obtain a closed-form expression for the density of the asset price and its realized volatility. We also show that the current method of choice for solving pricing problems with stochastic volatility (via the affine ansatz for the Fourier-transformed density function) can be traced back to the Kelvin method designed in the 19th century for studying wave motion problems arising in fluid dynamics.
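
    A sketch of why SSM-type models are straightforward to simulate: a log-Euler Monte Carlo pricer whose Ornstein-Uhlenbeck volatility is simply allowed to change sign. All parameters and the zero volatility-asset correlation are illustrative assumptions, not the paper's calibration.

      import numpy as np

      rng = np.random.default_rng(7)
      S0, v0, kappa, theta, alpha = 100.0, 0.2, 2.0, 0.2, 0.3
      T, steps, paths, K = 1.0, 252, 50000, 100.0
      dt = T / steps

      S, v = np.full(paths, S0), np.full(paths, v0)
      for _ in range(steps):
          dW1 = rng.normal(0.0, np.sqrt(dt), paths)
          dW2 = rng.normal(0.0, np.sqrt(dt), paths)      # uncorrelated for simplicity
          S *= np.exp(-0.5 * v**2 * dt + v * dW2)        # log-Euler asset step
          v += kappa * (theta - v) * dt + alpha * dW1    # OU volatility: sign unconstrained
      print("ATM call ~", np.maximum(S - K, 0.0).mean()) # r = 0, so no discounting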

  18. The Analysis of the Patterns of Radiation-Induced DNA Damage Foci by a Stochastic Monte Carlo Model of DNA Double Strand Breaks Induction by Heavy Ions and Image Segmentation Software

    NASA Technical Reports Server (NTRS)

    Ponomarev, Artem; Cucinotta, F.

    2011-01-01

    To create a generalized mechanistic model of DNA damage in human cells that will generate analytical and image data corresponding to experimentally observed DNA damage foci and will help to improve the experimental foci yields by simulating spatial foci patterns and resolving problems with quantitative image analysis. Material and Methods: The analysis of patterns of RIFs (radiation-induced foci) produced by low- and high-LET (linear energy transfer) radiation was conducted by using a Monte Carlo model that combines the heavy ion track structure with characteristics of the human genome on the level of chromosomes. The foci patterns were also simulated in the maximum projection plane for flat nuclei. Some data analysis was done with the help of image segmentation software that identifies individual classes of RIFs and colocalized RIFs, which is of importance to some experimental assays that assign DNA damage a dual phosphorescent signal. Results: The model predicts the spatial and genomic distributions of DNA DSBs (double strand breaks) and associated RIFs in a human cell nucleus for a particular dose of either low- or high-LET radiation. We used the model to do analyses for different irradiation scenarios. In the beam-parallel-to-the-disk-of-a-flattened-nucleus scenario we found that the foci appeared to be merged due to their high density, while, in the perpendicular-beam scenario, the foci appeared as one bright spot per hit. The statistics and spatial distribution of regions of densely arranged foci, termed DNA foci chains, were predicted numerically using this model. Another analysis was done to evaluate the number of ion hits per nucleus, which were visible from streaks of closely located foci. In another analysis, our image segmentation software determined foci yields directly from images with single-class or colocalized foci. Conclusions: We showed that DSB clustering needs to be taken into account to determine the true DNA damage foci yield, which helps to…

  19. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  20. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  1. Algebraic, geometric, and stochastic aspects of genetic operators

    NASA Technical Reports Server (NTRS)

    Foo, N. Y.; Bosworth, J. L.

    1972-01-01

    Genetic algorithms for function optimization employ genetic operators patterned after those observed in search strategies employed in natural adaptation. Two of these operators, crossover and inversion, are interpreted in terms of their algebraic and geometric properties. Stochastic models of the operators are developed which are employed in Monte Carlo simulations of their behavior.

  2. Stochastic Collocation Method for Three-dimensional Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Shi, L.; Zhang, D.

    2008-12-01

    The stochastic collocation method (SCM) has recently gained extensive attention in several disciplines. The numerical implementation of SCM only requires repetitive runs of an existing deterministic solver or code, as in Monte Carlo simulation, but it is generally much more efficient than the Monte Carlo method. In this paper, the stochastic collocation method is used to efficiently quantify the uncertainty of three-dimensional groundwater flow. We introduce the basic principles of common collocation methods, i.e., the tensor product collocation method (TPCM), Smolyak collocation method (SmCM), Stroud-2 collocation method (StCM), and probability collocation method (PCM). Their accuracy, computational cost, and limitations are discussed. Illustrative examples reveal that the seamless combination of collocation techniques and existing simulators makes it possible for the new framework to efficiently handle complex stochastic problems.
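
    A one-dimensional sketch of the collocation idea, assuming a stand-in "solver" y = exp(k) with a Gaussian input: a handful of deterministic runs at Gauss-Hermite nodes reproduces the moments that plain Monte Carlo needs many runs to estimate.

      import numpy as np

      rng = np.random.default_rng(9)
      mu_k, sigma_k = 0.0, 0.5
      solver = np.exp                                 # stand-in deterministic code

      xi, w = np.polynomial.hermite_e.hermegauss(5)   # 5 collocation nodes and weights
      w = w / w.sum()                                 # normalize to Gaussian expectation
      y = solver(mu_k + sigma_k * xi)                 # only 5 deterministic runs
      mean_pcm = w @ y
      var_pcm = w @ (y - mean_pcm)**2

      y_mc = solver(mu_k + sigma_k * rng.normal(size=100000))   # 100000 runs
      print(mean_pcm, y_mc.mean())                    # both ~ exp(sigma_k**2 / 2)
      print(var_pcm, y_mc.var())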

  3. Digital simulation and modeling of nonlinear stochastic systems

    SciTech Connect

    Richardson, J M; Rowland, J R

    1981-04-01

    Digitally generated solutions of nonlinear stochastic systems are not unique but depend critically on the numerical integration algorithm used. Some theoretical and practical implications of this dependence are examined. The Ito-Stratonovich controversy concerning the solution of nonlinear stochastic systems is shown to be more than a theoretical debate on maintaining Markov properties as opposed to utilizing the computational rules of ordinary calculus. The theoretical arguments give rise to practical considerations in the formation and solution of discrete models from continuous stochastic systems. Well-known numerical integration algorithms are shown not only to provide different solutions for the same stochastic system but also to correspond to different stochastic integral definitions. These correspondences are proved by considering first and second moments of solutions that result from different integration algorithms and then comparing the moments to those arising from various stochastic integral definitions. This algorithm-dependence of solutions is in sharp contrast to the deterministic and linear stochastic cases in which unique solutions are determined by any convergent numerical algorithm. Consequences of the relationship between stochastic system solutions and simulation procedures are presented for a nonlinear filtering example. Monte Carlo simulations and statistical tests are applied to the example to illustrate the determining role which computational procedures play in generating solutions.
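
    The algorithm dependence is easy to reproduce for dX = aX dt + bX dW: an Euler-Maruyama update converges to the Ito solution, a Heun (trapezoidal) update to the Stratonovich one, and their means separate by the factor exp(b^2 t / 2). The sketch below is a minimal demonstration with illustrative coefficients.

      import numpy as np

      rng = np.random.default_rng(10)
      a, b, T, n, paths = 0.5, 1.0, 1.0, 500, 100000
      dt = T / n
      X_em = np.ones(paths)                  # Euler-Maruyama  -> Ito solution
      X_he = np.ones(paths)                  # Heun (trapezoid) -> Stratonovich solution
      for _ in range(n):
          dW = rng.normal(0.0, np.sqrt(dt), paths)
          X_em = X_em + a * X_em * dt + b * X_em * dW
          Xp = X_he + a * X_he * dt + b * X_he * dW                       # predictor
          X_he = X_he + 0.5 * a * (X_he + Xp) * dt + 0.5 * b * (X_he + Xp) * dW
      print(X_em.mean(), np.exp(a * T))                  # Ito mean:         e^{aT}
      print(X_he.mean(), np.exp((a + 0.5 * b**2) * T))   # Stratonovich mean: e^{(a+b^2/2)T}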

  4. Sensitivity Analysis and Stochastic Simulations of Non-equilibrium Plasma Flow

    SciTech Connect

    Lin, Guang; Karniadakis, George E.

    2009-11-05

    We study parametric uncertainties involved in plasma flows and apply stochastic sensitivity analysis to rank the importance of all inputs to guide large-scale stochastic simulations. Specifically, we employ different gradient-based sensitivity methods, namely Morris, the multi-element probabilistic collocation method (ME-PCM) on sparse grids, Quasi-Monte Carlo, and Monte Carlo methods. These approaches go beyond the standard "One-At-a-Time" sensitivity analysis and provide a measure of the nonlinear interaction effects for the uncertain inputs. The objective is to perform systematic stochastic simulations of plasma flows treating only the inputs with the highest sensitivity index as stochastic processes, hence substantially reducing the computational cost. Two plasma flow examples are presented to demonstrate the capability and efficiency of the stochastic sensitivity analysis. The first one is a two-fluid model in a shock tube while the second one is a one-fluid/two-temperature model in flow past a cylinder.

  5. Fluctuations as stochastic deformation.

    PubMed

    Kazinski, P O

    2008-04-01

    A notion of stochastic deformation is introduced and the corresponding algebraic deformation procedure is developed. This procedure is analogous to the deformation of an algebra of observables like deformation quantization, but for an imaginary deformation parameter (the Planck constant). This method is demonstrated on diverse relativistic and nonrelativistic models with finite and infinite degrees of freedom. It is shown that under stochastic deformation the model of a nonrelativistic particle interacting with the electromagnetic field on a curved background passes into the stochastic model described by the Fokker-Planck equation with the diffusion tensor being the inverse metric tensor. The first stochastic correction to the Newton equations for this system is found. The Klein-Kramers equation is also derived as the stochastic deformation of a certain classical model. Relativistic generalizations of the Fokker-Planck and Klein-Kramers equations are obtained by applying the procedure of stochastic deformation to appropriate relativistic classical models. The analog of the Fokker-Planck equation associated with the stochastic Lorentz-Dirac equation is derived too. The stochastic deformation of the models of a free scalar field and an electromagnetic field is investigated. It turns out that in the latter case the obtained stochastic model describes a fluctuating electromagnetic field in a transparent medium. PMID:18517590

  6. Fluctuations as stochastic deformation

    NASA Astrophysics Data System (ADS)

    Kazinski, P. O.

    2008-04-01

    A notion of stochastic deformation is introduced and the corresponding algebraic deformation procedure is developed. This procedure is analogous to the deformation of an algebra of observables like deformation quantization, but for an imaginary deformation parameter (the Planck constant). This method is demonstrated on diverse relativistic and nonrelativistic models with finite and infinite degrees of freedom. It is shown that under stochastic deformation the model of a nonrelativistic particle interacting with the electromagnetic field on a curved background passes into the stochastic model described by the Fokker-Planck equation with the diffusion tensor being the inverse metric tensor. The first stochastic correction to the Newton equations for this system is found. The Klein-Kramers equation is also derived as the stochastic deformation of a certain classical model. Relativistic generalizations of the Fokker-Planck and Klein-Kramers equations are obtained by applying the procedure of stochastic deformation to appropriate relativistic classical models. The analog of the Fokker-Planck equation associated with the stochastic Lorentz-Dirac equation is derived too. The stochastic deformation of the models of a free scalar field and an electromagnetic field is investigated. It turns out that in the latter case the obtained stochastic model describes a fluctuating electromagnetic field in a transparent medium.

  7. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGESBeta

    Bakosi, J.; Ristorcelli, J. R.

    2013-01-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
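
    The univariate statement can be checked numerically with a Jacobi/Wright-Fisher-type diffusion, dX = a(theta - X) dt + s*sqrt(X(1-X)) dW, whose stationary density is Beta(2a*theta/s^2, 2a(1-theta)/s^2). The Euler-Maruyama sketch below uses illustrative coefficients and is not the paper's scheme.

      import numpy as np

      rng = np.random.default_rng(11)
      a, theta, s = 2.0, 0.3, 0.8
      dt, steps, walkers = 1e-3, 10000, 5000

      X = np.full(walkers, 0.5)
      for _ in range(steps):
          dW = rng.normal(0.0, np.sqrt(dt), walkers)
          X += a * (theta - X) * dt + s * np.sqrt(np.clip(X * (1 - X), 0.0, None)) * dW
          X = np.clip(X, 0.0, 1.0)        # guard Euler overshoot at the boundaries
      alpha, beta = 2 * a * theta / s**2, 2 * a * (1 - theta) / s**2
      print(X.mean(), alpha / (alpha + beta))   # both ~ theta
      print(X.var(), alpha * beta / ((alpha + beta)**2 * (alpha + beta + 1)))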

  8. Stochastic Simulation Tool for Aerospace Structural Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.; Moore, David F.

    2006-01-01

    Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly being able to identify design input variables whose variability causes the most influence in response output parameters.

  9. Monte Carlo Example Programs

    Energy Science and Technology Software Center (ESTSC)

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo Mathematical technique for calculating the ground state energy of the hydrogen atom.
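
    In the same spirit as VARHATOM (a sketch under simple assumptions, not the archived program): variational Monte Carlo for hydrogen with trial wavefunction psi = exp(-alpha*r), Metropolis-sampling |psi|^2 and averaging the local energy E_L = -alpha^2/2 + (alpha - 1)/r in atomic units, which equals exactly -1/2 at alpha = 1.

      import numpy as np

      rng = np.random.default_rng(12)

      def vmc_energy(alpha, walkers=2000, steps=2000, delta=0.5):
          """Ground-state energy estimate for trial psi = exp(-alpha r), in hartree."""
          R = rng.normal(0.0, 1.0, (walkers, 3))
          for _ in range(steps):                       # Metropolis walk on |psi|^2
              trial = R + rng.uniform(-delta, delta, R.shape)
              r_old = np.linalg.norm(R, axis=1)
              r_new = np.linalg.norm(trial, axis=1)
              accept = rng.random(walkers) < np.exp(-2 * alpha * (r_new - r_old))
              R[accept] = trial[accept]
          r = np.linalg.norm(R, axis=1)
          return np.mean(-0.5 * alpha**2 + (alpha - 1.0) / r)

      for alpha in (0.8, 1.0, 1.2):
          print(alpha, vmc_energy(alpha))              # minimum -0.5 at alpha = 1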

  10. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  11. Stochastic robustness of linear control systems

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.; Ryan, Laura E.

    1990-01-01

    A simple numerical procedure for estimating the stochastic robustness of a linear, time-invariant system is described. Monte Carlo evaluation of the system's eigenvalues allows the probability of instability and the related stochastic root locus to be estimated. This definition of robustness is an alternative to existing deterministic definitions that address both structured and unstructured parameter variations directly. This analysis approach treats not only Gaussian parameter uncertainties but non-Gaussian cases, including uncertain-but-bounded variations. Trivial extensions of the procedure admit alternate discriminants to be considered. Thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions also can be estimated. Results are particularly amenable to graphical presentation.
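
    The procedure reduces to a few lines for a toy second-order closed loop (hypothetical dynamics chosen to mix a Gaussian and an uncertain-but-bounded parameter, as in the abstract): sample the parameters, form the closed-loop matrix, and count eigenvalues crossing into the right half plane.

      import numpy as np

      rng = np.random.default_rng(13)

      def p_instability(trials=20000):
          unstable = 0
          for _ in range(trials):
              k = rng.normal(2.0, 1.0)              # Gaussian uncertain stiffness/gain
              c = rng.uniform(0.1, 0.6)             # uncertain-but-bounded damping
              A = np.array([[0.0, 1.0],
                            [-k, -c]])              # closed-loop system matrix
              if np.linalg.eigvals(A).real.max() >= 0.0:
                  unstable += 1
          return unstable / trials

      print("P(instability) ~", p_instability())    # ~0.023 here: the mass of k <= 0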

  12. A Stochastic Employment Problem

    ERIC Educational Resources Information Center

    Wu, Teng

    2013-01-01

    The Stochastic Employment Problem (SEP) is a variation of the Stochastic Assignment Problem which analyzes the scenario in which one assigns balls into boxes. Balls arrive sequentially, each with a binary vector X = (X_1, X_2, ..., X_n) attached, with the interpretation being that if X_i = 1 the ball…

  13. The isolation limits of stochastic vibration

    NASA Technical Reports Server (NTRS)

    Knospe, C. R.; Allaire, P. E.

    1993-01-01

    The vibration isolation problem is formulated as a 1D kinematic problem. The geometry of the stochastic wall trajectories arising from the stroke constraint is defined in terms of their significant extrema. An optimal control solution for the minimum acceleration return path determines a lower bound on platform mean square acceleration. This bound is expressed in terms of the probability density function on the significant maxima and the conditional fourth moment of the first passage time inverse. The first of these is found analytically while the second is found using a Monte Carlo simulation. The rms acceleration lower bound as a function of available space is then determined through numerical quadrature.

  14. Scattering of light by stochastically rough particles

    NASA Technical Reports Server (NTRS)

    Peltoniemi, Jouni I.; Lumme, Kari; Muinonen, Karri; Irvine, William M.

    1989-01-01

    The single particle phase function and the linear polarization for large stochastically deformed spheres have been calculated by Monte Carlo simulation using the geometrical optics approximation. The radius vector of a particle is assumed to obey a bivariate lognormal distribution with three free parameters: mean radius, its standard deviation and the coherence length of the autocorrelation function. All reflections/refractions which include sufficient energy have been included. Real and imaginary parts of the refractive index can be varied without any restrictions. Results and comparisons with some earlier less general theories are presented. Applications of this theory to the photometric properties of atmosphereless bodies and interplanetary dust are discussed.

  15. A Monte Carlo approach to water management

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2012-04-01

    Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that the obtained results may be irrelevant to real-world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization. The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs…

  16. Solution of the stochastic control problem in unbounded domains.

    NASA Technical Reports Server (NTRS)

    Robinson, P.; Moore, J.

    1973-01-01

    Bellman's dynamic programming equation for the optimal index and control law for stochastic control problems is a parabolic or elliptic partial differential equation frequently defined in an unbounded domain. Existing methods of solution require bounded domain approximations, the application of singular perturbation techniques or Monte Carlo simulation procedures. In this paper, using the fact that Poisson impulse noise tends to a Gaussian process under certain limiting conditions, a method which achieves an arbitrarily good approximate solution to the stochastic control problem is given. The method uses the two iterative techniques of successive approximation and quasi-linearization and is inherently more efficient than existing methods of solution.

  17. Interaction picture density matrix quantum Monte Carlo.

    PubMed

    Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible. PMID:26233116

  18. Stochastic Processes in Electrochemistry.

    PubMed

    Singh, Pradyumna S; Lemay, Serge G

    2016-05-17

    Stochastic behavior becomes an increasingly dominant characteristic of electrochemical systems as we probe them on the smallest scales. Advances in the tools and techniques of nanoelectrochemistry dictate that stochastic phenomena will become more widely manifest in the future. In this Perspective, we outline the conceptual tools that are required to analyze and understand this behavior. We draw on examples from several specific electrochemical systems where important information is encoded in, and can be derived from, apparently random signals. This Perspective attempts to serve as an accessible introduction to understanding stochastic phenomena in electrochemical systems and outlines why they cannot be understood with conventional macroscopic descriptions. PMID:27120701

  19. Quantum Stochastic Processes

    SciTech Connect

    Spring, William Joseph

    2009-04-13

    We consider quantum analogues of n-parameter stochastic processes, associated integrals and martingale properties extending classical results obtained in [1, 2, 3], and quantum results in [4, 5, 6, 7, 8, 9, 10].

  20. Dynamics of Double Stochastic Operators

    NASA Astrophysics Data System (ADS)

    Saburov, Mansoor

    2016-03-01

    A double stochastic operator is a generalization of a double stochastic matrix. In this paper, we study the dynamics of double stochastic operators. We give a criterion for the regularity of a double stochastic operator in terms of the absence of periodic points. We provide some examples to show that, in general, a trajectory of a double stochastic operator may converge to any interior point of the simplex.

  1. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

  2. A Stochastic Cratering Model for Asteroid Surfaces

    NASA Technical Reports Server (NTRS)

    Richardson, J. E.; Melosh, H. J.; Greenberg, R. J.

    2005-01-01

    The observed cratering records on asteroid surfaces (four so far: Gaspra, Ida, Mathilde, and Eros [1-4]) provide us with important clues to their past bombardment histories. Previous efforts toward interpreting these records have led to two basic modeling styles for reproducing the statistics of the observed crater populations. The first, and most direct, method is to use Monte Carlo techniques [5] to stochastically populate a matrix-model test surface with craters as a function of time [6,7]. The second method is to use a more general, parameterized approach to duplicate the statistics of the observed crater population [8,9]. In both methods, several factors must be included beyond the simple superposing of circular features: (1) crater erosion by subsequent impacts, (2) infilling of craters by impact ejecta, and (3) crater degradation and erasure due to the seismic effects of subsequent impacts. Here we present an updated Monte Carlo (stochastic) modeling approach, designed specifically with small- to medium-sized asteroids in mind.

  3. Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming

    SciTech Connect

    Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-05-23

    This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6 percent.

  4. Applying the stochastic Galerkin method to epidemic models with uncertainty in the parameters.

    PubMed

    Harman, David B; Johnston, Peter R

    2016-07-01

    Parameters in modelling are not always known with absolute certainty. In epidemic modelling, this is true of many of the parameters. It is important for this uncertainty to be included in any model. This paper looks at using the stochastic Galerkin method to solve an SIR model with uncertainty in the parameters. The results obtained from the stochastic Galerkin method are then compared with results obtained through Monte Carlo sampling. The computational cost of each method is also compared. It is shown that the stochastic Galerkin method produces good results, even at low order expansions, that are much less computationally expensive than Monte Carlo sampling. It is also shown that the stochastic Galerkin method does not always converge and this non-convergence is explored. PMID:27091743

  5. Renormalization of stochastic lattice models: basic formulation.

    PubMed

    Haselwandter, Christoph A; Vvedensky, Dimitri D

    2007-10-01

    We describe a general method for the multiscale analysis of stochastic lattice models. Beginning with a lattice Langevin formulation of site fluctuations, we derive stochastic partial differential equations by regularizing the transition rules of the model. Subsequent coarse graining is accomplished by calculating renormalization-group (RG) trajectories from initial conditions determined by the regularized atomistic models. The RG trajectories correspond to hierarchies of continuum equations describing lattice models over expanding length and time scales. These continuum equations retain a quantitative connection over different scales, as well as to the underlying atomistic dynamics. This provides a systematic method for the derivation of continuum equations from the transition rules of lattice models for any length and time scales. As an illustration we consider the one-dimensional (1D) Wolf-Villain (WV) model [Europhys. Lett. 13, 389 (1990)]. The RG analysis of this model, which we develop in detail, is generic and can be applied to a wide range of conservative lattice models. The RG trajectory of the 1D WV model shows a complex crossover sequence of linear and nonlinear stochastic differential equations, which is in excellent agreement with kinetic Monte Carlo simulations of this model. We conclude by discussing possible applications of the multiscale method described here to other nonequilibrium systems. PMID:17994944

  6. Monte Carlo Form-Finding Method for Tensegrity Structures

    NASA Astrophysics Data System (ADS)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.

  7. An adaptive high-dimensional stochastic model representation technique for the solution of stochastic partial differential equations

    SciTech Connect

    Ma Xiang; Zabaras, Nicholas

    2010-05-20

    A computational methodology is developed to address the solution of high-dimensional stochastic problems. It utilizes the high-dimensional model representation (HDMR) technique in the stochastic space to represent the model output as a finite hierarchical correlated function expansion in terms of the stochastic inputs, starting from lower-order and proceeding to higher-order component functions. HDMR is efficient at capturing the high-dimensional input-output relationship such that the behavior of many physical systems can be modeled to good accuracy using only the first few lower-order terms. An adaptive version of HDMR is also developed to automatically detect the important dimensions and construct higher-order terms using only the important dimensions. The newly developed adaptive sparse grid collocation (ASGC) method is incorporated into HDMR to solve the resulting sub-problems. By integrating HDMR and ASGC, it is computationally possible to construct a low-dimensional stochastic reduced-order model of the high-dimensional stochastic problem and easily perform various statistical analyses on the output. Several numerical examples involving elementary mathematical functions and fluid mechanics problems are considered to illustrate the proposed method. The cases examined show that the method provides accurate results for stochastic dimensionality as high as 500 even with large input variability. The efficiency of the proposed method is examined by comparing with Monte Carlo (MC) simulation.

  8. A probabilistic lower bound for two-stage stochastic programs

    SciTech Connect

    Dantzig, G.B.; Infanger, G.

    1995-11-01

    In the framework of Benders decomposition for two-stage stochastic linear programs, the authors estimate the coefficients and right-hand sides of the cutting planes using Monte Carlo sampling. The authors present a new theory for estimating a lower bound for the optimal objective value and they compare (using various test problems whose true optimal value is known) the predicted versus the observed rate of coverage of the optimal objective by the lower bound confidence interval.

  9. MonteCUBES

    SciTech Connect

    Blennow, Mattias

    2010-03-30

    We introduce the software package MonteCUBES, which is designed to easily and effectively perform Markov Chain Monte Carlo simulations for analyzing neutrino oscillation experiments. We discuss the methods used in the software as well as why we believe that it is particularly useful for simulating new physics effects.

  10. Stochastic modeling of driver behavior by Langevin equations

    NASA Astrophysics Data System (ADS)

    Langner, Michael; Peinke, Joachim

    2015-06-01

    A procedure based on stochastic Langevin equations is presented, showing how a stochastic model of driver behavior can be estimated directly from given data. The Langevin analysis allows the separation of a given data set into a stochastic diffusion field and a deterministic drift field. From the drift field a potential can be derived. In particular, the method is applied here to driving data from a simulator. We overcome typical problems like varying sampling rates, low noise levels, low data amounts, inefficient coordinate systems, and non-stationary situations. From the estimated drift and diffusion vector fields derived from the data, we show different ways to set up Monte-Carlo simulations of the driver behavior.
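
    The estimation step can be sketched directly (a generic Langevin reconstruction from binned conditional moments, applied to synthetic data rather than the paper's simulator data): D1(x) = <dX | X=x>/dt recovers the drift and D2(x) = <dX^2 | X=x>/(2 dt) the diffusion, here checked against a known Ornstein-Uhlenbeck process.

      import numpy as np

      rng = np.random.default_rng(14)
      dt, n = 0.01, 400000
      x = np.empty(n); x[0] = 0.0
      for i in range(n - 1):                  # synthetic OU "driving data"
          x[i+1] = x[i] - 1.5 * x[i] * dt + 0.4 * np.sqrt(dt) * rng.normal()

      dx = np.diff(x)
      bins = np.linspace(-0.6, 0.6, 13)
      idx = np.digitize(x[:-1], bins)
      for b in range(1, len(bins)):
          sel = idx == b
          if sel.sum() > 500:                 # skip poorly populated bins
              xc = 0.5 * (bins[b-1] + bins[b])
              D1 = dx[sel].mean() / dt        # should be ~ -1.5 * xc
              D2 = (dx[sel]**2).mean() / (2 * dt)   # should be ~ 0.4**2 / 2 = 0.08
              print(f"x={xc:+.2f}  D1={D1:+.3f}  D2={D2:.3f}")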

  11. Clustering of extreme events in typical stochastic models

    NASA Astrophysics Data System (ADS)

    Mystegniotis, Antonios; Vasilaki, Vasileia; Pappa, Ioanna; Curceac, Stelian; Saltouridou, Despina; Efthimiou, Nikos; Papatsoutsos, Giannis; Papalexiou, Simon Michael; Koutsoyiannis, Demetris

    2013-04-01

    We study the clustering properties of extreme events as produced by typical stochastic models and compare the results with those of observed data. Specifically, the stochastic models that we use are the AR(1), AR(2), ARMA(1,1), as well as the Hurst-Kolmogorov model. In terms of data, we use instrumental and proxy hydroclimatic time series. To quantify clustering we study the multi-scale properties of each process, in particular the variation of standard deviation with time scale, as well as the variation with time scale of the frequencies of similar events (e.g., those exceeding a certain threshold). To calculate these properties we use either analytical methods, when possible, or Monte Carlo simulation. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
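
    One of the multi-scale properties used here, the variation of standard deviation with aggregation scale, takes only a few lines to estimate by simulation. The AR(1) example below (illustrative coefficient) shows the short-memory behavior, the standard deviation of the aggregated mean falling like k^(-1/2) at large scales, which a Hurst-Kolmogorov process with H > 0.5 would violate.

      import numpy as np

      rng = np.random.default_rng(15)
      phi, n = 0.8, 2**18
      x = np.empty(n); x[0] = 0.0
      for i in range(n - 1):                   # AR(1): x_t = phi * x_{t-1} + eps_t
          x[i+1] = phi * x[i] + rng.normal()

      for k in (1, 4, 16, 64, 256):            # aggregate over windows of size k
          agg = x[:n - n % k].reshape(-1, k).mean(axis=1)
          print(k, agg.std())                  # log-log slope tends to -1/2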

  12. Spatial Correlations in Monte Carlo Criticality Simulations

    NASA Astrophysics Data System (ADS)

    Dumonteil, E.; Malvagi, F.; Zoia, A.; Mazzolo, A.; Artusio, D.; Dieudonné, C.; De Mulatier, C.

    2014-06-01

    Temporal correlations arising in Monte Carlo criticality codes have focused the attention of both developers and practitioners for a long time. These correlations affect the evaluation of tallies in loosely coupled systems, where the system's typical size is very large compared to the diffusion/absorption length scale of the neutrons. These time correlations are closely related to spatial correlations, both variables being linked by the transport equation. Therefore this paper addresses the question of diagnosing spatial correlations in Monte Carlo criticality simulations. To that end, we propose a spatial correlation function well suited to Monte Carlo simulations and show its use while simulating a fuel pin-cell. The results will be discussed, modeled and interpreted using the tools of branching processes of statistical mechanics. A mechanism called "neutron clustering", which affects simulations, will be discussed in this framework.

  13. Stochastic Thermal Convection

    NASA Astrophysics Data System (ADS)

    Venturi, Daniele

    2005-11-01

    Stochastic bifurcations and stability of natural convective flows in 2d and 3d enclosures are investigated by the multi-element generalized polynomial chaos (ME-gPC) method (Xiu and Karniadakis, SISC, vol. 24, 2002). The Boussinesq approximation for the variation of physical properties is assumed. The stability analysis is first carried out in a deterministic sense, to determine steady state solutions and primary and secondary bifurcations. Stochastic simulations are then conducted around discontinuities and transitional regimes. It is found that these highly non-linear phenomena can be efficiently captured by the ME-gPC method. Finally, the main findings of the stochastic analysis and their implications for heat transfer will be discussed.

  14. Stochastic Gauss equations

    NASA Astrophysics Data System (ADS)

    Pierret, Frédéric

    2016-02-01

    We derived the equations of Celestial Mechanics governing the variation of the orbital elements under a stochastic perturbation, thereby generalizing the classical Gauss equations. Explicit formulas are given for the semimajor axis, the eccentricity, the inclination, the longitude of the ascending node, the pericenter angle, and the mean anomaly, which are expressed in terms of the angular momentum vector H per unit of mass and the energy E per unit of mass. Together, these formulas are called the stochastic Gauss equations, and they are illustrated numerically on an example from satellite dynamics.

  15. Stochastic modeling of rainfall

    SciTech Connect

    Guttorp, P.

    1996-12-31

    We review several approaches in the literature for stochastic modeling of rainfall, and discuss some of their advantages and disadvantages. While stochastic precipitation models have been around at least since the 1850s, the last two decades have seen an increased development of models based (more or less) on the physical processes involved in precipitation. There are interesting questions of scale and measurement that pertain to these modeling efforts. Recent modeling efforts aim at including meteorological variables, and may be useful for regional down-scaling of general circulation models.

  16. STOCHASTIC COOLING FOR BUNCHED BEAMS.

    SciTech Connect

    BLASKIEWICZ, M.

    2005-05-16

    Problems associated with bunched beam stochastic cooling are reviewed. A longitudinal stochastic cooling system for RHIC is under construction and has been partially commissioned. The state of the system and future plans are discussed.

  17. MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D

    2013-01-01

    Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents a general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single-level approximation. The numerical results will highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.
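
    A schematic of the multilevel telescoping estimator on which the variance-reduction argument rests, written for a toy scalar quantity rather than an SPDE; the level "discretizations" here are stand-ins with an artificial bias, so this sketches the mechanism only.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def estimator(level, n):
        """Monte Carlo mean of the level-l correction P_l - P_{l-1}.
        P_l stands in for a discretized quantity approximating E[X^2],
        with a bias that shrinks like 2^{-l} as the level is refined."""
        x = rng.standard_normal(n)
        p_l = x ** 2 + 2.0 ** (-level)           # stand-in for a fine solve
        if level == 0:
            return p_l.mean()
        p_lm1 = x ** 2 + 2.0 ** (-(level - 1))   # same samples, coarse solve
        return (p_l - p_lm1).mean()              # coupled => small variance

    # telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}];
    # coarse levels get many cheap samples, fine levels get few expensive ones
    samples = [100_000, 20_000, 4_000, 800]
    ml_estimate = sum(estimator(l, n) for l, n in enumerate(samples))
    print(ml_estimate)   # approaches E[X^2] = 1 as levels are added
    ```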

  18. Stochastic entrainment of a stochastic oscillator.

    PubMed

    Wang, Guanyu; Peskin, Charles S

    2015-11-01

    In this work, we consider a stochastic oscillator described by a discrete-state continuous-time Markov chain, in which the states are arranged in a circle, and there is a constant probability per unit time of jumping from one state to the next in a specified direction around the circle. At each of a sequence of equally spaced times, the oscillator has a specified probability of being reset to a particular state. The focus of this work is the entrainment of the oscillator by this periodic but stochastic stimulus. We consider a distinguished limit, in which (i) the number of states of the oscillator approaches infinity, as does the probability per unit time of jumping from one state to the next, so that the natural mean period of the oscillator remains constant, (ii) the resetting probability approaches zero, and (iii) the period of the resetting signal approaches a multiple, by a ratio of small integers, of the natural mean period of the oscillator. In this distinguished limit, we use analytic and numerical methods to study the extent to which entrainment occurs. PMID:26651734
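
    A crude simulation of the setup described above (a circular N-state chain with a constant jump rate and a periodic stochastic reset) can probe entrainment numerically; the coherence measure and all parameter values below are assumptions for illustration, not the authors' distinguished-limit analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def entrainment_coherence(n_states=100, rate_scale=1.0, p_reset=0.05,
                              t_signal=1.0, resets=2000, reset_state=0):
        """Circular N-state Markov chain with jump rate N/T_nat (so the mean
        period stays T_nat = 1) and a periodic stochastic reset. Returns the
        circular phase coherence sampled once per signal period."""
        rate = n_states * rate_scale          # jumps per unit time
        state, phases = 0, []
        for _ in range(resets):
            # jumps in one signal period are Poisson(rate * t_signal)
            state = (state + rng.poisson(rate * t_signal)) % n_states
            phases.append(2 * np.pi * state / n_states)
            if rng.random() < p_reset:        # stochastic resetting stimulus
                state = reset_state
        phases = np.array(phases[resets // 2:])     # discard transient
        return np.abs(np.exp(1j * phases).mean())   # 1 = perfect entrainment

    # with resets the phase stays pinned; without them it diffuses freely
    print(entrainment_coherence(p_reset=0.3), entrainment_coherence(p_reset=0.0))
    ```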

  19. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  20. Application of tabu search to deterministic and stochastic optimization problems

    NASA Astrophysics Data System (ADS)

    Gurtuna, Ozgur

    During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made on the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is developed.

  1. Stochastic Models of Human Growth.

    ERIC Educational Resources Information Center

    Goodrich, Robert L.

    Stochastic difference equations of the Box-Jenkins form provide an adequate family of models on which to base the stochastic theory of human growth processes, but conventional time series identification methods do not apply to available data sets. A method to identify structure and parameters of stochastic difference equation models of human…

  2. Stochastic finite-difference time-domain

    NASA Astrophysics Data System (ADS)

    Smith, Steven Michael

    2011-12-01

    This dissertation presents the derivation of an approximate method to determine the mean and the variance of electromagnetic fields in the body using the Finite-Difference Time-Domain (FDTD) method. Unlike Monte Carlo analysis, which requires repeated FDTD simulations, this method directly computes the variance of the fields at every point in space at every sample of time in the simulation. This Stochastic FDTD simulation (S-FDTD) has at its root a new wave called the Variance wave, which is computed in the time domain along with the mean properties of the model space in the FDTD simulation. The Variance wave depends on the electromagnetic fields, the reflections and transmission through the different dielectrics, and the variances of the electrical properties of the surrounding materials. Like the electromagnetic fields, the Variance wave begins at zero (there is no variance before the source is turned on) and is computed in the time domain until all fields reach steady state. This process is performed in a fraction of the time of a Monte Carlo simulation and yields the first two statistical parameters (mean and variance). The mean of the field is computed using the traditional FDTD equations. Variance is computed by approximating the correlation coefficients between the constitutive properties and the use of the S-FDTD equations. The impetus for this work was the simulation time required to perform 3D Specific Absorption Rate (SAR) FDTD analysis of power absorption in a human head model due to a nearby cell phone in use. In many instances, Monte Carlo analysis is not performed due to the lengthy simulation times required. With the development of S-FDTD, these statistical analyses can be performed, providing valuable statistical information in a small fraction of the time it would take to perform a Monte Carlo analysis.

  3. Focus on stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Van den Broeck, Christian; Sasa, Shin-ichi; Seifert, Udo

    2016-02-01

    We introduce the thirty papers collected in this ‘focus on’ issue. The contributions explore conceptual issues within and around stochastic thermodynamics, use this framework for the theoretical modeling and experimental investigation of specific systems, and provide further perspectives on and for this active field.

  4. Elementary stochastic cooling

    SciTech Connect

    Tollestrup, A.V.; Dugan, G.

    1983-12-01

    Major headings in this review include: proton sources; antiproton production; antiproton sources and Liouville, the role of the Debuncher; transverse stochastic cooling, time domain; the accumulator; frequency domain; pickups and kickers; Fokker-Planck equation; calculation of constants in the Fokker-Planck equation; and beam feedback. (GHT)

  5. Monte Carlo variance reduction

    NASA Technical Reports Server (NTRS)

    Byrn, N. R.

    1980-01-01

    Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.
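
    Importance sampling is one standard variance-reduction technique of the kind such programs implement; whether this particular program uses it is not stated, so the following rare-event example is purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Estimate the rare tail probability P(X > 4) for X ~ N(0, 1).
    n = 100_000
    plain = (rng.standard_normal(n) > 4.0).mean()   # almost always 0 at this n

    # importance sampling: draw from N(4, 1) and reweight by the density ratio
    y = rng.standard_normal(n) + 4.0
    w = np.exp(-4.0 * y + 8.0)          # phi(y) / phi(y - 4) for unit variance
    is_est = (w * (y > 4.0)).mean()

    print(plain, is_est)   # the exact value is about 3.17e-5
    ```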

  6. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% of those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. PMID:22823593

  7. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.

    PubMed

    Neftci, Emre O; Pedroni, Bruno U; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert

    2016-01-01

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware. PMID:27445650

  8. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

    PubMed Central

    Neftci, Emre O.; Pedroni, Bruno U.; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert

    2016-01-01

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware. PMID:27445650

  9. Stochastic analysis of transport in tubes with rough walls

    SciTech Connect

    Tartakovsky, Daniel M. . E-mail: dmt@lanl.gov; Xiu Dongbin . E-mail: dxiu@math.purdue.edu

    2006-09-01

    Flow and transport in tubes with rough surfaces play an important role in a variety of applications. Often the topology of such surfaces cannot be accurately described in all of its relevant details due to either insufficient data or measurement errors or both. In such cases, this topological uncertainty can be efficiently handled by treating rough boundaries as random fields, so that an underlying physical phenomenon is described by deterministic or stochastic differential equations in random domains. To deal with this class of problems, we use a computational framework, which is based on stochastic mappings to transform the original deterministic/stochastic problem in a random domain into a stochastic problem in a deterministic domain. The latter problem has been studied more extensively and existing analytical/numerical techniques can be readily applied. In this paper, we employ both a generalized polynomial chaos and Monte Carlo simulations to solve the transformed stochastic problem. We use our approach to describe transport of a passive scalar in Stokes' flow and to quantify the corresponding predictive uncertainty.

  10. Adaptive stochastic cellular automata: Applications

    NASA Astrophysics Data System (ADS)

    Qian, S.; Lee, Y. C.; Jones, R. D.; Barnes, C. W.; Flake, G. W.; O'Rourke, M. K.; Lee, K.; Chen, H. H.; Sun, G. Z.; Zhang, Y. Q.; Chen, D.; Giles, C. L.

    1990-09-01

    The stochastic learning cellular automata model has been applied to the problem of controlling unstable systems. Two example unstable systems studied are controlled by an adaptive stochastic cellular automata algorithm with an adaptive critic. The reinforcement learning algorithm and the architecture of the stochastic CA controller are presented. Learning to balance a single pole is discussed in detail. Balancing an inverted double pendulum highlights the power of the stochastic CA approach. The stochastic CA model is compared to conventional adaptive control and artificial neural network approaches.

  11. Stochastic computing with biomolecular automata

    NASA Astrophysics Data System (ADS)

    Adar, Rivka; Benenson, Yaakov; Linshiz, Gregory; Rosner, Amit; Tishby, Naftali; Shapiro, Ehud

    2004-07-01

    Stochastic computing has a broad range of applications, yet electronic computers realize its basic step, stochastic choice between alternative computation paths, in a cumbersome way. Biomolecular computers use a different computational paradigm and hence afford novel designs. We constructed a stochastic molecular automaton in which stochastic choice is realized by means of competition between alternative biochemical pathways, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. Programmable and autonomous stochastic molecular automata have been shown to perform direct analysis of disease-related molecular indicators in vitro and may have the potential to provide in situ medical diagnosis and cure.

  12. SCALE Monte Carlo Eigenvalue Methods and New Advancements

    SciTech Connect

    Goluoglu, Sedat; Leppanen, Jaakko; Petrie Jr, Lester M; Dunn, Michael E

    2010-01-01

    SCALE code system is developed and maintained by Oak Ridge National Laboratory to perform criticality safety, reactor analysis, radiation shielding, and spent fuel characterization for nuclear facilities and transportation/storage package designs. SCALE is a modular code system that includes several codes which use either Monte Carlo or discrete ordinates solution methodologies for solving relevant neutral particle transport equations. This paper describes some of the key capabilities of the Monte Carlo criticality safety codes within the SCALE code system.

  13. Edgeworth expansions of stochastic trading time

    NASA Astrophysics Data System (ADS)

    Decamps, Marc; De Schepper, Ann

    2010-08-01

    Under most local and stochastic volatility models the underlying forward is assumed to be a positive function of a time-changed Brownian motion. This nicely relates the implied volatility smile to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.

  14. Stochastic Inversion of 2D Magnetotelluric Data

    SciTech Connect

    Chen, Jinsong

    2010-07-01

    The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we treat the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation; Software requirements: C and Fortran; Operating systems: Linux/Unix or Windows
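
    The MCMC exploration of the posterior can be sketched with a generic random-walk Metropolis sampler; the toy forward model, prior, and step size below are hypothetical and far simpler than the finite-element MT forward simulation the algorithm actually uses.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def metropolis(log_post, x0, step, n):
        """Random-walk Metropolis sampler for a log-posterior density."""
        x = np.asarray(x0, dtype=float)
        lp = log_post(x)
        chain = np.empty((n, x.size))
        for i in range(n):
            prop = x + step * rng.standard_normal(x.size)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                x, lp = prop, lp_prop
            chain[i] = x
        return chain

    # toy inverse problem: infer two parameters from noisy indirect data d
    true_theta = np.array([2.0, 1.5])
    forward = lambda th: np.array([th[0] + th[1], th[0] - 0.5 * th[1], th[1] ** 2])
    d = forward(true_theta) + 0.1 * rng.standard_normal(3)
    log_post = lambda th: -0.5 * np.sum((forward(th) - d) ** 2) / 0.1 ** 2 \
                          - 0.5 * np.sum(th ** 2) / 10.0 ** 2    # wide prior
    chain = metropolis(log_post, x0=[0.0, 0.0], step=0.05, n=20_000)
    print(chain[5000:].mean(axis=0), chain[5000:].std(axis=0))   # mean and sd
    ```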

  15. Stochastic Inversion of 2D Magnetotelluric Data

    Energy Science and Technology Software Center (ESTSC)

    2010-07-01

    The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we treat the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation; Software requirements: C and Fortran; Operating systems: Linux/Unix or Windows

  16. Rarefied gas dynamics using stochastic rotation dynamics

    NASA Astrophysics Data System (ADS)

    Tuzel, Erkan; Ihle, Thomas; Kroll, Daniel M.

    2003-03-01

    In the past two decades, Direct Simulation Monte Carlo (DSMC) has been the dominant predictive tool for rarefied gas dynamics. In the non-hydrodynamic regime, where continuum models fail, particle based methods have been used to model systems ranging from shuttle re-entry problems to mesoscopic flow in MEMS devices. A new method, namely stochastic rotation dynamics (SRD), which utilizes effective multiparticle collisions, will be described. It will be shown that it is possible to get the correct transport coefficients for Argon gas by tuning the collision parameters, namely the collision angle and collision probability. Simulation results comparing DSMC and SRD will be shown for equilibrium relaxation rates and Poiseuille flow. One important feature of SRD is that it coarse-grains the time scale, so that simulations in the transition regime are typically five to twenty times faster than for DSMC. Benchmarks as a function of Knudsen number will be given, and directions for further research will be discussed.
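
    The core SRD update, multiparticle collisions implemented as random rotations of velocities about each cell's mean, can be sketched in 2D as below; this omits the streaming step and the random grid shift of full SRD, and all parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def srd_collision(pos, vel, cell=1.0, alpha=np.pi / 2, box=10.0):
        """One SRD collision step in 2D: sort particles into cells, then
        rotate each particle's velocity relative to its cell's mean velocity
        by +/- alpha, with the sign chosen at random per cell."""
        nc = int(box / cell)
        idx = (pos // cell).astype(int) % nc
        keys = idx[:, 0] * nc + idx[:, 1]        # flat cell index per particle
        for k in np.unique(keys):
            m = keys == k
            u = vel[m].mean(axis=0)              # cell mean velocity
            a = alpha if rng.random() < 0.5 else -alpha
            c, s = np.cos(a), np.sin(a)
            rel = vel[m] - u
            vel[m] = u + rel @ np.array([[c, -s], [s, c]]).T
        return vel

    n = 1000
    pos = rng.uniform(0, 10.0, (n, 2))
    vel = rng.standard_normal((n, 2))
    vel = srd_collision(pos, vel)   # momentum and energy per cell are conserved
    ```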

  17. Stochastic-dynamic Modelling of Morphodynamics

    NASA Astrophysics Data System (ADS)

    Eppel, D. P.; Kapitza, H.

    The numerical prediction of coastal sediment motion over time spans of years and decades is hampered by the sediment's tendency, when stirred by waves and currents, to react not uniquely to the external forcing but rather to show some kind of internal dynamics whose characteristics are not directly linked to that forcing. Analytical stability analyses of the sediment-water system indicate that instabilities of tidally forced sediment layers in shallow seas can occur on spatial scales smaller than, and not related to, the scales of the tidal components. The finite growth of these unstable amplitudes can be described in terms of Ginzburg-Landau equations. Examples are the formation of ripples, sand waves and sand dunes, or the formation of shoreface-connected ridges. Among others, analyses of time series of coastal profiles from Duck, North Carolina, extending over several decades gave evidence for self-organized behaviour, suggesting that some important sediment-water systems can be perceived as dissipative dynamical structures. The consequences of such behaviour for predicting morphodynamics have been pointed out: one would expect that there exist time horizons beyond which predictions in the traditional deterministic sense are not possible. One would instead have to look for statistical quantities containing information of some relevance, such as phase-space densities of solutions, attractor sets and the like. This contribution is part of an effort to address the prediction problem of morphodynamics through process-oriented models containing stochastic parameterizations for bottom shear stresses, critical shear stresses, etc.; process-based models because they are directly related to the physical processes, but in a stochastic form because it is known that the physical processes contain strong stochastic components. The final outcome of such a program would be the generation of an ensemble of solutions by Monte Carlo integrations of the stochastic model.

  18. Stochastic Simulations and Sensitivity Analysis of Plasma Flow

    SciTech Connect

    Lin, Guang; Karniadakis, George E.

    2008-08-01

    For complex physical systems with a large number of random inputs, it would be very expensive to perform stochastic simulations for all of the random inputs. Stochastic sensitivity analysis is introduced in this paper to rank the significance of random inputs, providing information on which random inputs have more influence on the system outputs and on the coupling or interaction effects among different random inputs. There are two types of numerical methods in stochastic sensitivity analysis: local and global methods. The local approach, which relies on a partial derivative of output with respect to parameters, is used to measure the sensitivity around a local operating point. When the system has strong nonlinearities and parameters fluctuate within a wide range from their nominal values, the local sensitivity does not provide full information to the system operators. On the other hand, the global approach examines the sensitivity over the entire range of the parameter variations. The global screening methods, based on One-At-a-Time (OAT) perturbation of parameters, rank the significant parameters and identify their interactions among a large number of parameters. Several screening methods have been proposed in the literature, e.g., the Morris method, Cotter's method, factorial experimentation, and iterated fractional factorial design. In this paper, the Morris method, the Monte Carlo sampling method, the quasi-Monte Carlo method and a collocation method based on sparse grids are studied. Additionally, two MHD examples are presented to demonstrate the capability and efficiency of the stochastic sensitivity analysis, which can be used as a pre-screening technique for reducing the dimensionality and hence the cost of stochastic simulations.
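
    A compact sketch of Morris-style OAT screening, the global method discussed above; the trajectory count, step size, and toy model are arbitrary illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def morris_effects(f, d, r=50, delta=0.1):
        """Morris one-at-a-time screening: r random trajectories, each
        perturbing one input at a time; returns the mean absolute
        elementary effect per input as a sensitivity ranking."""
        effects = np.zeros((r, d))
        for t in range(r):
            x = rng.uniform(0, 1 - delta, d)
            fx = f(x)
            for i in rng.permutation(d):     # perturb inputs in random order
                x[i] += delta
                fx_new = f(x)
                effects[t, i] = (fx_new - fx) / delta
                fx = fx_new
        return np.abs(effects).mean(axis=0)

    # toy model: input 0 strong, input 1 weak, inputs 2 and 3 interacting
    f = lambda x: 10 * x[0] + 0.1 * x[1] + 5 * x[2] * x[3]
    print(morris_effects(f, d=4).round(2))   # ranks input 0 highest
    ```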

  19. Stochastic speculative price.

    PubMed

    Samuelson, P A

    1971-02-01

    Because a commodity like wheat can be carried forward from one period to the next, speculative arbitrage serves to link its prices at different points of time. Since, however, the size of the harvest depends on complicated probability processes impossible to forecast with certainty, the minimal model for understanding market behavior must involve stochastic processes. The present study, on the basis of the axiom that it is the expected rather than the known-for-certain prices which enter into all arbitrage relations and carryover decisions, determines the behavior of price as the solution to a stochastic-dynamic-programming problem. The resulting stationary time series possesses an ergodic state and normative properties like those often observed for real-world bourses. PMID:16591903

  20. Stochastic ice stream dynamics.

    PubMed

    Mantelli, Elisa; Bertagni, Matteo Bernard; Ridolfi, Luca

    2016-08-01

    Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution. PMID:27457960

  1. Stochastic ice stream dynamics

    NASA Astrophysics Data System (ADS)

    Mantelli, Elisa; Bertagni, Matteo Bernard; Ridolfi, Luca

    2016-08-01

    Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution.

  2. VAWT stochastic wind simulator

    SciTech Connect

    Strickland, J.H.

    1987-04-01

    A stochastic wind simulation for VAWTs (VSTOC) has been developed which yields turbulent wind-velocity fluctuations for rotationally sampled points. This allows three-component wind-velocity fluctuations to be simulated at specified nodal points on the wind-turbine rotor. A first-order convection scheme is used which accounts for the decrease in streamwise velocity as the flow passes through the wind-turbine rotor. The VSTOC simulation is independent of the particular analytical technique used to predict the aerodynamic and performance characteristics of the turbine. The VSTOC subroutine may be used simply as a subroutine in a particular VAWT prediction code or it may be used as a subroutine in an independent processor. The independent processor is used to interact with a version of the VAWT prediction code which is segmented into deterministic and stochastic modules. Using VSTOC in this fashion is very efficient with regard to decreasing computer time for the overall calculation process.

  3. STOCHASTIC COOLING FOR RHIC.

    SciTech Connect

    BLASKIEWICZ, M.; BRENNAN, J.M.; CAMERON, P.; WEI, J.

    2003-05-12

    Emittance growth due to Intra-Beam Scattering significantly reduces the heavy ion luminosity lifetime in RHIC. Stochastic cooling of the stored beam could improve things considerably by counteracting IBS and preventing particles from escaping the rf bucket [1]. High frequency bunched-beam stochastic cooling is especially challenging but observations of Schottky signals in the 4-8 GHz band indicate that conditions are favorable in RHIC [2]. We report here on measurements of the longitudinal beam transfer function carried out with a pickup kicker pair on loan from FNAL TEVATRON. Results imply that for ions a coasting beam description is applicable and we outline some general features of a viable momentum cooling system for RHIC.

  4. Simulating stochastic dynamics using large time steps.

    PubMed

    Corradini, O; Faccioli, P; Orland, H

    2009-12-01

    We present an approach to investigate the long-time stochastic dynamics of multidimensional classical systems in contact with a heat bath. When the potential energy landscape is rugged, the kinetics displays a decoupling of short- and long-time scales, and both molecular dynamics and Monte Carlo (MC) simulations are generally inefficient. Using a field theoretic approach, we perform analytically the average over the short-time stochastic fluctuations. This way, we obtain an effective theory, which generates the same long-time dynamics of the original theory, but has a lower time-resolution power. Such an approach is used to develop an improved version of the MC algorithm, which is particularly suitable to investigate the dynamics of rare conformational transitions. In the specific case of molecular systems at room temperature, we show that elementary integration time steps used to simulate the effective theory can be chosen a factor of approximately 100 larger than those used in the original theory. Our results are illustrated and tested on a simple system characterized by a rugged energy landscape. PMID:20365123

  5. Patchwork sampling of stochastic differential equations.

    PubMed

    Kürsten, Rüdiger; Behn, Ulrich

    2016-03-01

    We propose a method to sample stationary properties of solutions of stochastic differential equations, which is accurate and efficient if there are rarely visited regions or rare transitions between distinct regions of the state space. The method is based on a complete, nonoverlapping partition of the state space into patches on which the stochastic process is ergodic. On each of these patches we run simulations of the process strictly truncated to the corresponding patch, which allows effective simulations also in rarely visited regions. The correct weight for each patch is obtained by counting the attempted transitions between all different patches. The results are patchworked to cover the whole state space. We extend the concept of truncated Markov chains which is originally formulated for processes which obey detailed balance to processes not fulfilling detailed balance. The method is illustrated by three examples, describing the one-dimensional diffusion of an overdamped particle in a double-well potential, a system of many globally coupled overdamped particles in double-well potentials subject to additive Gaussian white noise, and the overdamped motion of a particle on the circle in a periodic potential subject to a deterministic drift and additive noise. In an appendix we explain how other well-known Markov chain Monte Carlo algorithms can be related to truncated Markov chains. PMID:27078484

  6. Phylogenetic Stochastic Mapping Without Matrix Exponentiation

    PubMed Central

    Irvahn, Jan; Minin, Vladimir N.

    2014-01-01

    Phylogenetic stochastic mapping is a method for reconstructing the history of trait changes on a phylogenetic tree relating species/organisms carrying the trait. State-of-the-art methods assume that the trait evolves according to a continuous-time Markov chain (CTMC) and work well for small state spaces. The computations slow down considerably for larger state spaces (e.g., the space of codons), because current methodology relies on exponentiating CTMC infinitesimal rate matrices—an operation whose computational complexity grows as the size of the CTMC state space cubed. In this work, we introduce a new approach, based on a CTMC technique called uniformization, which does not use matrix exponentiation for phylogenetic stochastic mapping. Our method is based on a new Markov chain Monte Carlo (MCMC) algorithm that targets the distribution of trait histories conditional on the trait data observed at the tips of the tree. The computational complexity of our MCMC method grows as the size of the CTMC state space squared. Moreover, in contrast to competing matrix exponentiation methods, if the rate matrix is sparse, we can leverage this sparsity and increase the computational efficiency of our algorithm further. Using simulated data, we illustrate advantages of our MCMC algorithm and investigate how large the state space needs to be for our method to outperform matrix exponentiation approaches. We show that even on the moderately large state space of codons our MCMC method can be significantly faster than currently used matrix exponentiation methods. PMID:24918812
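
    The uniformization trick replaces matrix exponentials with jumps of a discrete chain at Poisson event times. The sketch below samples an unconditional CTMC path this way; the paper's method additionally conditions on tip data within MCMC, which this toy does not attempt.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def uniformized_path(Q, state, t):
        """Sample a CTMC trajectory over [0, t] by uniformization: candidate
        jumps occur at Poisson(mu) times with transition matrix
        P = I + Q/mu (self-jumps allowed), avoiding matrix exponentials."""
        mu = -Q.diagonal().min() * 1.05          # dominating rate
        P = np.eye(Q.shape[0]) + Q / mu
        times = np.sort(rng.uniform(0, t, rng.poisson(mu * t)))
        path = [(0.0, state)]
        for tau in times:
            state = rng.choice(Q.shape[0], p=P[state])
            if state != path[-1][1]:             # record only real jumps
                path.append((tau, state))
        return path

    # two-state example with rates 1.0 and 0.5
    Q = np.array([[-1.0, 1.0],
                  [0.5, -0.5]])
    print(uniformized_path(Q, state=0, t=10.0))
    ```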

  7. Patchwork sampling of stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Kürsten, Rüdiger; Behn, Ulrich

    2016-03-01

    We propose a method to sample stationary properties of solutions of stochastic differential equations, which is accurate and efficient if there are rarely visited regions or rare transitions between distinct regions of the state space. The method is based on a complete, nonoverlapping partition of the state space into patches on which the stochastic process is ergodic. On each of these patches we run simulations of the process strictly truncated to the corresponding patch, which allows effective simulations also in rarely visited regions. The correct weight for each patch is obtained by counting the attempted transitions between all different patches. The results are patchworked to cover the whole state space. We extend the concept of truncated Markov chains which is originally formulated for processes which obey detailed balance to processes not fulfilling detailed balance. The method is illustrated by three examples, describing the one-dimensional diffusion of an overdamped particle in a double-well potential, a system of many globally coupled overdamped particles in double-well potentials subject to additive Gaussian white noise, and the overdamped motion of a particle on the circle in a periodic potential subject to a deterministic drift and additive noise. In an appendix we explain how other well-known Markov chain Monte Carlo algorithms can be related to truncated Markov chains.

  8. Stochastic Event-Driven Molecular Dynamics

    SciTech Connect

    Donev, Aleksandar; Garcia, Alejandro L.; Alder, Berni J.

    2008-02-01

    A novel Stochastic Event-Driven Molecular Dynamics (SEDMD) algorithm is developed for the simulation of polymer chains suspended in a solvent. SEDMD combines event-driven molecular dynamics (EDMD) with the Direct Simulation Monte Carlo (DSMC) method. The polymers are represented as chains of hard-spheres tethered by square wells and interact with the solvent particles with hard-core potentials. The algorithm uses EDMD for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in EDMD, rather, the momentum and energy exchange in the solvent is determined stochastically using DSMC. The coupling between the solvent and the solute is consistently represented at the particle level retaining hydrodynamic interactions and thermodynamic fluctuations. However, unlike full MD simulations of both the solvent and the solute, in SEDMD the spatial structure of the solvent is ignored. The SEDMD algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard-wall subjected to uniform shear. SEDMD closely reproduces results obtained using traditional EDMD simulations with two orders of magnitude greater efficiency. Results question the existence of periodic (cycling) motion of the polymer chain.

  9. Entropy of stochastic flows

    SciTech Connect

    Dorogovtsev, Andrei A

    2010-06-29

    For sets in a Hilbert space the concept of quadratic entropy is introduced. It is shown that this entropy is finite for the range of a stochastic flow of Brownian particles on R. This implies, in particular, the fact that the total time of the free travel in the Arratia flow of all particles that started from a bounded interval is finite. Bibliography: 10 titles.

  10. Kinetic Monte Carlo models for the study of chemical reactions in the Earth's upper atmosphere

    NASA Astrophysics Data System (ADS)

    Turchak, L. I.; Shematovich, V. I.

    2016-06-01

    A stochastic approach to study the non-equilibrium chemistry in the Earth's upper atmosphere is presented, which has been developed over a number of years. Kinetic Monte Carlo models based on this approach are an effective tool for investigating the role of suprathermal particles both in local variations of the atmospheric chemical composition and in the formation of the hot planetary corona.
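
    The workhorse of such kinetic Monte Carlo models is Gillespie-type event sampling; a generic direct-method sketch (with invented species and rate constants, not the atmospheric chemistry of the paper) is given below.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def gillespie(x, rates, stoich, t_end):
        """Gillespie's direct-method kinetic Monte Carlo for a reaction
        network. x: initial counts; rates(x) -> propensity per reaction;
        stoich: per-reaction state-change vectors."""
        t, traj = 0.0, [(0.0, x.copy())]
        while t < t_end:
            a = rates(x)
            a0 = a.sum()
            if a0 <= 0:
                break                            # no reaction can fire
            t += rng.exponential(1.0 / a0)       # time to next event
            j = rng.choice(len(a), p=a / a0)     # which reaction fires
            x = x + stoich[j]
            traj.append((t, x.copy()))
        return traj

    # hypothetical two-reaction network: A + B -> 2C, plus a feed of A
    stoich = np.array([[-1, -1, 2], [1, 0, 0]])
    rates = lambda x: np.array([1e-3 * x[0] * x[1], 0.5])
    traj = gillespie(np.array([100, 80, 0]), rates, stoich, t_end=50.0)
    print(traj[-1])
    ```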

  11. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (the maximum closure problem) is made on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach, and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than those of the classical or simulated pit. The main factor controlling the relative gain of the stochastic optimization compared to the classical approach and the simulated pit is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.

  12. Stochastic multiscale modeling of polycrystalline materials

    NASA Astrophysics Data System (ADS)

    Wen, Bin

    Mechanical properties of engineering materials are sensitive to the underlying random microstructure. Quantification of mechanical property variability induced by microstructure variation is essential for the prediction of extreme properties and microstructure-sensitive design of materials. Recent advances in high throughput characterization of polycrystalline microstructures have resulted in huge data sets of microstructural descriptors and image snapshots. To utilize these large scale experimental data for computing the resulting variability of macroscopic properties, appropriate mathematical representation of microstructures is needed. By exploring the space containing all admissible microstructures that are statistically similar to the available data, one can estimate the distribution/envelope of possible properties by employing efficient stochastic simulation methodologies along with robust physics-based deterministic simulators. The focus of this thesis is on the construction of low-dimensional representations of random microstructures and the development of efficient physics-based simulators for polycrystalline materials. By adopting appropriate stochastic methods, such as Monte Carlo and Adaptive Sparse Grid Collocation methods, the variability of microstructure-sensitive properties of polycrystalline materials is investigated. The primary outcomes of this thesis include: (1) Development of data-driven reduced-order representations of microstructure variations to construct the admissible space of random polycrystalline microstructures. (2) Development of accurate and efficient physics-based simulators for the estimation of material properties based on mesoscale microstructures. (3) Investigating property variability of polycrystalline materials using efficient stochastic simulation methods in combination with the above two developments. The uncertainty quantification framework developed in this work integrates information science and materials science, and

  13. Quantum Spontaneous Stochasticity

    NASA Astrophysics Data System (ADS)

    Drivas, Theodore; Eyink, Gregory

    Classical Newtonian dynamics is expected to be deterministic, but recent fluid turbulence theory predicts that a particle advected at high Reynolds numbers by "nearly rough" flows moves nondeterministically. Small stochastic perturbations to the flow velocity or to the initial data lead to persistent randomness, even in the limit where the perturbations vanish! Such "spontaneous stochasticity" has profound consequences for astrophysics, geophysics, and our daily lives. We show that a similar effect occurs with a quantum particle in a "nearly rough" force, in the semi-classical (large-mass) limit, where spreading of the wave-packet is usually expected to be negligible and dynamics to be deterministic Newtonian. Instead, there are non-zero probabilities to observe multiple, non-unique solutions of the classical equations. Although the quantum wave-function remains split, rapid phase oscillations prevent any coherent superposition of the branches. Classical spontaneous stochasticity has not yet been seen in controlled laboratory experiments of fluid turbulence, but the corresponding quantum effects may be observable by current techniques. We suggest possible experiments with neutral atomic-molecular systems in repulsive electric dipole potentials.

  14. Noncovalent Interactions by Quantum Monte Carlo.

    PubMed

    Dubecký, Matúš; Mitas, Lubos; Jurečka, Petr

    2016-05-11

    Quantum Monte Carlo (QMC) is a family of stochastic methods for solving quantum many-body problems such as the stationary Schrödinger equation. The review introduces basic notions of electronic structure QMC based on random walks in real space as well as its advances and adaptations to systems with noncovalent interactions. Specific issues such as fixed-node error cancellation, construction of trial wave functions, and efficiency considerations that allow for benchmark quality QMC energy differences are described in detail. Comprehensive overview of articles covers QMC applications to systems with noncovalent interactions over the last three decades. The current status of QMC with regard to efficiency, applicability, and usability by nonexperts together with further considerations about QMC developments, limitations, and unsolved challenges are discussed as well. PMID:27081724

  15. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures.

  16. Stochastic lag time in nucleated linear self-assembly

    NASA Astrophysics Data System (ADS)

    Tiwari, Nitin S.; van der Schoot, Paul

    2016-06-01

    Protein aggregation is of great importance in biology, e.g., in amyloid fibrillation. The aggregation processes that occur at the cellular scale must be highly stochastic in nature because of the statistical number fluctuations that arise on account of the small system size at the cellular scale. We study the nucleated reversible self-assembly of monomeric building blocks into polymer-like aggregates using the method of kinetic Monte Carlo. Kinetic Monte Carlo, being inherently stochastic, allows us to study the impact of fluctuations on the polymerization reactions. One of the most important characteristic features in this kind of problem is the existence of a lag phase before self-assembly takes off, which is what we focus attention on. We study the associated lag time as a function of system size and kinetic pathway. We find that the leading order stochastic contribution to the lag time before polymerization commences is inversely proportional to the system volume for large-enough system size for all nine reaction pathways tested. Finite-size corrections to this do depend on the kinetic pathway.

  17. Stochastic lag time in nucleated linear self-assembly.

    PubMed

    Tiwari, Nitin S; van der Schoot, Paul

    2016-06-21

    Protein aggregation is of great importance in biology, e.g., in amyloid fibrillation. The aggregation processes that occur at the cellular scale must be highly stochastic in nature because of the statistical number fluctuations that arise on account of the small system size at the cellular scale. We study the nucleated reversible self-assembly of monomeric building blocks into polymer-like aggregates using the method of kinetic Monte Carlo. Kinetic Monte Carlo, being inherently stochastic, allows us to study the impact of fluctuations on the polymerization reactions. One of the most important characteristic features in this kind of problem is the existence of a lag phase before self-assembly takes off, which is what we focus attention on. We study the associated lag time as a function of system size and kinetic pathway. We find that the leading order stochastic contribution to the lag time before polymerization commences is inversely proportional to the system volume for large-enough system size for all nine reaction pathways tested. Finite-size corrections to this do depend on the kinetic pathway. PMID:27334194

  18. Linear-scaling and parallelisable algorithms for stochastic quantum chemistry

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Smart, Simon D.; Alavi, Ali

    2014-07-01

    For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.

  19. A Stochastic Multi-Attribute Assessment of Energy Options for Fairbanks, Alaska

    NASA Astrophysics Data System (ADS)

    Read, L.; Madani, K.; Mokhtari, S.; Hanks, C. L.; Sheets, B.

    2012-12-01

    Many competing projects have been proposed to address Interior Alaska's high cost of energy—both for electricity production and for heating. Public and private stakeholders are considering the costs associated with these competing projects which vary in fuel source, subsidy requirements, proximity, and other factors. As a result, the current projects under consideration involve a complex cost structure of potential subsidies and reliance on present and future market prices, introducing a significant amount of uncertainty associated with each selection. Multi-criteria multi-decision making (MCMDM) problems of this nature can benefit from game theory and systems engineering methods, which account for behavior and preferences of stakeholders in the analysis to produce feasible and relevant solutions. This work uses a stochastic MCMDM framework to evaluate the trade-offs of each proposed project based on a complete cost analysis, environmental impact, and long-term sustainability. Uncertainty in the model is quantified via a Monte Carlo analysis, which helps characterize the sensitivity and risk associated with each project. Based on performance measures and criteria outlined by the stakeholders, a decision matrix will inform policy on selecting a project that is both efficient and preferred by the constituents.

  20. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
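
    In its simplest discrete form, retrodiction is Bayes' rule applied through the n-step transition matrix. The sketch below assumes a known discrete-time Markov chain and a prior over initial states; it mirrors the ancestral-state example in spirit only and is not the authors' algorithm.

```python
import numpy as np

def retrodict(T, prior, final_state, n_steps):
    """Posterior over initial states given an observed final state.

    T[i, j] = P(state j at t+1 | state i at t).
    Bayes' rule: P(x0 | xT) ∝ P(xT | x0) P(x0), with the likelihood
    read off the n-step transition matrix.
    """
    Tn = np.linalg.matrix_power(T, n_steps)
    likelihood = Tn[:, final_state]        # P(final | each initial state)
    post = likelihood * prior
    return post / post.sum()

# Two-state genetic-mutation-style toy: states {A, B}, sticky dynamics.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])
print(retrodict(T, prior, final_state=1, n_steps=5))
```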

  1. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared with those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  2. Multiple-time-stepping generalized hybrid Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared with those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
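
    For readers unfamiliar with the GHMC building blocks, the sketch below shows a plain (single-time-step, no shadow Hamiltonian, no MTS) generalized HMC loop with the partial momentum refreshment and the momentum flip on rejection; the Gaussian target and all parameters are illustrative.

```python
import numpy as np

def ghmc(log_prob_grad, x0, n_iter=5000, eps=0.1, n_leap=10, phi=0.5, seed=0):
    """Generalized hybrid Monte Carlo with partial momentum refreshment.

    log_prob_grad(x) must return (log pi(x), grad log pi(x)). The momentum
    is only partially refreshed each iteration (mixing angle phi), which is
    the 'weak stochastic stabilization' idea; phi = pi/2 recovers plain HMC.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    p = rng.normal(size=x.shape)
    samples = []
    for _ in range(n_iter):
        # Generalized (partial) momentum update.
        p = np.cos(phi) * p + np.sin(phi) * rng.normal(size=x.shape)
        lp, g = log_prob_grad(x)
        h_old = -lp + 0.5 * p @ p
        # Leapfrog integration of Hamiltonian dynamics.
        xn, pn = x.copy(), p + 0.5 * eps * g
        for _ in range(n_leap):
            xn = xn + eps * pn
            lpn, gn = log_prob_grad(xn)
            pn = pn + eps * gn
        pn = pn - 0.5 * eps * gn          # trim the surplus half step
        h_new = -lpn + 0.5 * pn @ pn
        # Metropolis test; flipping p on rejection preserves detailed
        # balance when momenta are only partially refreshed.
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x, p = xn, pn
        else:
            p = -p
        samples.append(x.copy())
    return np.array(samples)

# Example: sampling a 2D standard normal.
draws = ghmc(lambda x: (-0.5 * x @ x, -x), x0=np.zeros(2))
```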

  3. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    SciTech Connect

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of expected future costs as well as of the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, using importance path sampling for the upper-bound estimation. Initial numerical results are promising.
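
    The importance-sampling ingredient can be shown in miniature: draw from a proposal concentrated where the integrand matters and reweight by the likelihood ratio. The rare-event example below is generic and illustrative, unrelated to the cut estimation of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Estimate E[f(X)] with X ~ N(0,1) when f is dominated by a rare region.
f = lambda x: (x > 3.0).astype(float)      # rare-event indicator, P ~ 1.35e-3

# Crude Monte Carlo: almost every sample contributes nothing.
x = rng.normal(size=n)
crude = f(x).mean()

# Importance sampling: draw from a proposal q = N(3,1) centred on the
# important region, and reweight by the likelihood ratio p(y)/q(y)
# (the normalizing constants cancel).
y = rng.normal(loc=3.0, size=n)
w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - 3.0)**2)
is_est = (f(y) * w).mean()

print(crude, is_est)    # same expectation, far lower variance for is_est
```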

  4. Stochastic averaging of energy envelope of Preisach hysteretic systems

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ying, Z. G.; Zhu, W. Q.

    2009-04-01

    A new stochastic averaging technique for analyzing the response of a single-degree-of-freedom Preisach hysteretic system with nonlocal memory under stationary Gaussian stochastic excitation is proposed. An equivalent nonhysteretic nonlinear system with amplitude-envelope-dependent damping and stiffness is first obtained from the given system by using the generalized harmonic balance technique. The relationship between the amplitude envelope and the energy envelope is then established, and the equivalent damping and stiffness coefficients are expressed as functions of the energy envelope. The available range of the yielding force of the system is extended, and the strong nonlinear stiffness of the system is incorporated, so as to improve the response prediction. Finally, an averaged Itô stochastic differential equation for the energy envelope of the system as a one-dimensional diffusion process is derived by using the stochastic averaging method of energy envelope, and the Fokker-Planck-Kolmogorov equation associated with the averaged Itô equation is solved to obtain stationary probability densities of the energy envelope and amplitude envelope. The approximate solutions are validated by Monte Carlo simulation.

  5. Hybrid stochastic simulations of intracellular reaction-diffusion systems

    PubMed Central

    Kalantzis, Georgios

    2009-01-01

    With the observation that stochasticity is important in biological systems, chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous-time models are computationally efficient but fail to capture any variability in the molecular species. In this study a novel hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed a dynamic partitioning strategy using fractional propensities. In this way, processes with high frequency are simulated mostly with deterministic rate-based equations, and those with low frequency mostly with the exact stochastic algorithm of Gillespie. We thus preserve the stochastic behavior of cellular pathways while being able to handle large populations of molecules. In this article we describe this hybrid algorithmic approach, and we demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors. PMID:19414282
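
    A minimal sketch of the fast/slow partitioning idea follows, assuming a fixed propensity threshold; the published method uses fractional propensities and the exact Gillespie algorithm for the slow set, which the frozen-propensity Poisson firing below only approximates.

```python
import numpy as np

def hybrid_step(x, stoich, propensities, dt, threshold, rng):
    """One hybrid step: 'fast' channels advance deterministically,
    'slow' channels fire stochastically over the interval dt.

    x            : species counts (float array)
    stoich       : (n_reactions, n_species) stoichiometry matrix
    propensities : function x -> array of reaction rates
    threshold    : rates above this are treated as fast
    """
    a = propensities(x)
    fast = a > threshold
    x = x + dt * (a * fast) @ stoich                  # mean-field Euler update
    x = x + rng.poisson((a * ~fast) * dt) @ stoich    # stochastic slow firings
    return np.maximum(x, 0.0)

# Example: fast isomerization A <-> B with slow degradation of B.
stoich = np.array([[-1.0, 1.0], [1.0, -1.0], [0.0, -1.0]])
props = lambda x: np.array([50.0 * x[0], 40.0 * x[1], 0.01 * x[1]])
x, rng = np.array([1000.0, 0.0]), np.random.default_rng(0)
for _ in range(1000):
    x = hybrid_step(x, stoich, props, dt=0.001, threshold=100.0, rng=rng)
```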

  6. Multi-scenario modelling of uncertainty in stochastic chemical systems

    SciTech Connect

    Evans, R. David; Ricardez-Sandoval, Luis A.

    2014-09-15

    Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small-scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems, as they are stochastic in nature and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state composed of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two-gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution and the system under investigation.
    Highlights:
    • A method to model uncertainty in stochastic systems was developed.
    • The method is based on the Chemical Master Equation.
    • Uncertainty in an isomerization reaction and a gene regulation network was modelled.
    • Effects were significant and dependent on the uncertain input and reaction system.
    • The model was computationally more efficient than kinetic Monte Carlo.

  7. Stochastic optimization of multireservoir systems via reinforcement learning

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Hee; Labadie, John W.

    2007-11-01

    Although several variants of stochastic dynamic programming have been applied to optimal operation of multireservoir systems, they have been plagued by a high-dimensional state space and the inability to accurately incorporate the stochastic environment as characterized by temporally and spatially correlated hydrologic inflows. Reinforcement learning has emerged as an effective approach to solving sequential decision problems by combining concepts from artificial intelligence, cognitive science, and operations research. A reinforcement learning system has a mathematical foundation similar to dynamic programming and Markov decision processes, with the goal of maximizing the long-term reward or returns as conditioned on the state of the system environment and the immediate reward obtained from operational decisions. Reinforcement learning can include Monte Carlo simulation where transition probabilities and rewards are not explicitly known a priori. The Q-Learning method in reinforcement learning is demonstrated on the two-reservoir Geum River system, South Korea, and is shown to outperform implicit stochastic dynamic programming and sampling stochastic dynamic programming methods.
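
    The core of such an approach is the model-free temporal-difference update of tabular Q-learning, sketched below on a toy chain environment; the reservoir application would discretize storages and releases, and everything here is illustrative.

```python
import numpy as np

def q_learning(env_step, n_states, n_actions, episodes=2000,
               alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning; env_step(s, a) -> (next_state, reward, done).

    Temporal-difference update:
      Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Transition probabilities are never needed explicitly; they are
    sampled, which is the Monte Carlo aspect noted in the abstract.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = int(rng.integers(n_actions))         # explore
            else:
                best = np.flatnonzero(Q[s] == Q[s].max())
                a = int(rng.choice(best))                # greedy, random ties
            s2, r, done = env_step(s, a)
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

# Toy chain: 5 states, actions {left, right}, reward on reaching state 4.
def step(s, a):
    s2 = max(0, min(4, s + (1 if a == 1 else -1)))
    return s2, float(s2 == 4), s2 == 4

Q = q_learning(step, n_states=5, n_actions=2)
```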

  8. Stochastic transitions in a bistable reaction system on the membrane

    PubMed Central

    Kochańczyk, Marek; Jaruszewicz, Joanna; Lipniacki, Tomasz

    2013-01-01

    Transitions between steady states of a multi-stable stochastic system in the perfectly mixed chemical reactor are possible only because of stochastic switching. In realistic cellular conditions, where diffusion is limited, transitions between steady states can also follow from the propagation of travelling waves. Here, we study the interplay between the two modes of transition for a prototype bistable system of kinase–phosphatase interactions on the plasma membrane. Within microscopic kinetic Monte Carlo simulations on the hexagonal lattice, we observed that for finite diffusion the behaviour of the spatially extended system differs qualitatively from the behaviour of the same system in the well-mixed regime. Even when a small isolated subcompartment remains mostly inactive, the chemical travelling wave may propagate, leading to the activation of a larger compartment. The activating wave can be induced after a small subdomain is activated as a result of a stochastic fluctuation. Such a spontaneous onset of activity is radically more probable in subdomains characterized by slower diffusion. Our results show that a local immobilization of substrates can lead to the global activation of membrane proteins by the mechanism that involves stochastic fluctuations followed by the propagation of a semi-deterministic travelling wave. PMID:23635492

  9. Stochastic calculus in physics

    SciTech Connect

    Fox, R.F.

    1987-03-01

    The relationship of Ito-Stratonovich stochastic calculus to studies of weakly colored noise is explained. A functional calculus approach is used to obtain an effective Fokker-Planck equation for the weakly colored noise regime. In a smooth limit, this representation produces the Stratonovich version of the Ito-Stratonovich calculus for white noise. It also provides an approach to steady state behavior for strongly colored noise. Numerical simulation algorithms are explored, and a novel suggestion is made for efficient and accurate simulation of white noise equations.
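
    For the white-noise case mentioned at the end, the workhorse scheme is Euler-Maruyama; a minimal sketch for an Itô SDE follows, using an additive-noise Ornstein-Uhlenbeck example for which the Itô and Stratonovich calculi coincide.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, T, n_steps, seed=0):
    """Simulate dX = drift(X) dt + diffusion(X) dW (Ito interpretation).

    The Ito/Stratonovich distinction matters only for state-dependent
    (multiplicative) noise; for the additive-noise example below the
    two calculi coincide.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt))   # Brownian increment
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dW
    return x

# Ornstein-Uhlenbeck process: linear drift, additive white noise.
path = euler_maruyama(lambda x: -x, lambda x: 0.5, x0=1.0, T=10.0, n_steps=10_000)
```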

  10. Stochastic ontogenetic growth model

    NASA Astrophysics Data System (ADS)

    West, B. J.; West, D.

    2012-02-01

    An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second law of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass over the steady-state probability density for the total body mass (TBM). This is the first derivation of the interspecies metabolic allometric relation from a dynamical model and the asymptotic steady-state distribution of the TBM is fit to data and shown to be inverse power law.

  11. Stochastic thermodynamics of resetting

    NASA Astrophysics Data System (ADS)

    Fuchs, Jaco; Goldt, Sebastian; Seifert, Udo

    2016-03-01

    Stochastic dynamics with random resetting leads to a non-equilibrium steady state. Here, we consider the thermodynamics of resetting by deriving the first and second law for resetting processes far from equilibrium. We identify the contributions to the entropy production of the system which arise due to resetting and show that they correspond to the rate with which information is either erased or created. Using Landauer's principle, we derive a bound on the amount of work that is required to maintain a resetting process. We discuss different regimes of resetting, including a Maxwell demon scenario where heat is extracted from a bath at constant temperature.

  12. Chemical application of diffusion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1983-10-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schrödinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.

  13. A Monte Carlo multimodal inversion of surface waves

    NASA Astrophysics Data System (ADS)

    Maraschini, Margherita; Foti, Sebastiano

    2010-09-01

    The analysis of surface wave propagation is often used to estimate the S-wave velocity profile at a site. In this paper, we propose a stochastic approach for the inversion of surface waves, which allows apparent dispersion curves to be inverted. The inversion method is based on the integrated use of two misfit functions: one based on the determinant of the Haskell-Thomson matrix, and a classical Euclidean distance between the dispersion curves. The former allows all the modes of the dispersion curve to be taken into account with a very limited computational cost, because it avoids the explicit calculation of the dispersion curve for each tentative model. It is used in a Monte Carlo inversion with a large population of profiles. In a subsequent step, the selection of representative models is obtained by applying a Fisher test, based on the Euclidean distance between the experimental and synthetic dispersion curves, to the best models of the Monte Carlo inversion. This procedure allows the set of selected models to be identified on the basis of the data quality. It also mitigates the influence of local minima that can affect the Monte Carlo results. The effectiveness of the procedure is shown for synthetic and real experimental data sets, where the advantages of the two-stage procedure are highlighted. In particular, the determinant misfit allows the computation of large populations in stochastic algorithms at a limited computational cost.

  14. Path sampling with stochastic dynamics: Some new algorithms

    SciTech Connect

    Stoltz, Gabriel . E-mail: stoltz@cermics.enpc.fr

    2007-07-01

    We propose some new sampling algorithms for path sampling when stochastic dynamics are used. In particular, we present a new proposal function for equilibrium sampling of paths with Monte Carlo dynamics (the so-called 'Brownian tube' proposal). This proposal is based on the continuity of the dynamics with respect to the random forcing, and generalizes all previous approaches when stochastic dynamics are used. The efficiency of this proposal is demonstrated using a measure of decorrelation in path space. We also discuss a switching strategy that allows an ensemble of paths to be transformed at a finite rate while remaining at equilibrium, in contrast with the usual Jarzynski-like switching. This switching is very useful for sampling constrained paths starting from unconstrained paths, or for performing simulated annealing in a rigorous way.

  15. Semiparametric Stochastic Modeling of the Rate Function in Longitudinal Studies

    PubMed Central

    Zhu, Bin; Taylor, Jeremy M.G.; Song, Peter X.-K.

    2011-01-01

    In longitudinal biomedical studies, there is often interest in the rate functions, which describe the functional rates of change of biomarker profiles. This paper proposes a semiparametric approach to model these functions as the realizations of stochastic processes defined by stochastic differential equations. These processes are dependent on the covariates of interest and vary around a specified parametric function. An efficient Markov chain Monte Carlo algorithm is developed for inference. The proposed method is compared with several existing methods in terms of goodness-of-fit and more importantly the ability to forecast future functional data in a simulation study. The proposed methodology is applied to prostate-specific antigen profiles for illustration. Supplementary materials for this paper are available online. PMID:22423170

  16. Stochastic Optimal Scheduling of Residential Appliances with Renewable Energy Sources

    SciTech Connect

    Wu, Hongyu; Pratt, Annabelle; Chakraborty, Sudipta

    2015-07-03

    This paper proposes a stochastic, multi-objective optimization model within a Model Predictive Control (MPC) framework to determine the optimal operational schedules of residential appliances operating in the presence of a renewable energy source (RES). The objective function minimizes the weighted sum of discomfort, energy cost, total and peak electricity consumption, and carbon footprint. A heuristic method is developed for combining the different objective components. The proposed stochastic model utilizes Monte Carlo simulation (MCS) to represent uncertainties in electricity price, outdoor temperature, RES generation, water usage, and non-controllable loads. The model is solved using a mixed integer linear programming (MILP) solver, and numerical results show its validity. Case studies show the benefit of using the proposed optimization model.

  17. Stochastic Effects in the Bistable Homogeneous Semenov Model

    NASA Astrophysics Data System (ADS)

    Nowakowski, B.; Lemarchand, A.; Nowakowska, E.

    2002-04-01

    We present the mesoscopic description of stochastic effects in a thermochemical bistable diluted gas system subject to the Newtonian heat exchange with a thermostat. We apply the master equation including a transition rate for the Newtonian thermal transfer process, derived on the basis of kinetic theory. As temperature is a continuous variable, this master equation has a complicated integro-differential form. We perform Monte Carlo simulations based on this equation to study the stochastic effects in a homogeneous Semenov model (which neglects reactant consumption) in the bistable regime. The mean first passage time is computed as a function of the number of particles in the system and the distance from the bifurcation associated with the emergence of bistability. An approximate analytical prediction is deduced from the Fokker-Planck equation associated with the master equation. The results of the master equation approach are successfully compared with those of direct simulations of the microscopic particle dynamics.

  18. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
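
    As a generic illustration of the kind of variance reduction surveyed here, the antithetic-variates sketch below reduces the estimator variance on a simple monotone integrand; it is not tied to corrector problems or to the techniques of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Target: E[g(U)] with U ~ Uniform(0,1); g is monotone, so the pair
# (g(U), g(1-U)) is negatively correlated and their average has lower
# variance than two independent draws.
g = lambda u: np.exp(u)                    # exact mean: e - 1 ≈ 1.71828

# Plain Monte Carlo with 2n samples.
plain = g(rng.random(2 * n)).mean()

# Antithetic variates: n pairs (U, 1-U), same total cost.
u = rng.random(n)
anti = 0.5 * (g(u) + g(1.0 - u))           # one pair = one sample
print(plain, anti.mean())                  # both estimate e - 1
```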

  19. Stochastic analysis of complex reaction networks using binomial moment equations.

    PubMed

    Barzel, Baruch; Biham, Ofer

    2012-09-01

    The stochastic analysis of complex reaction networks is a difficult problem because the number of microscopic states in such systems increases exponentially with the number of reactive species. Direct integration of the master equation is thus infeasible and is most often replaced by Monte Carlo simulations. While Monte Carlo simulations are a highly effective tool, equation-based formulations are more amenable to analytical treatment and may provide deeper insight into the dynamics of the network. Here, we present a highly efficient equation-based method for the analysis of stochastic reaction networks. The method is based on the recently introduced binomial moment equations [Barzel and Biham, Phys. Rev. Lett. 106, 150602 (2011)]. The binomial moments are linear combinations of the ordinary moments of the probability distribution function of the population sizes of the interacting species. They capture the essential combinatorics of the reaction processes reflecting their stoichiometric structure. This leads to a simple and transparent form of the equations, and allows a highly efficient and surprisingly simple truncation scheme. Unlike ordinary moment equations, in which the inclusion of high order moments is prohibitively complicated, the binomial moment equations can be easily constructed up to any desired order. The result is a set of equations that enables the stochastic analysis of complex reaction networks under a broad range of conditions. The number of equations is dramatically reduced from the exponential proliferation of the master equation to a polynomial (and often quadratic) dependence on the number of reactive species in the binomial moment equations. The aim of this paper is twofold: to present a complete derivation of the binomial moment equations; to demonstrate the applicability of the moment equations for a representative set of example networks, in which stochastic effects play an important role. PMID:23030885

  20. Stochastic Quantum Gas Dynamics

    NASA Astrophysics Data System (ADS)

    Proukakis, Nick P.; Cockburn, Stuart P.

    2010-03-01

    We study the dynamics of weakly-interacting finite temperature Bose gases via the Stochastic Gross-Pitaevskii equation (SGPE). As a first step, we demonstrate [jointly with A. Negretti (Ulm, Germany) and C. Henkel (Potsdam, Germany)] that the SGPE provides a significantly better method for generating an equilibrium state than the number-conserving Bogoliubov method (except for low temperatures and small atom numbers). We then study [jointly with H. Nistazakis and D.J. Frantzeskakis (University of Athens, Greece), P.G. Kevrekidis (University of Massachusetts) and T.P. Horikis (University of Ioannina, Greece)] the dynamics of dark solitons in elongated finite temperature condensates. We demonstrate numerical shot-to-shot variations in soliton trajectories (S.P. Cockburn et al., arXiv:0909.1660), finding individual long-lived trajectories as in experiments. In our simulations, these variations arise from fluctuations in the phase and density of the underlying medium. We provide a detailed statistical analysis, proposing regimes for the controlled experimental demonstration of this effect; we also discuss the extent to which simpler models can be used to mimic the features of ensemble-averaged stochastic trajectories.

  1. Stochastic power flow modeling

    SciTech Connect

    Not Available

    1980-06-01

    The stochastic nature of customer demand and equipment failure on large interconnected electric power networks has produced a keen interest in the accurate modeling and analysis of the effects of probabilistic behavior on steady-state power system operation. The principal avenue of approach has been to obtain a solution to the steady-state network flow equations that adheres both to Kirchhoff's laws and to probabilistic laws, using either combinatorial or functional approximation techniques. The present need is clearly to develop sound techniques for producing meaningful data to serve as input. This research has addressed that end and serves to bridge the gap between electric demand modeling, equipment failure analysis, etc., and the area of algorithm development. The scope of this work therefore lies squarely in developing an efficient means of producing sensible input information, in the form of probability distributions, for the many types of solution algorithms that have been developed. Two major areas of development are described in detail: a decomposition of stochastic processes which gives hope of stationarity, ergodicity, and perhaps even normality; and a powerful surrogate probability approach using proportions of time, which allows the calculation of joint events from one-dimensional probability spaces.

  2. Stochastic blind motion deblurring.

    PubMed

    Xiao, Lei; Gregson, James; Heide, Felix; Heidrich, Wolfgang

    2015-10-01

    Blind motion deblurring from a single image is a highly under-constrained problem with many degenerate solutions. A good approximation of the intrinsic image can, therefore, only be obtained with the help of prior information in the form of (often nonconvex) regularization terms for both the intrinsic image and the kernel. While the best choice of image priors is still a topic of ongoing investigation, this research is made more complicated by the fact that historically each new prior requires the development of a custom optimization method. In this paper, we develop a stochastic optimization method for blind deconvolution. Since this stochastic solver does not require the explicit computation of the gradient of the objective function and uses only efficient local evaluation of the objective, new priors can be implemented and tested very quickly. We demonstrate that this framework, in combination with different image priors, produces results with Peak Signal-to-Noise Ratio (PSNR) values that match or exceed the results obtained by much more complex state-of-the-art blind motion deblurring algorithms. PMID:25974941

  3. Stochastic averaging of quasi-partially integrable Hamiltonian systems under combined Gaussian and Poisson white noise excitations

    NASA Astrophysics Data System (ADS)

    Jia, Wantao; Zhu, Weiqiu

    2014-03-01

    A stochastic averaging method for predicting the response of quasi-partially integrable and non-resonant Hamiltonian systems to combined Gaussian and Poisson white noise excitations is proposed. For the case with r (1 ≤ r < n) independent first integrals, an averaged generalized Fokker-Planck-Kolmogorov (GFPK) equation is derived from the stochastic integro-differential equations (SIDEs) of the original quasi-partially integrable and non-resonant Hamiltonian systems by using the stochastic jump-diffusion chain rule and the stochastic averaging theorem. An example is given to illustrate the application of the proposed stochastic averaging method, and a combination of the finite difference method and the successive over-relaxation method is used to solve the reduced GFPK equation to obtain the stationary probability density of the system. The results are well verified by Monte Carlo simulation.

  4. The influence of Stochastic perturbation of Geotechnical media On Electromagnetic tomography

    NASA Astrophysics Data System (ADS)

    Song, Lei; Yang, Weihao; Huangsonglei, Jiahui; Li, HaiPeng

    2015-04-01

    Electromagnetic tomography (CT) is commonly utilized in civil engineering to detect structural defects or geological anomalies. CT is generally recognized as a high-precision geophysical method, with an expected accuracy of several centimeters or even several millimeters; high-frequency antennas with short wavelengths are therefore commonly utilized in civil engineering. In geotechnical media, stochastic perturbations of the EM parameters inevitably exist at geological, structural, and local scales. In such cases, the geometric dimensions of the target body, the EM wavelength, and the expected accuracy may all be of the same order. When a high-frequency EM wave propagates in a stochastic geotechnical medium, the GPR signal is reflected not only from the target bodies but also from the stochastic perturbations of the background medium. To detect karst caves in dissolution-fractured rock, one needs to assess the influence of the stochastically distributed dissolution holes and fractures; to detect a void in a concrete structure, one should understand the influence of the stochastically distributed stones; and so on. In this paper, on the basis of discrete realizations of stochastic media, the authors quantitatively evaluate the influence of stochastic perturbations of geotechnical media on Radon/inverse Radon transforms through fully combined Monte Carlo numerical simulation. It is found that the stochastic noise is related to the transform angle, perturbation strength, angle interval, autocorrelation length, etc. A quantitative formula for the accuracy of electromagnetic tomography is also established, which can help in estimating the precision of GPR tomography in stochastically perturbed geotechnical media. Key words: Stochastic Geotechnical Media; Electromagnetic Tomography; Radon/Inverse Radon Transform.

  5. A simple chaotic neuron model: stochastic behavior of neural networks.

    PubMed

    Aydiner, Ekrem; Vural, Adil M; Ozcelik, Bekir; Kiymac, Kerim; Tan, Uner

    2003-05-01

    We have briefly reviewed the occurrence of the post-synaptic potentials between neurons, the relationship between EEG and neuron dynamics, as well as methods of signal analysis. We propose a simple stochastic model representing the electrical activity of neuronal systems. The model is constructed using the Monte Carlo simulation technique. The results yielded EEG-like signals with their phase portraits in three-dimensional space. The Lyapunov exponent was positive, indicating chaotic behavior. The correlation of the EEG-like signals was 0.92, smaller than those reported by others. It was concluded that this neuron model may provide valuable clues about the dynamic behavior of neural systems. PMID:12745622

  6. Stochastic series expansion simulation of the t-V model

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Ye-Hua; Troyer, Matthias

    2016-04-01

    We present an algorithm for the efficient simulation of the half-filled spinless t-V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t-V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.

  7. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    DOE PAGESBeta

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  8. On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

    SciTech Connect

    Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan

    2015-05-19

    The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

  9. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
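
    The reformulation can be mimicked by giving each reaction channel its own random stream (a modified next-reaction construction), after which a first-order, Sobol-style variance share is just a conditional-variance estimate. A birth-death sketch with illustrative rates (not the paper's code or models):

```python
import itertools
import numpy as np

def birth_death(seed_b, seed_d, T=5.0, x0=10, kb=2.0, kd=0.1):
    """Birth-death process (X -> X+1 at rate kb, X -> X-1 at rate kd*X),
    simulated so that each channel consumes its own random stream
    (modified next-reaction method). This makes 'the randomness of
    channel k' a separately controllable input. Returns X(T).
    """
    rngs = [np.random.default_rng(seed_b), np.random.default_rng(seed_d)]
    nxt = [rngs[0].exponential(), rngs[1].exponential()]  # next firing times
    clk = [0.0, 0.0]                                      # internal clocks
    t, x = 0.0, float(x0)
    while True:
        rates = [kb, kd * x]
        waits = [(nxt[k] - clk[k]) / rates[k] if rates[k] > 0 else np.inf
                 for k in range(2)]
        k = int(np.argmin(waits))
        if t + waits[k] > T:
            return x
        t += waits[k]
        for j in range(2):
            clk[j] += rates[j] * waits[k]     # advance internal clocks
        x += 1.0 if k == 0 else -1.0          # fire channel k
        nxt[k] += rngs[k].exponential()       # schedule its next firing

# First-order variance share of the birth channel:
# Var over birth streams of E over death streams of X(T).
seeds = range(30)
cond = [np.mean([birth_death(b, 1000 + d) for d in seeds]) for b in seeds]
full = [birth_death(b, 1000 + d) for b, d in itertools.product(seeds, seeds)]
print(np.var(cond) / np.var(full))
```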

  10. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  11. ISDEP: Integrator of stochastic differential equations for plasmas

    NASA Astrophysics Data System (ADS)

    Velasco, J. L.; Bustos, A.; Castejón, F.; Fernández, L. A.; Martin-Mayor, V.; Tarancón, A.

    2012-09-01

    In this paper we present a general description of the ISDEP code (Integrator of Stochastic Differential Equations for Plasmas) and a brief overview of its physical results and applications so far. ISDEP is a Monte Carlo code that calculates the distribution function of a minority population of ions in a magnetized plasma. It solves the ion equations of motion taking into account the complex 3D structure of fusion devices, the confining electromagnetic field, and collisions with other plasma species. The Monte Carlo method used is based on the equivalence between the Fokker-Planck and Langevin equations. This allows ISDEP to run on distributed computing platforms without communication between nodes, with almost linear scaling. This paper is intended as a general description of and reference for ISDEP.

  12. Parallelized Stochastic Cutoff Method for Long-Range Interacting Systems

    NASA Astrophysics Data System (ADS)

    Endo, Eishin; Toga, Yuta; Sasaki, Munetaka

    2015-07-01

    We present a method of parallelizing the stochastic cutoff (SCO) method, which is a Monte-Carlo method for long-range interacting systems. After interactions are eliminated by the SCO method, we subdivide a lattice into noninteracting interpenetrating sublattices. This subdivision enables us to parallelize the Monte-Carlo calculation in the SCO method. Such subdivision is found by numerically solving the vertex coloring of a graph created by the SCO method. We use an algorithm proposed by Kuhn and Wattenhofer to solve the vertex coloring by parallel computation. This method was applied to a two-dimensional magnetic dipolar system on an L × L square lattice to examine its parallelization efficiency. The result showed that, in the case of L = 2304, the speed of computation increased by a factor of about 10² through parallel computation with 288 processors.
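
    The subdivision step amounts to vertex coloring of the post-cutoff interaction graph; the paper uses the distributed Kuhn-Wattenhofer algorithm, whereas the sequential greedy sketch below only shows what the coloring buys (each color class is a noninteracting sublattice that can be updated in parallel).

```python
def greedy_coloring(adjacency):
    """Color vertices so no two interacting (adjacent) spins share a color.

    adjacency: dict vertex -> iterable of neighbor vertices.
    Returns dict vertex -> color index.
    """
    color = {}
    for v in sorted(adjacency):            # fixed order for reproducibility
        used = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in used:                   # smallest color unused by neighbors
            c += 1
        color[v] = c
    return color

# Toy interaction graph left after the stochastic cutoff has pruned bonds.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
colors = greedy_coloring(graph)
sublattices = {}
for v, c in colors.items():
    sublattices.setdefault(c, []).append(v)   # one parallel sweep per class
print(sublattices)
```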

  13. Automated variance reduction for Monte Carlo shielding analyses with MCNP

    NASA Astrophysics Data System (ADS)

    Radulescu, Georgeta

    Variance reduction techniques are employed in Monte Carlo analyses to increase the number of particles in the phase space of interest and thereby lower the variance of statistical estimation. Variance reduction parameters are required to perform Monte Carlo calculations. It is well known that adjoint solutions, even approximate ones, are excellent biasing functions that can significantly increase the efficiency of a Monte Carlo calculation. In this study, an automated method of generating Monte Carlo variance reduction parameters, and of implementing the source energy biasing and the weight window technique in MCNP shielding calculations, has been developed. The method is based on the approach used in the SAS4 module of the SCALE code system, which derives the biasing parameters from an adjoint one-dimensional Discrete Ordinates calculation. Unlike SAS4, which determines the radial and axial dose rates of a spent fuel cask in separate calculations, the present method provides energy and spatial biasing parameters for the entire system that optimize the simulation of particle transport towards all external surfaces of a spent fuel cask. The energy and spatial biasing parameters are synthesized from the adjoint fluxes of three one-dimensional Discrete Ordinates adjoint calculations. Additionally, the present method accommodates multiple source regions, such as the photon sources in light-water reactor spent nuclear fuel assemblies, in one calculation. With this automated method, detailed and accurate dose rate maps for photons, neutrons, and secondary photons outside spent fuel casks or other containers can be efficiently determined with minimal effort.
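
    The relation behind such automation is the CADIS-style prescription: bias the source by the adjoint flux and set weight-window centers inversely proportional to it, so that source and transport biasing stay consistent. A sketch with a made-up one-dimensional adjoint array (not MCNP or SAS4 code):

```python
import numpy as np

def biasing_from_adjoint(adjoint_flux, source, target_weight=1.0):
    """CADIS-style consistent biasing from an adjoint flux.

    adjoint_flux : importance of each (space or energy) source bin
    source       : unbiased source strengths per bin
    Returns (biased source pdf, weight-window centers). A particle born
    from the biased source with weight q/q_hat lands inside its window.
    """
    phi = np.asarray(adjoint_flux, dtype=float)
    q = np.asarray(source, dtype=float)
    response = (q * phi).sum() / q.sum()   # estimated response per source particle
    q_hat = q * phi / (q * phi).sum()      # biased source pdf (sums to 1)
    w_center = target_weight * response / phi
    return q_hat, w_center

q_hat, w = biasing_from_adjoint(adjoint_flux=[0.01, 0.1, 1.0],
                                source=[0.7, 0.2, 0.1])
print(q_hat, w)   # important bins sampled more often, at lower weight
```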

  14. Monte Carlo Event Generators

    NASA Astrophysics Data System (ADS)

    Dytman, Steven

    2011-10-01

    Every neutrino experiment requires a Monte Carlo event generator for various purposes. Historically, each series of experiments developed its own code, tuned to its needs. Modern experiments would benefit from a universal code (e.g., PYTHIA) which would allow more direct comparison between experiments. GENIE attempts to be that code. This paper compares the most commonly used codes and provides some details of GENIE.

  15. A Monte Carlo algorithm for degenerate plasmas

    SciTech Connect

    Turrell, A.E.; Sherlock, M.; Rose, S.J.

    2013-09-15

    A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
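
    Initialization from a Fermi-Dirac distribution can be done by straightforward rejection sampling; the sketch below works in dimensionless energy units with a flat envelope, and all parameters are illustrative rather than taken from the paper.

```python
import numpy as np

def sample_fermi_dirac(n_samples, mu=1.0, T=0.1, eps_max=None, seed=0):
    """Rejection-sample particle energies from a Fermi-Dirac distribution,
    pdf(eps) ∝ sqrt(eps) / (exp((eps - mu)/T) + 1)   (dimensionless units,
    with the sqrt(eps) factor from the 3D density of states).

    A flat envelope over [0, eps_max] is used: crude but safe, since the
    unnormalized density is bounded by sqrt(eps_max).
    """
    rng = np.random.default_rng(seed)
    if eps_max is None:
        eps_max = mu + 20.0 * T            # occupation ~0 beyond this
    bound = np.sqrt(eps_max)               # majorizes the density
    out = []
    while len(out) < n_samples:
        eps = rng.random(n_samples) * eps_max
        f = np.sqrt(eps) / (np.exp((eps - mu) / T) + 1.0)
        keep = rng.random(n_samples) * bound < f
        out.extend(eps[keep])
    return np.array(out[:n_samples])

energies = sample_fermi_dirac(100_000)   # sharp edge near eps = mu for T << mu
```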

  16. An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis

    SciTech Connect

    William R. Martin; John C. Lee

    2009-12-30

    Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.

  17. Biochemical simulations: stochastic, approximate stochastic and hybrid approaches

    PubMed Central

    2009-01-01

    Computer simulations have become an invaluable tool to study the sometimes counterintuitive temporal dynamics of (bio-)chemical systems. In particular, stochastic simulation methods have attracted increasing interest recently. In contrast to the well-known deterministic approach based on ordinary differential equations, they can capture effects that occur due to the underlying discreteness of the systems and random fluctuations in molecular numbers. Numerous stochastic, approximate stochastic and hybrid simulation methods have been proposed in the literature. In this article, they are systematically reviewed in order to guide the researcher and help her find the appropriate method for a specific problem. PMID:19151097

  18. Stochastic reconstruction of sandstones

    PubMed

    Manwart; Torquato; Hilfer

    2000-07-01

    A simulated annealing algorithm is employed to generate a stochastic model for a Berea sandstone and a Fontainebleau sandstone, each with a prescribed two-point probability function, lineal-path function, and "pore size" distribution function. We find that the temperature decrease of the annealing has to be rather quick to yield isotropic and percolating configurations. A comparison of simple morphological quantities indicates good agreement between the reconstructions and the original sandstones. Also, the mean survival time of a random walker in the pore space is reproduced with good accuracy. However, a more detailed investigation by means of local porosity theory shows that there may be significant differences in the geometrical connectivity between the reconstructed and the experimental samples. PMID:11088546
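
    The annealing loop behind such reconstructions can be sketched compactly: swap a pore and a solid pixel (conserving porosity) and accept by the Metropolis rule on the misfit of the two-point probability function. Sizes, lags, and the cooling schedule below are toy values chosen so the sketch runs quickly; the actual study also matched lineal-path and pore-size functions.

```python
import numpy as np

def s2(img, max_lag):
    """Two-point probability S2(r): chance that two pixels a distance r
    apart (along either axis, periodic wrap) are both pore (value 1)."""
    return np.array([0.5 * (img * np.roll(img, r, axis=0)).mean() +
                     0.5 * (img * np.roll(img, r, axis=1)).mean()
                     for r in range(1, max_lag + 1)])

def reconstruct(target, porosity, size=32, max_lag=6,
                n_steps=5000, T0=1e-4, seed=0):
    """Simulated annealing: swap one pore and one solid pixel (porosity
    is conserved) and accept via the Metropolis rule on the S2 misfit.
    The geometric cooling is deliberately fast, echoing the finding
    that slow cooling gave unsatisfactory configurations."""
    rng = np.random.default_rng(seed)
    img = (rng.random((size, size)) < porosity).astype(float)
    energy = ((s2(img, max_lag) - target) ** 2).sum()
    for step in range(n_steps):
        T = T0 * 0.999 ** step
        i = tuple(np.argwhere(img == 1)[rng.integers((img == 1).sum())])
        j = tuple(np.argwhere(img == 0)[rng.integers((img == 0).sum())])
        img[i], img[j] = 0.0, 1.0                     # trial swap
        e_new = ((s2(img, max_lag) - target) ** 2).sum()
        if rng.random() < np.exp(min(0.0, (energy - e_new) / T)):
            energy = e_new                            # accept
        else:
            img[i], img[j] = 1.0, 0.0                 # revert
    return img

# Match statistics measured on a reference binary 'sample'.
ref = (np.random.default_rng(3).random((32, 32)) < 0.3).astype(float)
out = reconstruct(s2(ref, 6), porosity=0.3)
```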

  19. Stochastic techno-economic evaluation of cellulosic biofuel pathways.

    PubMed

    Zhao, Xin; Brown, Tristan R; Tyner, Wallace E

    2015-12-01

    This study evaluates the economic feasibility and stochastic dominance rank of eight cellulosic biofuel production pathways (including gasification, pyrolysis, liquefaction, and fermentation) under technological and economic uncertainty. A financial analysis based on techno-economic assessment is employed to derive net present values and breakeven prices for each pathway. Uncertainty is investigated and incorporated into fuel prices and techno-economic variables: capital cost, conversion technology yield, hydrogen cost, natural gas price, and feedstock cost, using @Risk, a Palisade Corporation software package. The results indicate that none of the eight pathways would be profitable at expected values under projected energy prices. Fast pyrolysis and hydroprocessing (FPH) has the lowest breakeven fuel price at 3.11$/gallon of gasoline equivalent (0.82$/liter of gasoline equivalent). At the projected energy prices, FPH investors could expect a 59% probability of loss. Stochastic dominance ranking is performed on the basis of return on investment; most risk-averse decision makers would prefer FPH to the other pathways. PMID:26454041
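
    The Monte Carlo layer of such a study reduces to sampling the uncertain inputs and propagating them through the cash-flow arithmetic. The sketch below computes an NPV distribution and a probability of loss; all distributions and plant parameters are purely illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative input distributions, one draw per Monte Carlo scenario.
capex = rng.triangular(180e6, 240e6, 320e6, n)        # $, capital cost
yield_gge = rng.normal(60.0, 6.0, n)                  # gal gasoline-eq / dry ton
feedstock = rng.normal(80.0, 10.0, n)                 # $ / dry ton
fuel_price = rng.lognormal(np.log(2.6), 0.25, n)      # $ / gge

tons_per_year, years, discount = 2000 * 350, 20, 0.10
annuity = (1 - (1 + discount) ** -years) / discount   # present-value factor

margin = fuel_price * yield_gge - feedstock           # $ / dry ton, per year
npv = margin * tons_per_year * annuity - capex

print("mean NPV ($M):", npv.mean() / 1e6)
print("P(loss):", (npv < 0).mean())                   # cf. the 59% figure above
```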

  20. Stochastic image reconstruction for a dual-particle imaging system

    NASA Astrophysics Data System (ADS)

    Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Flaska, M.; Clarke, S. D.; Pozzi, S. A.; Tomanin, A.; Peerani, P.

    2016-02-01

    Stochastic image reconstruction has been applied to a dual-particle imaging system being designed for nuclear safeguards applications. The dual-particle imager (DPI) is a combined Compton-scatter and neutron-scatter camera capable of producing separate neutron and photon images. The stochastic origin ensembles (SOE) method was investigated as an imaging method for the DPI because only a minimal estimation of system response is required to produce images with quality that is comparable to common maximum-likelihood methods. This work contains neutron and photon SOE image reconstructions for a 252Cf point source, two mixed-oxide (MOX) fuel canisters representing point sources, and the MOX fuel canisters representing a distributed source. Simulation of the DPI using MCNPX-PoliMi is validated by comparison of simulated and measured results. Because image quality is dependent on the number of counts and iterations used, the relationship between these quantities is investigated.

  1. Nuclear pairing within a configuration-space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Lingle, Mark; Volya, Alexander

    2015-06-01

    Pairing correlations in nuclei play a decisive role in determining nuclear drip lines, binding energies, and many collective properties. In this work a new configuration-space Monte Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, probabilistic interpretation of quantum-mechanical amplitudes, and ability to handle truly large-scale problems with defined precision and error control are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with nonconstant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and problems when the relevant configuration space is large.

  2. Advanced interacting sequential Monte Carlo sampling for inverse scattering

    NASA Astrophysics Data System (ADS)

    Giraud, F.; Minvielle, P.; Del Moral, P.

    2013-09-01

    The following electromagnetic (EM) inverse problem is addressed. It consists in estimating the local radioelectric properties of materials covering an object from global EM scattering measurements, at various incidences and wave frequencies. This large-scale, ill-posed inverse problem is explored by intensive exploitation of an efficient 2D Maxwell solver, distributed on high-performance computing machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, from which Bayesian inference can be performed. Considering the radioelectric properties as a hidden dynamic stochastic process that evolves with frequency, it is shown how advanced Markov chain Monte Carlo methods—called sequential Monte Carlo or interacting particles—can take advantage of this structure and provide local EM property estimates.

  3. Monte Carlo approach to tissue-cell populations

    NASA Astrophysics Data System (ADS)

    Drasdo, D.; Kree, R.; McCaskill, J. S.

    1995-12-01

    We describe a stochastic dynamics of tissue cells with special emphasis on epithelial cells and the fibroblasts and fibrocytes of connective tissue. Pattern formation and growth characteristics of such cell populations in culture are investigated numerically by Monte Carlo simulations for quasi-two-dimensional systems of cells. A number of quantitative predictions are obtained which may be confronted with experimental results. Furthermore, we introduce several biologically motivated variants of our basic model and briefly discuss the simulation of two-dimensional analogs of two complex processes in tissues: the growth of a sarcoma across an epithelial boundary, and the wound healing of a skin cut. As compared to other approaches, we find the Monte Carlo approach to tissue growth and structure to be particularly simple and flexible. It allows for a hierarchy of models reaching from a global description of birth-death processes to very specific features of intracellular dynamics.

  4. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.

  5. On the evaluation of expected performance cost for partially observed closed-loop stochastic systems

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.; Eslami, M.

    1985-01-01

    New methods are presented for evaluating the expected performance cost of partially observed closed-loop stochastic systems. When the variances of the process statistics are small, a linearized model of the closed-loop stochastic system is defined for which the expected cost can be evaluated by recursion on a set of purely deterministic difference equations. When the variances of the process statistics are large, the linearized model can be used in the control variate method of variance reduction for reducing the number of sample paths required for effective Monte Carlo estimation.
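
    The control-variate idea is that a correlated surrogate with exactly known mean (here, the role the linearized model plays) absorbs much of the estimator's variance. A generic scalar sketch, with an illustrative integrand unrelated to the closed-loop systems of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Expensive quantity f(X) and a correlated 'linearized' surrogate g(X)
# whose expectation is known exactly (for X ~ N(0,1), E[g] = 1).
x = rng.normal(size=n)
f = np.exp(0.3 * x)            # stands in for the full nonlinear cost
g = 1.0 + 0.3 * x              # linearization of f around x = 0

# Optimal control-variate coefficient and the corrected estimator.
beta = np.cov(f, g)[0, 1] / np.var(g)
cv_estimate = f.mean() - beta * (g.mean() - 1.0)

print(f.mean(), cv_estimate)   # both estimate E[f] = exp(0.045); the
                               # control-variate version has lower variance
```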

  6. Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids

    SciTech Connect

    Donev, A; Alder, B J; Garcia, A L

    2008-02-26

    A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.

  7. On the stochastic dependence between photomultipliers in the TDCR method.

    PubMed

    Bobin, C; Thiam, C; Chauvenet, B; Bouchard, J

    2012-04-01

    The TDCR method (Triple to Double Coincidence Ratio) is widely implemented in National Metrology Institutes for activity primary measurements based on liquid scintillation counting. The detection efficiency, and thereby the activity, is determined using a statistical and physical model. In this article, we propose to revisit the application of the classical TDCR model and its validity by introducing a prerequisite of stochastic independence between photomultiplier counting channels. In order to support the need for this condition, the demonstration is carried out by considering the simple case of a monoenergetic deposition in the scintillation cocktail. Simulations of triple and double coincidence counting are presented in order to point out the existence of stochastic dependence between photomultipliers that can be significant in the case of low-energy deposition in the scintillator. It is demonstrated that a problem of time dependence arises when the coincidence resolving time is shorter than the time distribution of scintillation photons; in addition, it is shown that this effect is at the origin of a bias in the detection efficiency calculation encountered for the standardization of ³H. This investigation is extended to the study of geometric dependence between photomultipliers related to the position of light emission inside the scintillation vial (the volume of the vial is not considered in the classical TDCR model). In that case, triple and double coincidences are calculated using a stochastic TDCR model based on the Monte-Carlo simulation code Geant4. This stochastic approach is also applied to the standardization of ⁵¹Cr by liquid scintillation; the difference observed in detection efficiencies calculated using the standard and stochastic models can be explained by such an effect of geometric dependence between photomultiplier channels. PMID:22244195
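
    The time-dependence effect can be reproduced with a toy Monte Carlo model: each of three photomultipliers fires at its first photoelectron, photoelectron times follow an exponential scintillation decay, and triple/double coincidences are counted within a resolving window. All parameter values below are assumptions, and the model ignores the geometric effects that require the Geant4-based treatment.

      import numpy as np

      rng = np.random.default_rng(2)

      def tdcr(mean_pe=0.8, tau_ns=30.0, window_ns=40.0, n_events=50_000):
          triples = doubles = 0
          for _ in range(n_events):
              first = []                                  # first-photoelectron time per PMT
              for _pmt in range(3):
                  n = rng.poisson(mean_pe)                # photoelectrons in this PMT
                  if n > 0:
                      first.append(rng.exponential(tau_ns, n).min())
              first.sort()
              if len(first) == 3 and first[2] - first[0] <= window_ns:
                  triples += 1
              if len(first) >= 2 and first[1] - first[0] <= window_ns:
                  doubles += 1
          return triples / doubles

      print("TDCR, wide window: ", round(tdcr(window_ns=1000.0), 3))
      print("TDCR, short window:", round(tdcr(window_ns=10.0), 3))

    Shortening the window below the scintillation decay time visibly depresses the triple-to-double ratio, which is the bias discussed above.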

  8. Stochastic Evaluation of Riparian Vegetation Dynamics in River Channels

    NASA Astrophysics Data System (ADS)

    Miyamoto, H.; Kimura, R.; Toshimori, N.

    2013-12-01

    Vegetation overgrowth in sand bars and floodplains has been a serious problem for river management in Japan. From the viewpoints of flood control and ecological conservation, it is necessary to predict the vegetation dynamics accurately over long periods of time. In this study, we have developed a stochastic model for predicting the dynamics of trees in floodplains with emphasis on the interaction with flood impacts. The model consists of the following four processes coupling ecohydrology with biogeomorphology: (i) stochastic behavior of flow discharge, (ii) hydrodynamics in a channel with vegetation, (iii) variation of riverbed topography, and (iv) vegetation dynamics on the floodplain. In the model, the flood discharge is stochastically simulated using a Poisson process, one of the conventional approaches in hydrological time-series generation. The model for vegetation dynamics includes the effects of tree growth, mortality by flood impacts, and infant tree invasion. To determine the model parameters, vegetation conditions have been observed mainly before and after flood impacts since 2008 at a field site located 23.2-24.0 km from the river mouth in Kako River, Japan. This site is one of the vegetation overgrowth locations in Kako River floodplains, where the predominant tree species are willows and bamboos. In this presentation, the sensitivity of the vegetation overgrowth tendency is investigated in Kako River channels. Through the Monte Carlo simulation for several cross sections in Kako River, responses of the vegetated channels are stochastically evaluated in terms of the changes of discharge magnitude and channel geomorphology. The expectation and standard deviation of vegetation areal ratio are compared in the different channel cross sections for different river discharges and relative floodplain heights. The result shows that the vegetation status changes sensitively in the channels with larger discharge and insensitively in the lower floodplain
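
    A heavily simplified sketch of the model's coupling, assuming Poisson flood arrivals, exponential peak discharges, and logistic regrowth between floods; all rates and thresholds are hypothetical stand-ins, not the calibrated Kako River parameters.

      import numpy as np

      rng = np.random.default_rng(3)

      def vegetation_ratio(years=100.0, flood_rate=1.5, growth=0.5,
                           q_mean=300.0, q_kill=800.0):
          t, veg = 0.0, 0.1
          while True:
              dt = rng.exponential(1.0 / flood_rate)          # Poisson-process flood arrival
              if t + dt > years:
                  break
              t += dt
              # closed-form logistic regrowth between floods
              veg = 1.0 / (1.0 + (1.0 / veg - 1.0) * np.exp(-growth * dt))
              if rng.exponential(q_mean) > q_kill:            # peak exceeds mortality threshold
                  veg = max(0.2 * veg * rng.random(), 1e-3)   # flood-induced mortality
          return veg

      ratios = [vegetation_ratio() for _ in range(2000)]
      print(f"areal ratio: mean={np.mean(ratios):.2f}, std={np.std(ratios):.2f}")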

  9. Adaptive hybrid simulations for multiscale stochastic reaction networks

    SciTech Connect

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-21

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of an SRN evolve, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
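
    For reference, the exact discrete baseline that such hybrid methods accelerate is the SSA. A minimal Gillespie implementation for a birth-death network 0 -> X (rate k1), X -> 0 (rate k2*x), with arbitrarily chosen rates, looks like this:

      import numpy as np

      rng = np.random.default_rng(4)

      def ssa(x0, t_end, k1, k2):
          """Gillespie SSA for the birth-death network 0 -> X, X -> 0."""
          t, x = 0.0, x0
          traj = [(t, x)]
          while t < t_end:
              a = np.array([k1, k2 * x])            # reaction propensities
              a0 = a.sum()
              if a0 == 0:
                  break
              t += rng.exponential(1.0 / a0)        # exponential waiting time
              x += 1 if rng.random() < a[0] / a0 else -1   # pick which reaction fires
              traj.append((t, x))
          return traj

      traj = ssa(x0=0, t_end=50.0, k1=10.0, k2=0.5)
      print("final copy number:", traj[-1][1])      # stationary mean is k1/k2 = 20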

  10. Adaptive hybrid simulations for multiscale stochastic reaction networks

    NASA Astrophysics Data System (ADS)

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-01

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of an SRN evolve, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.

  11. Angular biasing in implicit Monte-Carlo

    SciTech Connect

    Zimmerman, G.B.

    1994-10-20

    Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two-dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low-weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller-weight photons at the polar regions of the capsule, where small mass zones are most sensitive to statistical noise.
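
    The essence of the technique is importance sampling of the emission direction: more photons are launched into the cone subtended by the capsule, each carrying a reduced statistical weight so the estimator stays unbiased. The sketch below shows the bookkeeping for an isotropic source in one angular variable; the cone size and biasing fraction are arbitrary, and the real implementation inside an implicit Monte Carlo code is more involved.

      import numpy as np

      rng = np.random.default_rng(5)

      def emit_biased(n, mu0=0.9, frac=0.5):
          """Direction cosines biased toward the capsule cone mu > mu0.

          Analog pdf is uniform on [-1, 1]; the biased pdf mixes it with a
          uniform pdf on [mu0, 1]. Weights keep the estimator unbiased."""
          in_cone = rng.random(n) < frac
          mu = np.where(in_cone,
                        rng.uniform(mu0, 1.0, n),
                        rng.uniform(-1.0, 1.0, n))
          p_analog = 0.5
          p_biased = (1 - frac) * 0.5 + np.where(mu >= mu0, frac / (1.0 - mu0), 0.0)
          return mu, p_analog / p_biased

      mu, w = emit_biased(100_000)
      hits = mu >= 0.9
      print("capsule-bound fraction of histories:", hits.mean())   # many more in the cone
      print("weighted estimate:", (w * hits).mean())               # unbiased: (1-0.9)/2 = 0.05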

  12. A stochastic multi-symplectic scheme for stochastic Maxwell equations with additive noise

    SciTech Connect

    Hong, Jialin; Zhang, Liying

    2014-07-01

    In this paper we investigate a stochastic multi-symplectic method for stochastic Maxwell equations with additive noise. Based on the stochastic version of variational principle, we find a way to obtain the stochastic multi-symplectic structure of three-dimensional (3-D) stochastic Maxwell equations with additive noise. We propose a stochastic multi-symplectic scheme and show that it preserves the stochastic multi-symplectic conservation law and the local and global stochastic energy dissipative properties, which the equations themselves possess. Numerical experiments are performed to verify the numerical behaviors of the stochastic multi-symplectic scheme.

  13. Monte Carlo portal dosimetry

    SciTech Connect

    Chin, P.W. E-mail: mary.chin@physics.org

    2005-10-15

    This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verifications. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes was used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cm × 5 cm to 20 cm × 20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.

  14. Stochastic roots of growth phenomena

    NASA Astrophysics Data System (ADS)

    De Lauro, E.; De Martino, S.; De Siena, S.; Giorno, V.

    2014-05-01

    We show that the Gompertz equation describes the evolution in time of the median of a geometric stochastic process. Therefore, we infer that the process itself generates the growth. This result further allows us to exploit a stochastic variational principle to account for self-regulation of growth through feedback of relative density variations. The conceptually well-defined framework so introduced shows its usefulness by suggesting a form of control of growth by exploiting external actions.
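
    The claim about the median can be checked numerically: for a geometric (lognormal) process the median is exp(E[log X]), so choosing the drift so that the integrated log-drift equals b(1 - e^(-ct)) makes the median trace a Gompertz curve. A sketch under these assumptions, with all parameter values invented:

      import numpy as np

      rng = np.random.default_rng(6)

      # dX = mu(t) X dt + sigma X dW, drift chosen so the median of X(t)
      # follows the Gompertz curve x0 * exp(b * (1 - exp(-c t))).
      b, c, sigma, x0 = 3.0, 0.5, 0.4, 1.0
      T, n_steps, n_paths = 10.0, 1000, 20_000
      dt = T / n_steps

      x, t = np.full(n_paths, x0), 0.0
      for _ in range(n_steps):
          mu = b * c * np.exp(-c * t) + 0.5 * sigma**2
          # exact log-Euler update of geometric Brownian motion
          x *= np.exp((mu - 0.5 * sigma**2) * dt
                      + sigma * np.sqrt(dt) * rng.normal(size=n_paths))
          t += dt

      gompertz = x0 * np.exp(b * (1.0 - np.exp(-c * T)))
      print(f"simulated median: {np.median(x):.3f}, Gompertz value: {gompertz:.3f}")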

  15. Stochastic superparameterization in quasigeostrophic turbulence

    SciTech Connect

    Grooms, Ian; Majda, Andrew J.

    2014-08-15

    In this article we expand and develop the authors' recently proposed methodology for efficient stochastic superparameterization algorithms for geophysical turbulence. Geophysical turbulence is characterized by significant intermittent cascades of energy from the unresolved to the resolved scales, resulting in complex patterns of waves, jets, and vortices. Conventional superparameterization simulates large scale dynamics on a coarse grid in a physical domain, and couples these dynamics to high-resolution simulations on periodic domains embedded in the coarse grid. Stochastic superparameterization replaces the nonlinear, deterministic eddy equations on periodic embedded domains by quasilinear stochastic approximations on formally infinite embedded domains. The result is a seamless algorithm which never uses a small scale grid and is far cheaper than conventional superparameterization, yet has achieved significant success in difficult test problems. Various design choices in the algorithm are investigated in detail here, including decoupling the timescale of evolution on the embedded domains from the length of the time step used on the coarse grid, and sensitivity to certain assumed properties of the eddies (e.g. the shape of the assumed eddy energy spectrum). We present four closures based on stochastic superparameterization which elucidate the properties of the underlying framework: a ‘null hypothesis’ stochastic closure that uncouples the eddies from the mean, a stochastic closure with nonlinearly coupled eddies and mean, a nonlinear deterministic closure, and a stochastic closure based on energy conservation. The different algorithms are compared and contrasted on a stringent test suite for quasigeostrophic turbulence involving two-layer dynamics on a β-plane forced by an imposed background shear. The success of the algorithms developed here suggests that they may be fruitfully applied to more realistic situations. They are expected to be particularly useful in providing accurate and

  16. Stochastic cooling in RHIC

    SciTech Connect

    Brennan J. M.; Blaskiewicz, M.; Mernick, K.

    2012-05-20

    The full 6-dimensional [x,x'; y,y'; z,z'] stochastic cooling system for RHIC was completed and operational for the FY12 Uranium-Uranium collider run. Cooling enhances the integrated luminosity of the Uranium collisions by a factor of 5, primarily by reducing the transverse emittances but also by cooling in the longitudinal plane to preserve the bunch length. The components have been deployed incrementally over the past several runs, beginning with longitudinal cooling, then cooling in the vertical planes but multiplexed between the Yellow and Blue rings, next cooling both rings simultaneously in vertical (the horizontal plane was cooled by betatron coupling), and now simultaneous horizontal cooling has been commissioned. The system operates between 5 and 9 GHz with 3 × 10^8 Uranium ions per bunch and produces a cooling half-time of approximately 20 minutes. The ultimate emittance is determined by the balance between cooling and emittance growth from Intra-Beam Scattering. Specific details of the apparatus and mathematical techniques for calculating its performance have been published elsewhere. Here we report on: the method of operation, results with beam, and comparison of results to simulations.

  17. Fuel flexible fuel injector

    SciTech Connect

    Tuthill, Richard S; Davis, Dustin W; Dai, Zhongtao

    2015-02-03

    A disclosed fuel injector provides mixing of fuel with airflow by surrounding a swirled fuel flow with first and second swirled airflows that ensures mixing prior to or upon entering the combustion chamber. Fuel tubes produce a central fuel flow along with a central airflow through a plurality of openings to generate the high velocity fuel/air mixture along the axis of the fuel injector in addition to the swirled fuel/air mixture.

  18. Monte Carlo and quasi-Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Caflisch, Russel E.

    Monte Carlo is one of the most versatile and widely used numerical methods. Its convergence rate, O(N^(-1/2)), is independent of dimension, which shows Monte Carlo to be very robust but also slow. This article presents an introduction to Monte Carlo methods for integration problems, including convergence theory, sampling methods and variance reduction techniques. Accelerated convergence for Monte Carlo quadrature is attained using quasi-random (also called low-discrepancy) sequences, which are a deterministic alternative to random or pseudo-random sequences. The points in a quasi-random sequence are correlated to provide greater uniformity. The resulting quadrature method, called quasi-Monte Carlo, has a convergence rate of approximately O((log N)^k N^(-1)). For quasi-Monte Carlo, both theoretical error estimates and practical limitations are presented. Although the emphasis in this article is on integration, Monte Carlo simulation of rarefied gas dynamics is also discussed. In the limit of small mean free path (that is, the fluid dynamic limit), Monte Carlo loses its effectiveness because the collisional distance is much less than the fluid dynamic length scale. Computational examples are presented throughout the text to illustrate the theory. A number of open problems are described.
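
    The two convergence rates are easy to observe side by side. The sketch below integrates a smooth function over [0,1]^5 with pseudo-random points and with a Sobol low-discrepancy sequence (via scipy.stats.qmc); the test integrand is an arbitrary choice with known integral 1.

      import numpy as np
      from scipy.stats import qmc

      rng = np.random.default_rng(7)

      # Integrate f(x) = prod_i (1 + (x_i - 0.5)) over [0,1]^5; exact value is 1.
      d, n = 5, 2**12
      f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)

      x_mc = rng.random((n, d))                      # pseudo-random points
      x_qmc = qmc.Sobol(d, seed=7).random(n)         # low-discrepancy (quasi-random) points

      print(f"MC error:  {abs(f(x_mc).mean() - 1.0):.2e}")    # ~ O(N^(-1/2))
      print(f"QMC error: {abs(f(x_qmc).mean() - 1.0):.2e}")   # ~ O((log N)^d N^(-1))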

  19. Numerical solution of the Stratonovich- and Ito–Euler equations: Application to the stochastic piston problem

    SciTech Connect

    Zhang, Zhongqiang; Yang, Xiu; Lin, Guang; Karniadakis, George Em

    2013-03-01

    We consider a piston with a velocity perturbed by Brownian motion moving into a straight tube filled with a perfect gas at rest. The shock generated ahead of the piston can be located by solving the one-dimensional Euler equations driven by white noise using the Stratonovich or Ito formulations. We approximate the Brownian motion with its spectral truncation and subsequently apply stochastic collocation using either sparse grid or the quasi-Monte Carlo (QMC) method. In particular, we first transform the Euler equations with an unsteady stochastic boundary into stochastic Euler equations over a fixed domain with a time-dependent stochastic source term. We then solve the transformed equations by splitting them up into two parts, i.e., a ‘deterministic part’ and a ‘stochastic part’. Numerical results verify the Stratonovich–Euler and Ito–Euler models against stochastic perturbation results, and demonstrate the efficiency of sparse grid and QMC for small and large random piston motions, respectively. The variance of shock location of the piston grows cubically in the case of white noise in contrast to colored noise reported in [1], where the variance of shock location grows quadratically with time for short times and linearly for longer times.
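
    The spectral truncation step can be sketched directly: Brownian motion on [0,1] has the Karhunen-Loeve expansion W(t) = sqrt(2) * sum_k xi_k sin((k-1/2) pi t) / ((k-1/2) pi) with independent standard normal xi_k, and truncating the sum gives the finite-dimensional noise on which stochastic collocation then operates. The mode count and grid below are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(8)

      t = np.linspace(0.0, 1.0, 201)
      k = np.arange(1, 51)
      freq = (k - 0.5) * np.pi                                  # KL eigenfrequencies
      basis = np.sqrt(2.0) * np.sin(np.outer(t, freq)) / freq   # (time, mode) KL basis

      xi = rng.normal(size=(5000, 50))                          # one row of modes per path
      paths = xi @ basis.T                                      # truncated Brownian paths

      # 50 modes capture about 99.6% of Var[W(1)], whose exact value is 1.
      print("Var[W(1)] from 50 modes:", paths[:, -1].var().round(3))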

  20. A probabilistic graphical model approach to stochastic multiscale partial differential equations

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas

    2013-10-01

    We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.

  1. Fluorescence Correlation Spectroscopy and Nonlinear Stochastic Reaction-Diffusion

    SciTech Connect

    Del Razo, Mauricio; Pan, Wenxiao; Qian, Hong; Lin, Guang

    2014-05-30

    The currently existing theory of fluorescence correlation spectroscopy (FCS) is based on the linear fluctuation theory originally developed by Einstein, Onsager, Lax, and others as a phenomenological approach to equilibrium fluctuations in bulk solutions. For mesoscopic reaction-diffusion systems with nonlinear chemical reactions among a small number of molecules, a situation often encountered in single-cell biochemistry, it is expected that FCS time correlation functions of a reaction-diffusion system can deviate from the classic results of Elson and Magde [Biopolymers (1974) 13:1-27]. We first discuss this nonlinear effect for reaction systems without diffusion. For nonlinear stochastic reaction-diffusion systems there are no closed-form solutions; therefore, stochastic Monte-Carlo simulations are carried out. We show that the deviation is small for a simple bimolecular reaction; the most significant deviations occur when the number of molecules is small and of the same order. Extending Delbrück-Gillespie's theory for stochastic nonlinear reactions with rapid stirring to reaction-diffusion systems provides a mesoscopic model for chemical and biochemical reactions at the nanometric and mesoscopic levels, such as in a single biological cell.

  2. Stochastic effects in a thermochemical system with Newtonian heat exchange

    NASA Astrophysics Data System (ADS)

    Nowakowski, B.; Lemarchand, A.

    2001-12-01

    We develop a mesoscopic description of stochastic effects in the Newtonian heat exchange between a diluted gas system and a thermostat. We explicitly study the homogeneous Semenov model involving a thermochemical reaction and neglecting consumption of reactants. The master equation includes a transition rate for the thermal transfer process, which is derived on the basis of the statistics for inelastic collisions between gas particles and walls of the thermostat. The main assumption is that the perturbation of the Maxwellian particle velocity distribution can be neglected. The transition function for the thermal process admits a continuous spectrum of temperature changes, and consequently, the master equation has a complicated integro-differential form. We perform Monte Carlo simulations based on this equation to study the stochastic effects in the Semenov system in the explosive regime. The dispersion of ignition times is calculated as a function of system size. For sufficiently small systems, the probability distribution of temperature displays transient bimodality during the ignition period. The results of the stochastic description are successfully compared with those of direct simulations of microscopic particle dynamics.

  3. A stochastic transcriptional switch model for single cell imaging data

    PubMed Central

    Hey, Kirsty L.; Momiji, Hiroshi; Featherstone, Karen; Davis, Julian R.E.; White, Michael R.H.; Rand, David A.; Finkenstädt, Bärbel

    2015-01-01

    Gene expression is made up of inherently stochastic processes within single cells and can be modeled through stochastic reaction networks (SRNs). In particular, SRNs capture the features of intrinsic variability arising from intracellular biochemical processes. We extend current models for gene expression to allow the transcriptional process within an SRN to follow a random step or switch function which may be estimated using reversible jump Markov chain Monte Carlo (MCMC). This stochastic switch model provides a generic framework to capture many different dynamic features observed in single cell gene expression. Inference for such SRNs is challenging due to the intractability of the transition densities. We derive a model-specific birth–death approximation and study its use for inference in comparison with the linear noise approximation where both approximations are considered within the unifying framework of state-space models. The methodology is applied to synthetic as well as experimental single cell imaging data measuring expression of the human prolactin gene in pituitary cells. PMID:25819987

  4. A stochastic transcriptional switch model for single cell imaging data.

    PubMed

    Hey, Kirsty L; Momiji, Hiroshi; Featherstone, Karen; Davis, Julian R E; White, Michael R H; Rand, David A; Finkenstädt, Bärbel

    2015-10-01

    Gene expression is made up of inherently stochastic processes within single cells and can be modeled through stochastic reaction networks (SRNs). In particular, SRNs capture the features of intrinsic variability arising from intracellular biochemical processes. We extend current models for gene expression to allow the transcriptional process within an SRN to follow a random step or switch function which may be estimated using reversible jump Markov chain Monte Carlo (MCMC). This stochastic switch model provides a generic framework to capture many different dynamic features observed in single cell gene expression. Inference for such SRNs is challenging due to the intractability of the transition densities. We derive a model-specific birth-death approximation and study its use for inference in comparison with the linear noise approximation where both approximations are considered within the unifying framework of state-space models. The methodology is applied to synthetic as well as experimental single cell imaging data measuring expression of the human prolactin gene in pituitary cells. PMID:25819987

  5. Global parameter estimation methods for stochastic biochemical systems

    PubMed Central

    2010-01-01

    Background: The importance of stochasticity in cellular processes having low number of molecules has resulted in the development of stochastic models such as chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for the end-applications like analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible to single molecular levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results: Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions: The parameter estimation methodologies
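
    A stripped-down version of the cumulative density function distance estimator, using a birth-death network as a stand-in for the chemical master equation and a grid search in place of a proper optimizer; the rates, sample sizes, and sup-norm choice of distance are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(9)

      def sample_copy_numbers(k1, k2=0.5, t_end=20.0, n_cells=400):
          """End-time copy numbers of a birth-death gene model, via Gillespie simulation."""
          out = np.empty(n_cells)
          for i in range(n_cells):
              t, x = 0.0, 0
              while True:
                  a0 = k1 + k2 * x                   # total propensity
                  t += rng.exponential(1.0 / a0)
                  if t > t_end:
                      break
                  x += 1 if rng.random() < k1 / a0 else -1
              out[i] = x
          return out

      def cdf_distance(a, b):
          """Sup distance between two empirical cumulative density functions."""
          grid = np.arange(0, max(a.max(), b.max()) + 1)
          Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
          Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
          return np.abs(Fa - Fb).max()

      data = sample_copy_numbers(k1=10.0)            # synthetic "experimental" data
      candidates = np.linspace(5.0, 15.0, 11)
      best = min(candidates, key=lambda k: cdf_distance(data, sample_copy_numbers(k)))
      print("estimated k1:", best)                   # true value is 10.0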

  6. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k_eff). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP [1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
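
    Once a spatially discretized fission matrix has been tallied, the fundamental eigenpair follows from ordinary power iteration. The sketch below runs that step on a synthetic matrix; the kernel shape and normalization are invented stand-ins for an actual MCNP tally.

      import numpy as np

      # Toy spatially-discretized fission matrix: F[i, j] is the expected number of
      # next-generation fission neutrons born in region i per neutron born in region j.
      n = 20
      dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
      F = 1.02 * np.exp(-dist / 3.0)
      F /= F.sum(axis=0).max()                      # normalize to a near-critical toy system

      def power_iteration(F, tol=1e-12, max_iter=10_000):
          """Fundamental eigenpair of the fission matrix: k_eff and the fission source."""
          s, k = np.ones(F.shape[0]), 0.0
          for _ in range(max_iter):
              s_new = F @ s
              k_new = np.linalg.norm(s_new) / np.linalg.norm(s)
              s = s_new / np.linalg.norm(s_new)
              if abs(k_new - k) < tol:
                  break
              k = k_new
          return k_new, s

      k_eff, source = power_iteration(F)
      print(f"k_eff = {k_eff:.6f}")
      print("fundamental fission source (first 5 regions):", np.round(source[:5], 4))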

  7. Stochastic uncertainty analysis for unconfined flow systems

    USGS Publications Warehouse

    Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming

    2006-01-01

    A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is on the basis of an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (ln K_S) is first expanded into a series in terms of orthogonal Gaussian standard random variables with their coefficients obtained as the eigenvalues and eigenfunctions of the covariance function of ln K_S. Next, head h is decomposed as a perturbation expansion series Σ_m A^(m), where A^(m) represents the mth-order head term with respect to the standard deviation of ln K_S. Then A^(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients A^(m)_{i1,i2,...,im} are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on A^(m)_{i1,i2,...,im}. A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort as compared to the traditional Monte Carlo simulation technique. Copyright 2006 by the American Geophysical Union.
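
    The first step of the KLME machinery, the Karhunen-Loeve expansion of ln K_S, can be sketched by eigendecomposing a discretized covariance matrix; the exponential covariance, correlation length, and truncation order below are illustrative choices, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(11)

      # 1-D log-conductivity field ln K_S with exponential covariance
      # C(x, y) = var * exp(-|x - y| / corr_len), discretized on n points.
      n, var, corr_len = 200, 1.0, 0.2
      x = np.linspace(0.0, 1.0, n)
      C = var * np.exp(-np.abs(np.subtract.outer(x, x)) / corr_len)

      vals, vecs = np.linalg.eigh(C)               # eigenpairs of the covariance matrix
      order = np.argsort(vals)[::-1]               # sort descending
      vals, vecs = vals[order], vecs[:, order]

      m = 20                                       # truncation order
      xi = rng.normal(size=m)                      # orthogonal Gaussian random variables
      ln_Ks = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)   # one realization of the field

      print(f"{m} modes capture {vals[:m].sum() / vals.sum():.1%} of the field variance")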

  8. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  9. Frost in Charitum Montes

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-387, 10 June 2003

    This is a Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle view of the Charitum Montes, south of Argyre Planitia, in early June 2003. The seasonal south polar frost cap, composed of carbon dioxide, has been retreating southward through this area since spring began a month ago. The bright features toward the bottom of this picture are surfaces covered by frost. The picture is located near 57°S, 43°W. North is at the top, south is at the bottom. Sunlight illuminates the scene from the upper left. The area shown is about 217 km (135 miles) wide.

  10. Stacking with stochastic cooling

    NASA Astrophysics Data System (ADS)

    Caspers, Fritz; Möhl, Dieter

    2004-10-01

    Accumulation of large stacks of antiprotons or ions with the aid of stochastic cooling is more delicate than cooling a constant intensity beam. Basically the difficulty stems from the fact that the optimized gain and the cooling rate are inversely proportional to the number of particles 'seen' by the cooling system. Therefore, to maintain fast stacking, the newly injected batch has to be strongly 'protected' from the Schottky noise of the stack. Vice versa the stack has to be efficiently 'shielded' against the high gain cooling system for the injected beam. In the antiproton accumulators with stacking ratios up to 10^5 the problem is solved by radial separation of the injection and the stack orbits in a region of large dispersion. An array of several tapered cooling systems with a matched gain profile provides a continuous particle flux towards the high-density stack core. Shielding of the different systems from each other is obtained both through the spatial separation and via the revolution frequencies (filters). In the 'old AA', where the antiproton collection and stacking was done in one single ring, the injected beam was further shielded during cooling by means of a movable shutter. The complexity of these systems is very high. For more modest stacking ratios, one might use azimuthal rather than radial separation of stack and injected beam. Schematically half of the circumference would be used to accept and cool new beam and the remainder to house the stack. Fast gating is then required between the high gain cooling of the injected beam and the low gain stack cooling. RF-gymnastics are used to merge the pre-cooled batch with the stack, to re-create free space for the next injection, and to capture the new batch. This scheme is less demanding for the storage ring lattice, but at the expense of some reduction in stacking rate. The talk reviews the 'radial' separation schemes and also gives some considerations to the 'azimuthal' schemes.