Multidimensional stochastic approximation Monte Carlo
NASA Astrophysics Data System (ADS)
Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) is an established, mathematically well-founded flat-histogram Monte Carlo method used to determine the density of states, g(E), of a model system. We show here how it can be generalized to determine multidimensional probability distributions (or, equivalently, densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes the method as a systematic way to coarse-grain a model system or, in other words, to perform a renormalization group step on a model. We discuss the formulation of the Kadanoff block-spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states g(E1, E2), where two competing energetic effects are present. We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1, E2).
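To make the flat-histogram idea concrete, the sketch below applies a minimal SAMC recursion to a toy system, an open 1D Ising chain, whose exact density of states is known: g(E_k) = 2*C(N-1, k) for E_k = -(N-1) + 2k. The gain sequence t0/max(t0, t) and all parameter values are illustrative choices, not those of the paper.

```python
import math
import random

def energy(spins):
    """Energy of an open 1D Ising chain, E = -sum_i s_i * s_{i+1}."""
    return -sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))

def samc_dos(n_spins=6, n_steps=200_000, t0=5000, seed=1):
    """SAMC estimate of log g(E) for the 1D Ising chain (illustrative sketch)."""
    random.seed(seed)
    levels = [-(n_spins - 1) + 2 * k for k in range(n_spins)]  # possible energies
    index = {e: i for i, e in enumerate(levels)}
    theta = [0.0] * len(levels)   # running estimates of log g(E)
    visits = [0] * len(levels)
    spins = [1] * n_spins
    e = energy(spins)
    for t in range(1, n_steps + 1):
        gamma = t0 / max(t0, t)               # decreasing gain sequence
        j = random.randrange(n_spins)
        spins[j] *= -1                        # propose a single spin flip
        e_new = energy(spins)
        # accept with prob min(1, g_est(old)/g_est(new)) -> flat histogram in E
        if math.log(random.random() + 1e-300) < theta[index[e]] - theta[index[e_new]]:
            e = e_new
        else:
            spins[j] *= -1                    # reject: undo the flip
        i = index[e]
        theta[i] += gamma                     # stochastic approximation update
        visits[i] += 1
    return levels, theta, visits

levels, theta, visits = samc_dos()
```

After the run, differences theta[k] - theta[0] approximate log(g(E_k)/g(E_0)); the exact ratio for the middle levels of a 6-spin chain is log 10.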
Successful combination of the stochastic linearization and Monte Carlo methods
NASA Technical Reports Server (NTRS)
Elishakoff, I.; Colombi, P.
1993-01-01
A combination of stochastic linearization and Monte Carlo techniques is presented for the first time in the literature. A system with separable nonlinear damping and a nonlinear restoring force is considered. The proposed combination of energy-wise linearization with the Monte Carlo method yields an error under 5 percent, reducing the error of conventional stochastic linearization by a factor of 4.6.
Quantum Monte Carlo using a Stochastic Poisson Solver
Das, D; Martin, R M; Kalos, M H
2005-05-06
Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall quantum Monte Carlo scheme. We have developed a modified "Walk On Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
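The textbook Walk on Spheres idea (shown here for the plain Laplace equation, not the modified Green's-function variant developed in the paper) fits in a few lines: repeatedly jump to a uniform point on the largest sphere inscribed in the domain, and average the boundary values where walks terminate. The unit-disk domain and boundary data below are illustrative assumptions.

```python
import math
import random

def walk_on_spheres(p, dist_to_boundary, boundary_value, eps=1e-3, n_walks=5000, seed=2):
    """Estimate u(p) for Laplace's equation by averaging boundary hits (sketch)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_walks):
        x, y = p
        while True:
            r = dist_to_boundary((x, y))   # radius of the largest inscribed sphere
            if r < eps:                    # close enough: count as a boundary hit
                break
            a = random.uniform(0.0, 2.0 * math.pi)
            x += r * math.cos(a)           # jump to a uniform point on that sphere
            y += r * math.sin(a)
        total += boundary_value((x, y))
    return total / n_walks

# Unit disk with harmonic boundary data g(x, y) = x, so u(x, y) = x inside.
u = walk_on_spheres(
    (0.3, 0.4),
    dist_to_boundary=lambda q: 1.0 - math.hypot(q[0], q[1]),
    boundary_value=lambda q: q[0],
)
```

Because the exact solution is u(x, y) = x, the estimate should be close to 0.3 at the chosen interior point.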
Bayesian phylogeny analysis via stochastic approximation Monte Carlo.
Cheon, Sooyoung; Liang, Faming
2009-11-01
Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time.
Semi-stochastic full configuration interaction quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Holmes, Adam; Petruzielo, Frank; Khadilkar, Mihir; Changlani, Hitesh; Nightingale, M. P.; Umrigar, C. J.
2012-02-01
In the recently proposed full configuration interaction quantum Monte Carlo (FCIQMC) [1,2], the ground state is projected out stochastically, using a population of walkers each of which represents a basis state in the Hilbert space spanned by Slater determinants. The infamous fermion sign problem manifests itself in the fact that walkers of either sign can be spawned on a given determinant. We propose an improvement on this method in the form of a hybrid stochastic/deterministic technique, which we expect will improve the efficiency of the algorithm by ameliorating the sign problem. We test the method on atoms and molecules, e.g., carbon, the carbon dimer, the N2 molecule, and stretched N2. [1] Fermion Monte Carlo without fixed nodes: a game of life, death and annihilation in Slater determinant space. George Booth, Alex Thom, Ali Alavi. J. Chem. Phys. 131, 054106 (2009). [2] Survival of the fittest: Accelerating convergence in full configuration-interaction quantum Monte Carlo. Deidre Cleland, George Booth, and Ali Alavi. J. Chem. Phys. 132, 041103 (2010).
NASA Astrophysics Data System (ADS)
Newell, Quentin Thomas
The Monte Carlo method provides powerful geometric modeling capabilities for large problem domains in 3-D; therefore, the Monte Carlo method is becoming popular for 3-D fuel depletion analyses to compute quantities of interest in spent nuclear fuel, including isotopic compositions. The Monte Carlo approach has not been fully embraced due to unresolved issues concerning the effect of Monte Carlo uncertainties on the predicted results. Use of the Monte Carlo method to solve the neutron transport equation introduces stochastic uncertainty in the computed fluxes. These fluxes are used to collapse cross sections, estimate power distributions, and deplete the fuel within depletion calculations; therefore, the predicted number densities contain random uncertainties from the Monte Carlo solution. These uncertainties can be compounded in time because of the extrapolative nature of depletion and decay calculations. The objective of this research was to quantify the propagation of the stochastic flux uncertainty, introduced by the Monte Carlo method, to the number densities of the different isotopes in spent nuclear fuel over multiple depletion time steps. The research derived a formula that calculates the standard deviation in the nuclide number densities based on propagating the statistical uncertainty introduced when using coupled Monte Carlo depletion computer codes. The research was developed with the use of the TRITON/KENO sequence of the SCALE computer code. The linear uncertainty nuclide group approximation (LUNGA) method developed in this research approximated the variance of the ψN term, which is the variance in the flux shape due to uncertainty in the calculated nuclide number densities. Three different example problems were used in this research to calculate the standard deviation in the nuclide number densities using the LUNGA method. The example problems showed that the LUNGA method is capable of calculating the standard deviation of the nuclide
Stochastic Kinetic Monte Carlo algorithms for long-range Hamiltonians
Mason, D R; Rudd, R E; Sutton, A P
2003-10-13
We present a higher order kinetic Monte Carlo methodology suitable to model the evolution of systems in which the transition rates are non-trivial to calculate or in which Monte Carlo moves are likely to be non-productive flicker events. The second order residence time algorithm first introduced by Athenes et al. [1] is rederived from the n-fold way algorithm of Bortz et al. [2] as a fully stochastic algorithm. The second order algorithm can be dynamically called when necessary to eliminate unproductive flickering between a metastable state and its neighbors. An algorithm combining elements of the first order and second order methods is shown to be more efficient, in terms of the number of rate calculations, than the first order or second order methods alone while remaining statistically identical. This efficiency is of prime importance when dealing with computationally expensive rate functions such as those arising from long-range Hamiltonians. Our algorithm has been developed for use when considering simulations of vacancy diffusion under the influence of elastic stress fields. We demonstrate the improved efficiency of the method over that of the n-fold way in simulations of vacancy diffusion in alloys. Our algorithm is seen to be an order of magnitude more efficient than the n-fold way in these simulations. We show that when magnesium is added to an Al-2at.%Cu alloy, this has the effect of trapping vacancies. When trapping occurs, we see that our algorithm performs thousands of events for each rate calculation performed.
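For reference, a plain first-order residence-time (n-fold way) step can be sketched as follows; the flickering two-state example and its rates are illustrative toys, not the vacancy-diffusion model of the paper. For this toy, the exact mean first-passage time from A to escape is (k_AB + k_BA + k_esc)/(k_AB * k_esc).

```python
import math
import random

def kmc_first_passage(rates, start, absorbing, n_traj=2000, seed=3):
    """n-fold way (BKL) KMC estimate of the mean first-passage time (sketch)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_traj):
        state, t = start, 0.0
        while state != absorbing:
            events = rates[state]                            # list of (next_state, rate)
            r_tot = sum(r for _, r in events)
            t += -math.log(1.0 - random.random()) / r_tot    # residence time
            u = random.random() * r_tot                      # pick an event ~ its rate
            acc = 0.0
            for nxt, r in events:
                acc += r
                if u <= acc:
                    state = nxt
                    break
        total += t
    return total / n_traj

# Flickering toy: A <-> B with a rare escape from B; the exact MFPT from A is
# (k_AB + k_BA + k_esc) / (k_AB * k_esc) = 111 for the rates below.
mfpt = kmc_first_passage({"A": [("B", 1.0)], "B": [("A", 10.0), ("out", 0.1)]},
                         "A", "out")
```

Note how nearly all events here are unproductive A/B flickers, which is exactly the regime the second-order algorithm in the paper is designed to skip.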
Stochastic modelling of power reactor fuel behavior
NASA Astrophysics Data System (ADS)
Mirza, Shahid Nawaz
An understanding of the in-reactor behavior of nuclear fuel is essential to the safe and economic operation of a nuclear power plant. It is no longer possible to achieve this without computer code calculations. A state of art computer code, FRODO, for Fuel ROD Operation, has been developed to model the steady state behavior of fuel pins in a light water reactor and to do sensitivity analysis. FRODO concentrates on the thermal performance, fission product release and pellet-clad interaction and can be used to predict the fuel failure under the prevailing conditions. FRODO incorporates the numerous uncertainties involved in fuel behavior modeling, using statistical methods, to ascertain fuel failures and their causes. Sensitivity of fuel failure to different fuel parameters and reactor conditions can be easily evaluated. FRODO has been used to analyze the sensitivities of fuel failures to coolant flow reductions. It is found that the uncertainties have pronounced effects on conclusions about fuel failures and their causes.
Semi-stochastic full configuration interaction quantum Monte Carlo: Developments and application
Blunt, N. S. Kersten, J. A. F.; Smart, Simon D.; Spencer, J. S.; Booth, George H.; Alavi, Ali
2015-05-14
We expand upon the recent semi-stochastic adaptation to full configuration interaction quantum Monte Carlo (FCIQMC). We present an alternate method for generating the deterministic space without a priori knowledge of the wave function and present stochastic efficiencies for a variety of both molecular and lattice systems. The algorithmic details of an efficient semi-stochastic implementation are presented, with particular consideration given to the effect that the adaptation has on parallel performance in FCIQMC. We further demonstrate the benefit for calculation of reduced density matrices in FCIQMC through replica sampling, where the semi-stochastic adaptation seems to have even larger efficiency gains. We then combine these ideas to produce explicitly correlated corrected FCIQMC energies for the beryllium dimer, for which stochastic errors on the order of wavenumber accuracy are achievable.
Golightly, Andrew; Wilkinson, Darren J.
2011-01-01
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
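Particle MCMC schemes of the kind described above are built around a particle filter that returns an unbiased likelihood estimate. A minimal bootstrap particle filter for a toy linear-Gaussian state-space model (all parameter values are illustrative assumptions, not from the paper) can be sketched as:

```python
import math
import random

def bootstrap_pf(ys, n_particles=500, phi=0.9, sig_x=1.0, sig_y=0.5, seed=6):
    """Bootstrap particle filter: log-likelihood estimate and filtered means."""
    random.seed(seed)
    parts = [random.gauss(0.0, sig_x) for _ in range(n_particles)]
    loglik, means = 0.0, []
    norm = math.log(sig_y * math.sqrt(2.0 * math.pi))
    for y in ys:
        parts = [phi * x + random.gauss(0.0, sig_x) for x in parts]    # propagate
        ws = [math.exp(-0.5 * ((y - x) / sig_y) ** 2) for x in parts]  # weight by obs density
        s = sum(ws)
        loglik += math.log(s / n_particles) - norm   # estimate of log p(y_t | y_{1:t-1})
        means.append(sum(w * x for w, x in zip(ws, parts)) / s)
        parts = random.choices(parts, weights=ws, k=n_particles)       # resample
    return loglik, means

# Simulate data from the same model, then filter it.
random.seed(0)
xs, ys, x = [], [], 0.0
for _ in range(100):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    xs.append(x)
    ys.append(x + random.gauss(0.0, 0.5))
loglik, means = bootstrap_pf(ys)
```

In a particle-marginal Metropolis-Hastings scheme, `loglik` would replace the intractable likelihood in the acceptance ratio.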
Gu, M G; Kong, F H
1998-06-23
We propose a general procedure for solving incomplete-data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
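The stochastic approximation principle underlying such procedures is the Robbins-Monro recursion, sketched here on a toy root-finding problem (quantile estimation from simulated draws, solving E[1{X <= theta} - p] = 0). The step-size exponent and the averaging window are illustrative choices, not those of the paper.

```python
import random
from statistics import NormalDist

def rm_quantile(p=0.7, n_iter=100_000, seed=4):
    """Robbins-Monro recursion for the p-quantile of N(0,1) (sketch).

    Solves E[1{X <= theta} - p] = 0 using only simulated draws of X.
    """
    random.seed(seed)
    theta, tail = 0.0, []
    for t in range(1, n_iter + 1):
        x = random.gauss(0.0, 1.0)                   # noisy "simulation" output
        theta += t ** -0.7 * (p - (1.0 if x <= theta else 0.0))
        if t > n_iter // 2:
            tail.append(theta)                       # keep the late iterates
    return sum(tail) / len(tail)                     # Polyak-Ruppert averaging

est = rm_quantile()
```

In the incomplete-data setting, the fresh draw of X would be replaced by a few MCMC steps targeting the conditional distribution of the missing data given the current parameter value.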
Monte Carlo simulations of two-component drop growth by stochastic coalescence
NASA Astrophysics Data System (ADS)
Alfonso, L.; Raga, G. B.; Baumgardner, D.
2008-04-01
The evolution of two-dimensional drop distributions is simulated in this study using a Monte Carlo method. The stochastic algorithm of Gillespie (1976) for chemical reactions, in the formulation proposed by Laurenzi et al. (2002), was used to simulate the kinetic behavior of the drop population. Within this framework, species are defined as droplets of specific size and aerosol composition. The performance of the algorithm was checked by comparing the numerical with the analytical solutions found by Lushnikov (1975). Very good agreement was observed between the Monte Carlo simulations and the analytical solution. Simulation results are presented for bi-variate constant and hydrodynamic kernels. The algorithm can be easily extended to incorporate various properties of clouds, such as several crystal habits, different types of soluble CCN, particle charging, and drop breakup.
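The Gillespie-type stochastic coalescence step can be sketched for the simplest case of a single-component population with a constant kernel (the two-component, multi-species bookkeeping of the Laurenzi et al. formulation is omitted); all values are illustrative. Total droplet mass is conserved exactly by construction.

```python
import math
import random

def gillespie_coalescence(n0=200, kernel=1.0, t_end=0.05, seed=5):
    """SSA for droplet coalescence with a constant collection kernel (sketch)."""
    random.seed(seed)
    sizes = [1] * n0                                # droplet masses in monomer units
    t = 0.0
    while len(sizes) > 1:
        n = len(sizes)
        r_tot = kernel * n * (n - 1) / 2.0          # total coalescence rate
        t += -math.log(1.0 - random.random()) / r_tot
        if t > t_end:
            break
        i, j = random.sample(range(n), 2)           # uniform pair: constant kernel
        sizes[i] += sizes[j]                        # merge droplet j into droplet i
        sizes.pop(j)
    return sizes

sizes = gillespie_coalescence()
```

A size- or composition-dependent kernel would only change how the colliding pair is selected (proportional to its pairwise rate) and the total rate bookkeeping.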
Stochastic sensitivity analysis of the biosphere model for Canadian nuclear fuel waste management
Reid, J.A.K.; Corbett, B.J. (Whiteshell Labs.)
1993-01-01
The biosphere model, BIOTRAC, was constructed to assess Canada's concept for nuclear fuel waste disposal in a vault deep in crystalline rock at some as yet undetermined location in the Canadian Shield. The model is therefore very general and based on the shield as a whole. BIOTRAC is made up of four linked submodels for surface water, soil, atmosphere, and food chain and dose. The model simulates physical conditions and radionuclide flows from the discharge of a hypothetical nuclear fuel waste disposal vault through groundwater, a well, a lake, air, soil, and plants to a critical group of individuals, i.e., those who are most exposed and therefore receive the highest dose. This critical group is totally self-sufficient and is represented by the International Commission on Radiological Protection reference man for dose prediction. BIOTRAC is a dynamic model that assumes steady-state physical conditions for each simulation, and deals with variation and uncertainty through Monte Carlo simulation techniques. This paper describes SENSYV, a technique for analyzing pathway and parameter sensitivities for the BIOTRAC code run in stochastic mode. Results are presented for 129I from the disposal of used fuel, and they confirm the importance of doses via the soil/plant/man and the air/plant/man ingestion pathways. The results also indicate that the lake/well water use switch, the aquatic iodine mass loading parameter, the iodine soil evasion rate, and the iodine plant/soil concentration ratio are important parameters.
Stochastic Monte-Carlo Markov Chain Inversions on Models Regionalized Using Receiver Functions
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Maceira, M.; Kato, Y.; Bodin, T.; Calo, M.; Romanowicz, B. A.; Chai, C.; Ammon, C. J.
2014-12-01
There is currently strong interest in stochastic approaches to seismic modeling, versus deterministic methods such as gradient methods, due to their ability to better handle highly non-linear problems. Another advantage of stochastic methods is that they allow estimation of the a posteriori probability distribution of the derived parameters, i.e. the Bayesian inversion envisioned by Tarantola, which allows quantification of the solution error. The price of stochastic methods is that they require testing thousands of variations of each unknown parameter and their associated weights to ensure reliable probabilistic inferences. Even with the best High-Performance Computing resources available, 3D stochastic full waveform modeling at the regional scale still remains out of reach. We are exploring regionalization as one way to reduce the dimension of the parameter space, allowing the identification of areas in the models that can be treated as one block in a subsequent stochastic inversion. Regionalization is classically performed through the identification of tectonic or structural elements. Lekic & Romanowicz (2011) proposed a new approach based instead on a cluster analysis of tomographic velocity models. Here we present the results of a clustering analysis of the P-wave receiver functions used in the subsequent inversion. Different clustering algorithms and measures of clustering quality are tested for different datasets of North America and China. Preliminary results with the k-means clustering algorithm show that an interpolated receiver-function wavefield (Chai et al., GRL, in review) improves the agreement with the geological and tectonic regions of North America compared to the traditional approach of stacked receiver functions. After regionalization, a 1D profile for each region is stochastically inferred using a parallelized code based on Monte-Carlo Markov Chains (MCMC), modeling surface-wave dispersion and receiver
ERIC Educational Resources Information Center
Gold, Michael Steven; Bentler, Peter M.
2000-01-01
Describes a Monte Carlo investigation of four methods for treating incomplete data: (1) resemblance based hot-deck imputation (RBHDI); (2) iterated stochastic regression imputation; (3) structured model expectation maximization; and (4) saturated model expectation maximization. Results favored the expectation maximization methods. (SLD)
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the amount of calculation and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
Müller, Eike H; Scheichl, Rob; Shardlow, Tony
2015-04-08
This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy.
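The multilevel setting of the abstract can be illustrated with the standard telescoping MLMC estimator for E[X_T] of a geometric Brownian motion under plain Euler-Maruyama (a simpler integrator and test problem than the Langevin splitting schemes discussed; all parameters are illustrative). Fine and coarse paths on each level share the same Brownian increments, which is what keeps the level-correction variances small.

```python
import math
import random

def mlmc_gbm(x0=1.0, a=0.05, b=0.2, T=1.0, levels=4, n0=4000, seed=7):
    """Multilevel MC estimate of E[X_T] for dX = aX dt + bX dW (Euler, sketch)."""
    random.seed(seed)
    est = 0.0
    for l in range(levels):
        nf = 2 ** l                        # fine time steps on level l
        hf = T / nf
        n_samp = max(n0 // 2 ** l, 100)    # fewer samples on costlier levels
        acc = 0.0
        for _ in range(n_samp):
            dws = [random.gauss(0.0, math.sqrt(hf)) for _ in range(nf)]
            xf = x0
            for k in range(nf):            # fine Euler-Maruyama path
                xf += a * xf * hf + b * xf * dws[k]
            if l == 0:
                acc += xf                  # base level: plain estimator
            else:
                xc, hc = x0, 2.0 * hf      # coarse path, same Brownian increments
                for k in range(0, nf, 2):
                    xc += a * xc * hc + b * xc * (dws[k] + dws[k + 1])
                acc += xf - xc             # level correction
        est += acc / n_samp
    return est

est = mlmc_gbm()   # exact value is exp(a*T) = exp(0.05)
```

Replacing the Gaussian increments `dws` by matched discrete random variables, as the paper investigates, changes only the sampling line.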
Huang, Guanghui; Wan, Jianping; Chen, Hui
2013-02-01
Nonlinear stochastic differential equation models with unobservable state variables are now widely used in the analysis of PK/PD data. Unobservable state variables are usually estimated with the extended Kalman filter (EKF), and the unknown pharmacokinetic parameters are usually estimated by maximum likelihood estimation (MLE). However, the EKF is inadequate for nonlinear PK/PD models, and the MLE is known to be biased downwards. In this paper, a density-based Monte Carlo filter (DMF) is proposed to estimate the unobservable state variables, and a simulation-based M-estimator is proposed to estimate the unknown parameters, with a genetic algorithm designed to search for the optimal values of the pharmacokinetic parameters. The performances of the EKF and DMF are compared through simulations for discrete-time and continuous-time systems, respectively, and the results based on the DMF are found to be more accurate than those given by the EKF with respect to mean absolute error.
Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models
NASA Astrophysics Data System (ADS)
Peixoto, Tiago P.
2014-01-01
We present an efficient algorithm for the inference of stochastic block models in large networks. The algorithm can be used as an optimized Markov chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced susceptibility to getting trapped in metastable states, or as a greedy agglomerative heuristic, with an almost linear O(N ln²N) complexity, where N is the number of nodes in the network, independent of the number of blocks being inferred. We show that the heuristic is capable of delivering results which are indistinguishable from the more exact and numerically expensive MCMC method in many artificial and empirical networks, despite being much faster. The method is entirely unbiased towards any specific mixing pattern, and in particular it does not favor assortative community structures.
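A drastically simplified sketch of likelihood-based block inference: the profile log-likelihood of a Bernoulli SBM plus greedy single-node label moves (coordinate ascent), rather than the paper's optimized MCMC or agglomerative heuristic. The planted-partition test graph and all sizes are illustrative assumptions.

```python
import math
import random
from itertools import combinations

def sbm_loglik(adj, labels, B):
    """Profile log-likelihood of a Bernoulli stochastic block model."""
    n = len(labels)
    counts = [0] * B
    for v in range(n):
        counts[labels[v]] += 1
    edges = [[0] * B for _ in range(B)]
    for u in range(n):
        for v in adj[u]:
            if u < v:
                r, s = sorted((labels[u], labels[v]))
                edges[r][s] += 1
    ll = 0.0
    for r in range(B):
        for s in range(r, B):
            m = counts[r] * counts[s] if r != s else counts[r] * (counts[r] - 1) // 2
            e = edges[r][s]
            if 0 < e < m:                      # e = 0 or e = m contribute zero
                p = e / m
                ll += e * math.log(p) + (m - e) * math.log(1.0 - p)
    return ll

def greedy_blocks(adj, B=2, sweeps=5, seed=8):
    """Greedy single-node moves that never decrease the log-likelihood."""
    random.seed(seed)
    n = len(adj)
    labels = [random.randrange(B) for _ in range(n)]
    ll0 = ll = sbm_loglik(adj, labels, B)
    for _ in range(sweeps):
        for v in range(n):
            for b in range(B):
                if b == labels[v]:
                    continue
                old = labels[v]
                labels[v] = b
                cand = sbm_loglik(adj, labels, B)
                if cand > ll:
                    ll = cand                  # keep the improving move
                else:
                    labels[v] = old            # revert
    return labels, ll0, ll

# Planted partition: two groups of 20 nodes, dense inside, sparse between.
random.seed(0)
N = 40
adj = [set() for _ in range(N)]
for u, v in combinations(range(N), 2):
    p = 0.8 if (u < 20) == (v < 20) else 0.05
    if random.random() < p:
        adj[u].add(v)
        adj[v].add(u)
labels, ll_start, ll_end = greedy_blocks(adj)
```

An MCMC variant would instead accept moves with a Metropolis rule on the same likelihood, trading the monotone ascent for ergodicity.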
Monte Carlo simulations of two-component drop growth by stochastic coalescence
NASA Astrophysics Data System (ADS)
Alfonso, L.; Raga, G. B.; Baumgardner, D.
2009-02-01
The evolution of two-dimensional drop distributions is simulated in this study using a Monte Carlo method. The stochastic algorithm of Gillespie (1976) for chemical reactions, in the formulation proposed by Laurenzi et al. (2002), was used to simulate the kinetic behavior of the drop population. Within this framework, species are defined as droplets of specific size and aerosol composition. The performance of the algorithm was checked by a comparison with the analytical solutions found by Lushnikov (1975) and Golovin (1963) and with finite difference solutions of the two-component kinetic collection equation obtained for the Golovin (sum) and hydrodynamic kernels. Very good agreement was observed between the Monte Carlo simulations and the analytical and numerical solutions. A simulation for realistic initial conditions is presented for the hydrodynamic kernel. As expected, the aerosol mass is shifted from small to large particles due to the collection process. This algorithm could be extended to incorporate various properties of clouds, such as several crystal habits, different types of soluble CCN, particle charging, and drop breakup.
NASA Astrophysics Data System (ADS)
Zhang, D.; Liao, Q.
2016-12-01
Bayesian inference provides a convenient framework for solving statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials using the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the different importance of the parameters, under the condition of high random dimensions in the stochastic space. Furthermore, in cases of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once we build the surrogate system, we may evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of
Lu, Dan; Zhang, Guannan; Webster, Clayton G.; Barbier, Charlotte N.
2016-12-30
In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, that require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
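The smoothing-function ingredient alone (without the multilevel machinery or the paper's a posteriori calibration) can be illustrated on a plain MC estimate of a CDF value, replacing the discontinuous indicator 1{X <= c} by a logistic ramp of width delta; delta, the sample size, and the normal test distribution are illustrative choices.

```python
import math
import random

def smoothed_cdf_mc(c, delta=0.1, n=20000, seed=9):
    """MC estimate of F(c) = P(X <= c), X ~ N(0,1), with a smoothed indicator."""
    random.seed(seed)
    acc = 0.0
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        acc += 1.0 / (1.0 + math.exp(-(c - x) / delta))  # logistic ramp, width delta
    return acc / n

f_est = smoothed_cdf_mc(0.5)   # true value Phi(0.5) is about 0.6915
```

The ramp introduces a small, delta-controlled bias in exchange for a smooth integrand; in the MLMC setting this smoothness is what restores a fast decay of the level-difference variances.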
NASA Astrophysics Data System (ADS)
Lu, Dan; Zhang, Guannan; Webster, Clayton; Barbier, Charlotte
2016-12-01
In this work, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, that require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
NASA Astrophysics Data System (ADS)
Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun
2017-01-01
To improve the accuracy and efficiency of computational models for complex structures, a stochastic model updating (SMU) strategy is proposed that combines an improved response surface model (IRSM) with an advanced Monte Carlo (MC) method, based on experimental static tests, prior information and uncertainties. First, the IRSM and its mathematical model are developed with emphasis on the moving least-squares method, and the advanced MC simulation method is formulated using Latin hypercube sampling. The SMU procedure is then presented with an experimental static test for a complex structure. SMUs of a simply supported beam and an aeroengine stator system (casings) were performed to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy achieves high computational precision and efficiency for complex structural systems; (2) the IRSM is an effective model, since its SMU time is far less than that of the traditional response surface method, and is therefore promising for improving the computational speed and accuracy of SMU; and (3) the advanced MC method markedly reduces the number of finite element simulations and the elapsed time of SMU. These efforts provide a promising SMU strategy for complex structures and enrich the theory of model updating.
NASA Astrophysics Data System (ADS)
Shin, Seungho; Kim, Ah-Reum; Um, Sukkee
2016-02-01
A two-dimensional material network model has been developed to visualize the nano-structures of fuel-cell catalysts and to search for effective transport paths for the optimal performance of fuel cells in randomly-disordered composite catalysts. Stochastic random modeling based on the Monte Carlo method is developed using random number generation processes over a catalyst layer domain at a 95% confidence level. After the post-determination process of the effective connectivity, particularly for mass transport, the effective catalyst utilization factors are introduced to determine the extent of catalyst utilization in the fuel cells. The results show that the superficial pore volume fractions of 600 trials approximate a normal distribution curve with a mean of 0.5. In contrast, the estimated volume fraction of effectively inter-connected void clusters ranges from 0.097 to 0.420, which is much smaller than the superficial porosity of 0.5 before the percolation process. Furthermore, the effective catalyst utilization factor is determined to be linearly proportional to the effective porosity. More importantly, this study reveals that the average catalyst utilization is less affected by the variations of the catalyst's particle size and the absolute catalyst loading at a fixed volume fraction of void spaces.
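The connectivity post-determination described above, separating the superficial pore volume from the effectively inter-connected void clusters, amounts to a percolation-style flood fill. The sketch below is a hypothetical miniature: the lattice size, void fraction, and the choice of the top boundary as the transport inlet are assumptions for illustration, not the authors' model.

```python
import random
from collections import deque

random.seed(0)

def random_layer(n, void_fraction):
    # Hypothetical catalyst-layer grid: True marks a void (pore) site.
    return [[random.random() < void_fraction for _ in range(n)] for _ in range(n)]

def effective_porosity(grid):
    # Keep only void sites connected to the top boundary (taken here as the
    # gas inlet side), mimicking the post-determination of effective paths.
    n = len(grid)
    seen = [[False] * n for _ in range(n)]
    dq = deque((0, j) for j in range(n) if grid[0][j])
    for i, j in dq:
        seen[i][j] = True
    while dq:
        i, j = dq.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and not seen[a][b]:
                seen[a][b] = True
                dq.append((a, b))
    return sum(map(sum, seen)) / n ** 2
```

At a void fraction of 0.5, below the ~0.593 site-percolation threshold of the square lattice, the effective fraction typically comes out well below the superficial one, consistent with the reduction from 0.5 to 0.097-0.420 reported above.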
NASA Astrophysics Data System (ADS)
Rosario, Dalton S.
2001-08-01
Higher-level decisions for AiTR (aided target recognition) networks have so far been made in our community in an ad hoc fashion. Higher-level decisions in this context do not involve target recognition performance per se, but other inherent output measures of performance, e.g., expected response time and the long-term electronic memory required to achieve a tolerable level of image losses. Those measures usually require knowledge of the steady-state, stochastic behavior of the entire network, which in practice is mathematically intractable. Decisions requiring those and similar output measures will become very important as AiTR networks are permanently deployed to the field. To address this concern, I propose to model AiTR systems as an open stochastic-process network and to conduct Monte Carlo simulations based on this model to estimate steady-state performance. To illustrate this method, I modeled a familiar operational scenario and an existing baseline AiTR system as proposed. Details of the stochastic model and its corresponding Monte Carlo simulation results are discussed in the paper.
Mariño, Inés P.; Zaikin, Alexey; Míguez, Joaquín
2017-01-01
We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al., 2007. By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that while the three techniques can effectively solve the problem there are significant differences both in estimation accuracy and computational efficiency. PMID:28797087
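Of the three schemes compared above, the approximate Bayesian computation family is the easiest to sketch. The toy below uses plain ABC rejection, a simpler ancestor of the ABC-SMC scheme, on a hypothetical one-parameter birth-process model standing in for the multicellular clock; the prior, summary statistic, and tolerance are all assumptions for illustration.

```python
import random

random.seed(6)

def simulate(theta, n=50):
    # Toy stochastic model (an illustrative stand-in for the clock model):
    # n cells each accumulate events of a rate-theta Poisson process on [0, 1].
    counts = []
    for _ in range(n):
        c, t = 0, 0.0
        while True:
            t += random.expovariate(theta)
            if t > 1.0:
                break
            c += 1
        counts.append(c)
    return sum(counts) / n  # summary statistic: mean event count

def abc_rejection(observed_summary, trials=2000, eps=0.3):
    # Plain ABC: draw theta from the prior, simulate, and keep theta
    # whenever the simulated summary lands within eps of the observed one.
    accepted = []
    for _ in range(trials):
        theta = random.uniform(0.1, 10.0)  # flat prior (an assumption)
        if abs(simulate(theta) - observed_summary) < eps:
            accepted.append(theta)
    return accepted
```

The accepted sample approximates the posterior over theta; ABC-SMC improves on this by propagating the accepted population through a sequence of shrinking tolerances instead of a single fixed eps.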
Slope stability effects of fuel management strategies – inferences from Monte Carlo simulations
R. M. Rice; R. R. Ziemer; S. C. Hankin
1982-01-01
A simple Monte Carlo simulation evaluated the effect of several fire management strategies on soil slip erosion and wildfires. The current condition was compared to (1) a very intensive fuelbreak system without prescribed fires, and (2) prescribed fire at four time intervals with (a) current fuelbreaks and (b) intensive fuel-breaks. The intensive fuelbreak system...
A Monte Carlo based spent fuel analysis safeguards strategy assessment
Fensin, Michael L; Tobin, Stephen J; Swinhoe, Martyn T; Menlove, Howard O; Sandoval, Nathan P
2009-01-01
Safeguarding nuclear material involves the detection of diversions of significant quantities of nuclear materials, and the deterrence of such diversions by the risk of early detection. There are a variety of motivations for quantifying plutonium in spent fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the International Atomic Energy Agency's ability to safeguard nuclear facilities, shipper/receiver difference, input accountability at reprocessing facilities, and burnup credit at repositories. Many NDA techniques exist for measuring signatures from spent fuel; however, no single NDA technique can, in isolation, quantify elemental plutonium and other actinides of interest in spent fuel. A study has been undertaken to determine the best integrated combination of cost-effective techniques for quantifying plutonium mass in spent fuel for nuclear safeguards. A standardized assessment process was developed to compare the merits and faults of 12 different detection techniques in order to integrate a few techniques and to down-select among them in preparation for experiments. The process involves generating a burnup/enrichment/cooling-time-dependent spent fuel assembly library, creating diversion scenarios, developing detector models and quantifying the capability of each NDA technique. Because hundreds of input and output files must be managed in the couplings of data transitions for the different facets of the assessment process, a graphical user interface (GUI) was developed that automates the process. This GUI allows users to visually create diversion scenarios with varied replacement materials, and to generate an MCNPX fixed-source detector assessment input file. The end result of the assembly library assessment is to select a set of common source terms and diversion scenarios for quantifying the capability of each of the 12 NDA techniques. We present here the generalized
Monte Carlo Simulation of the TRIGA Mark II Benchmark Experiment with Burned Fuel
Jeraj, Robert; Zagar, Tomaz; Ravnik, Matjaz
2002-03-15
Monte Carlo calculations of a criticality experiment with burned fuel on the TRIGA Mark II research reactor are presented. The main objective was to incorporate burned fuel composition calculated with the WIMSD4 deterministic code into the MCNP4B Monte Carlo code and compare the calculated k_eff with the measurements. The criticality experiment was performed in 1998 at the "Jozef Stefan" Institute TRIGA Mark II reactor in Ljubljana, Slovenia, with the same fuel elements and loading pattern as in the TRIGA criticality benchmark experiment with fresh fuel performed in 1991. The only difference was that in 1998, the fuel elements had an average burnup of approximately 3%, corresponding to 1.3 MWd of energy produced in the core between 1991 and 1998. The fuel element burnup accumulated during 1991-1998 was calculated with TRIGLAV, an in-house-developed two-dimensional multigroup diffusion fuel management code. The burned fuel isotopic composition was calculated with the WIMSD4 code and compared to ORIGEN2 calculations. An extensive comparison of burned fuel material composition was performed for both codes for burnups up to 20% burned 235U, and the differences were evaluated in terms of reactivity. The WIMSD4 and ORIGEN2 results agreed well for all isotopes important in reactivity calculations, giving increased confidence in the WIMSD4 calculation of the burned fuel material composition. The k_eff calculated with the combined WIMSD4 and MCNP4B calculations showed good agreement with the experimental values. This shows that linking WIMSD4 with MCNP4B for criticality calculations with burned fuel is feasible and gives reliable results.
A Monte Carlo method of evaluating heterogeneous effects in plate-fueled reactors
Thayer, R.C.; Redmond, E.L. II; Ryskamp, J.M.
1991-01-01
Few-group nuclear cross sections for small plate-fueled, light and heavy water test reactors are frequently generated with unit cell models that contain a homogeneous mixture of fuel, cladding, and water. The heterogeneous unit cells do not need to be represented explicitly for neutronics calculations when the plate and coolant channel thicknesses are small compared with the mean free path of neutrons. However, neutron and photon heating calculations were performed with heterogeneous fuel models to accurately predict the heat deposited in the fuel meat, cladding, and coolant. Heat deposited in the coolant channels and outside the fuel elements does not have a direct impact on the peak fuel meat temperature but must be included in the total coolant system heat balance. The results of a heterogeneous Monte Carlo calculation that estimates the heat loads in different fuel regions are presented, and they demonstrate that similar homogeneous fuel models can be used for many calculations. The calculations presented here were performed on models of the Advanced Neutron Source (ANS) and the Massachusetts Institute of Technology Reactor 2 (MITR-2). The ANS is a small, 362-MW (fission), plate-fueled, heavy water reactor designed to produce an intense steady-state source of neutrons.
NASA Astrophysics Data System (ADS)
Jin, Shengye; Tamura, Masayuki
2013-10-01
The Monte Carlo Ray Tracing (MCRT) method is a versatile tool for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Because it is robust to changes in a complex 3-D scene, the MCRT method is also employed to simulate the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is setting up the canopy scene. 3-D scanning can represent canopy structure very accurately, but it is time consuming. A botanical growth function can model single-tree growth, but cannot express the interaction among trees. The L-system is also a functionally controlled tree-growth simulation model, but it requires a large amount of computing memory. Additionally, it only models the current tree pattern rather than tree growth while we simulate the radiative transfer regime. Therefore, it is more practical to use regular solids such as ellipsoids, cones, and cylinders to represent single canopies. Considering the allelopathy phenomenon seen in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the tree count (N) of the 3-D scene are declared first, similar to a random open-forest image. Accordingly, we randomly generate each canopy radius (rc). We then set the circle center coordinates on the XY-plane while keeping circles separate from each other via the circle packing algorithm. To model individual trees, we employ Ishikawa's regressive tree-growth model to set tree parameters including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
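The stochastic circle packing step described above can be sketched with simple rejection sampling: propose a random canopy circle, keep it only if it stays clear of every accepted circle. The domain size, radius range, and retry budget below are arbitrary illustrative choices, not values from the study.

```python
import math
import random

random.seed(7)

def pack_circles(n_trees, domain=100.0, r_min=2.0, r_max=6.0, max_tries=20000):
    # Place non-overlapping canopy circles with random radii in a square
    # domain, rejecting any candidate that intersects an accepted circle.
    circles = []
    tries = 0
    while len(circles) < n_trees and tries < max_tries:
        tries += 1
        r = random.uniform(r_min, r_max)
        x = random.uniform(r, domain - r)   # keep the circle inside the domain
        y = random.uniform(r, domain - r)
        if all(math.hypot(x - cx, y - cy) >= r + cr for cx, cy, cr in circles):
            circles.append((x, y, r))
    return circles

def coverage(circles, domain=100.0):
    # Fraction of the domain area covered by (non-overlapping) canopies.
    return sum(math.pi * r * r for _, _, r in circles) / domain ** 2
```

In the study's formulation the target coverage and tree count are declared first and radii are drawn to match them; the rejection loop above is the part that enforces the "each tree repels others within its domain" assumption.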
Monte Carlo minicell approach for a detailed MOX fuel-pin power profile analysis
Chang, G.S.; Ryskamp, J.M.
1997-12-01
The U.S. Department of Energy (DOE) is pursuing two options to dispose of surplus weapons-grade plutonium (WGPu). One option is to burn the WGPu in a mixed-oxide (MOX) fuel form in light water reactors (LWRs). A significant challenge is to demonstrate that the differences between the WG and reactor-grade (RG) MOX fuel are minimal, and therefore, the commercial MOX experience base is applicable. MOX fuel will be irradiated in the Advanced Test Reactor (ATR) to investigate this assertion. Detailed power distributions throughout the MOX pins are required to determine temperature distributions. The purpose of this work is to develop a new Monte Carlo procedure for accurately determining power distributions in fuel pins located in the ATR reflector. Conventional LWR methods are not appropriate because of the unique ATR geometry.
A selective hybrid stochastic strategy for fuel-cell multi-parameter identification
NASA Astrophysics Data System (ADS)
Guarnieri, Massimo; Negro, Enrico; Di Noto, Vito; Alotto, Piergiorgio
2016-11-01
The in situ identification of fuel-cell material parameters is crucial both for guiding the research on advanced functionalized materials and for fitting multiphysics models, which can be used in fuel cell performance evaluation and optimization. However, this identification remains challenging when dealing with direct measurements. This paper presents a method for achieving this aim by stochastic optimization. Such techniques have been applied to the analysis of fuel cells for ten years, but typically to specific problems and by means of semi-empirical models, with an increasing number of articles published in recent years. We present an original formulation that makes use of an accurate zero-dimensional multiphysical model of a polymer electrolyte membrane fuel cell and of two cooperating stochastic algorithms, particle swarm optimization and differential evolution, to extract multiple material parameters (exchange current density, mass transfer coefficient, diffusivity, conductivity, activation barriers …) from experimental polarization curves (i.e. in situ measurements) under controlled temperature, gas back pressure and humidification. The method is suitable for application in other fields where fitting of multiphysics nonlinear models is involved.
Monte Carlo Bounding Techniques for Determining Solution Quality in Stochastic Programs
1999-01-01
NASA Astrophysics Data System (ADS)
Schneider, Simon; Mueller, Marco; Janke, Wolfhard
2017-07-01
We investigate the behavior of the deviation of the estimator for the density of states (DOS) with respect to the exact solution in the course of Wang-Landau and Stochastic Approximation Monte Carlo (SAMC) simulations of the two-dimensional Ising model. We find that the deviation saturates in the Wang-Landau case. This can be cured by adjusting the refinement scheme. To this end, the 1/t modification of the Wang-Landau algorithm has been suggested. A similar choice of refinement scheme is employed in the SAMC algorithm. The convergence behavior of all three algorithms is examined. It turns out that the convergence of the SAMC algorithm is very sensitive to the onset of the refinement. Finally, the internal energy and specific heat of the Ising model are calculated from the SAMC DOS and compared to exact values.
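A minimal Wang-Landau loop for the 2D Ising model makes the role of the refinement scheme concrete. The sketch below uses the standard halving of ln f after each flat histogram; the 1/t modification discussed above would instead switch to ln f ~ 1/t once halving outpaces it, avoiding the saturation of the DOS error. The lattice size, batch length, and flatness threshold are illustrative choices.

```python
import math
import random

random.seed(3)
L = 4
N = L * L

def energy(s):
    # Nearest-neighbour Ising energy with periodic boundaries (each bond once).
    e = 0
    for i in range(L):
        for j in range(L):
            e -= s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
    return e

def wang_landau(flatness=0.8, ln_f_final=1e-3):
    s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    E = energy(s)
    ln_g, hist = {}, {}
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(10000):
            i, j = random.randrange(L), random.randrange(L)
            h = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                 + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            Enew = E + 2 * s[i][j] * h
            # Accept with probability min(1, g(E)/g(Enew)).
            if math.log(random.random()) < ln_g.get(E, 0.0) - ln_g.get(Enew, 0.0):
                s[i][j] = -s[i][j]
                E = Enew
            ln_g[E] = ln_g.get(E, 0.0) + ln_f
            hist[E] = hist.get(E, 0) + 1
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            ln_f /= 2.0   # standard halving; the 1/t scheme would cap ln_f at ~1/t
            hist = {}     # reset the histogram for the next refinement stage
    return ln_g
```

For the 4x4 lattice the exact DOS is known (e.g. g(E = -32) = 2 ground states), so the estimator's deviation, the quantity studied in the abstract, can be monitored directly against it.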
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2014-04-01
Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
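For contrast with the moment-equation variant, the MC-based EnKF analysis step referred to above can be sketched for a single scalar observation. The perturbed-observation form below is one standard variant; the state dimension, observation operator, and noise levels are placeholders, not the flow problem in the study.

```python
import random

random.seed(4)

def enkf_update(ensemble, obs, obs_std, h):
    # Scalar-observation EnKF analysis step: each member is updated as
    # x <- x + K (d_perturbed - h(x)), with the gain K built from ensemble
    # covariances. h maps a state (a list of floats) to the observed scalar.
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    hbar = sum(hx) / n
    dim = len(ensemble[0])
    xbar = [sum(x[i] for x in ensemble) / n for i in range(dim)]
    # Cross-covariance Cov(x, h(x)) and innovation variance Var(h(x)) + R.
    cxy = [sum((x[i] - xbar[i]) * (y - hbar) for x, y in zip(ensemble, hx)) / (n - 1)
           for i in range(dim)]
    cyy = sum((y - hbar) ** 2 for y in hx) / (n - 1) + obs_std ** 2
    gain = [c / cyy for c in cxy]
    updated = []
    for x, y in zip(ensemble, hx):
        d = obs + random.gauss(0.0, obs_std)  # perturbed observation
        updated.append([xi + ki * (d - y) for xi, ki in zip(x, gain)])
    return updated
```

The filter inbreeding mentioned above shows up here when n is small: the sampled covariances underestimate the true spread, the gain over-trusts the model, and the ensemble collapses, which is what the moment-equation approach sidesteps by propagating the covariances analytically.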
NASA Astrophysics Data System (ADS)
Collins, Stuart D.; Chatterjee, Abhijit; Vlachos, Dionisios G.
2008-11-01
On-lattice kinetic Monte Carlo (KMC) simulations have been applied extensively to numerous systems. However, their applicability is severely limited to relatively short time and length scales. Recently, the coarse-grained MC (CGMC) method was introduced to greatly expand the reach of the lattice KMC technique. Herein, we extend the previous spatial CGMC methods to multicomponent species and/or site types. The underlying theory is derived and numerical examples are presented to demonstrate the method. Furthermore, we introduce the concept of homogenization at the stochastic level over all site types of a spatially coarse-grained cell. Homogenization provides a novel coarsening of the number of processes, an important aspect for complex problems plagued by the existence of numerous microscopic processes (combinatorial complexity). As expected, the homogenized CGMC method outperforms the traditional KMC method in computational cost while retaining good accuracy.
ATR WG-MOX Fuel Pellet Burnup Measurement by Monte Carlo - Mass Spectrometric Method
Chang, Gray Sen I
2002-10-01
This paper presents a new method for calculating the burnup of nuclear reactor fuel, the MCWO-MS method, and describes its application to an experiment currently in progress to assess the suitability for use in light-water reactors of Mixed-OXide (MOX) fuel that contains plutonium derived from excess nuclear weapons material. To demonstrate that the available experience base with Reactor-Grade Mixed uranium-plutonium OXide (RG-MOX) can be applied to Weapons-Grade (WG) MOX in light water reactors, and to support potential licensing of MOX fuel made from weapons-grade plutonium and depleted uranium for use in United States reactors, an experiment containing WG-MOX fuel is being irradiated in the Advanced Test Reactor (ATR) at the Idaho National Engineering and Environmental Laboratory. Fuel burnup is an important parameter needed for fuel performance evaluation. For the irradiated MOX fuel's post-irradiation examination, the 148Nd method is used to measure the burnup. The fission product 148Nd is an ideal burnup indicator when appropriate correction factors are applied. In the ATR test environment, the spectrum-dependent and burnup-dependent correction factors (see Section 5 for detailed discussion) can be substantial at high fuel burnup. The validated Monte Carlo depletion tool (MCWO) used in this study can provide burnup-dependent correction factors for the reactor parameters, such as capture-to-fission ratios, isotopic concentrations and compositions, fission power, and spectrum, in a straightforward fashion. Furthermore, the correlation curve generated by MCWO can be coupled with the 239Pu/Pu ratio measured by a mass spectrometer (in the new MCWO-MS method) to obtain a best-estimate MOX fuel burnup. The MCWO method can eliminate the generation of few-group cross sections. The MCWO depletion tool can analyze the detailed spatial and spectral self-shielding effects in UO2, WG-MOX, and RG-MOX fuel pins. The MCWO-MS tool only
NASA Astrophysics Data System (ADS)
Juillet, Olivier; Leprévost, Alexandre; Bonnard, Jérémy; Frésard, Raymond
2017-04-01
The so-called phaseless quantum Monte Carlo method currently offers one of the best performing theoretical frameworks for investigating interacting Fermi systems. It allows one to extract an approximate ground-state wavefunction by averaging independent-particle states undergoing a Brownian motion in imaginary time. Here, we extend the approach to a random walk in the space of Hartree-Fock-Bogoliubov (HFB) vacua, which are better suited for superconducting or superfluid systems. Well-controlled statistical errors are ensured by constraining the stochastic paths with the help of a trial wavefunction, which also guides the dynamics and takes the form of a linear combination of HFB ansätze. Estimates for the observables are reconstructed through an extension of Wick's theorem to matrix elements between HFB product states. The usual combinatorial complexity associated with applying this theorem to four- and more-body operators is bypassed with a compact expression in terms of Pfaffians. The limiting case of a stochastic motion within Slater determinants guided by HFB trial wavefunctions is also considered. Finally, exploratory results for the spin-polarized Hubbard model in the attractive regime are presented.
Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations
Van Siclen, Clinton D
2007-02-01
A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
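The basin idea above can be sketched in two pieces: a standard rejection-free KMC step, and an effective escape-rate calculation that assumes Boltzmann equilibration over the basin's states. The energies, rates, and temperature below are placeholders, and the real method's residence-time statistics are richer than this sketch.

```python
import math
import random

random.seed(5)

def kmc_step(rates):
    # Rejection-free KMC: pick an event with probability proportional to its
    # rate and advance the clock by an exponential waiting time whose mean,
    # 1 / (total rate), is the residence time of the current state.
    total = sum(rates.values())
    r = random.random() * total
    acc = 0.0
    for event, k in rates.items():
        acc += k
        if r <= acc:
            chosen = event
            break
    dt = -math.log(random.random()) / total
    return chosen, dt

def basin_escape(escape_rates, energies, kT=1.0):
    # Assuming the walker equilibrates inside the basin, weight each internal
    # state's escape rates by its Boltzmann occupation probability to get
    # effective rates out of the basin, plus the basin residence time.
    w = [math.exp(-e / kT) for e in energies]
    z = sum(w)
    occupation = [wi / z for wi in w]
    eff = {}
    for p, rates in zip(occupation, escape_rates):
        for event, k in rates.items():
            eff[event] = eff.get(event, 0.0) + p * k
    residence = 1.0 / sum(eff.values())
    return eff, residence
```

Once the effective rates are in hand, the basin can be treated as a single super-state in the ordinary `kmc_step` loop, which is what makes the scheme a drop-in modification of standard KMC.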
Bogart, D.
1996-06-01
Although resonance neutron captures for 238U in water-moderated lattices are known to occur near moderator-fuel interfaces, the sharply attenuated spatial captures here have not been calculated by multigroup transport or Monte Carlo methods. Advances in computer speed and capacity have restored interest in applying Monte Carlo methods to evaluate spatial resonance captures in fueled lattices. Recently published studies have placed complete reliance on the ostensible precision of the Monte Carlo approach without auxiliary confirmation that resonance processes were followed adequately or that the Monte Carlo method was applied appropriately. Other methods of analysis that have evolved from early resonance integral theory have provided a basis for an alternative approach to determine radial resonance captures in fuel rods. A generalized method has been formulated and confirmed by comparison with published experiments of high spatial resolution for radial resonance captures in metallic uranium rods. The same analytical method has been applied to uranium-oxide fuels. The generalized method defined a spatial effective resonance cross section that is a continuous function of distance from the moderator-fuel interface and enables direct calculation of precise radial resonance capture distributions in fuel rods. This generalized method is used as a reference for comparison with two recent independent studies that have employed different Monte Carlo codes and cross-section libraries. Inconsistencies in the Monte Carlo application or in how pointwise cross-section libraries are sampled may exist. It is shown that refined Monte Carlo solutions with improved spatial resolution would not asymptotically approach the reference spatial capture distributions.
NASA Astrophysics Data System (ADS)
McDonough, Kevin K.
The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler implementation strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed that include the use of Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first and computational procedures of such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures. Examples of
Stochastic modeling of polarized light scattering using a Monte Carlo based stencil method.
Sormaz, Milos; Stamm, Tobias; Jenny, Patrick
2010-05-01
This paper deals with an efficient and accurate simulation algorithm to solve the vector Boltzmann equation for polarized light transport in scattering media. The approach is based on a stencil method, which was previously developed for unpolarized light scattering and proved to be much more efficient (speedup factors of up to 10 were reported) than the classical Monte Carlo while being equally accurate. To validate what we believe to be the new stencil method, a substrate composed of spherical non-absorbing particles embedded in a non-absorbing medium was considered. The corresponding single scattering Mueller matrix, which is required to model scattering of polarized light, was determined based on the Lorenz-Mie theory. From simulations of a reflected polarized laser beam, the Mueller matrix of the substrate was computed and compared with an established reference. The agreement is excellent, and it could be demonstrated that a significant speedup of the simulations is achieved due to the stencil approach compared with the classical Monte Carlo.
Viral load and stochastic mutation in a Monte Carlo simulation of HIV
NASA Astrophysics Data System (ADS)
Ruskin, H. J.; Pandey, R. B.; Liu, Y.
2002-08-01
Viral load is examined, as a function of primary viral growth factor ( Pg) and mutation, through a computer simulation model for HIV immune response. Cell-mediated immune response is considered on a cubic lattice with four cell types: macrophage ( M), helper ( H), cytotoxic ( C), and virus ( V). Rule-based interactions are used with random sequential update of the binary cellular states. The relative viral load (the concentration of virus with respect to helper cells) is found to increase with the primary viral growth factor above a critical value ( Pc), leading to a phase transition from immuno-competent to immuno-deficient state. The critical growth factor ( Pc) seems to depend on mobility and mutation. The stochastic growth due to mutation is found to depend non-monotonically on the relative viral load, with a maximum at a characteristic load which is lower for stronger viral growth.
Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations
Tippayakul, C.; Ivanov, K.; Misu, S.
2006-07-01
This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated through cooperation between Penn State University and AREVA NP to enhance the original Penn State University MCOR version for use as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN replaces the existing ORIGEN-S depletion module in MCOR. Furthermore, online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version, instead of using a burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. The comparisons showed quite significant improvements of the results in terms of k_inf, fission rate distributions and isotopic contents. (authors)
NASA Astrophysics Data System (ADS)
Meng, Xiangcui; Wang, Shangxu; Tang, Genyang; Li, Jingnan; Sun, Chao
2017-06-01
Coda waves are usually regarded as noise in conventional seismic exploration. We use the energy of coda waves to estimate the stochastic parameters of random media, which is necessary to characterize the subsurface reservoir and to assess the total oil or gas volume in a heterogeneous reservoir. In this paper, we briefly present the Monte Carlo radiative transfer (MCRT) theory in acoustic media, which is often used in seismology to model the envelopes of seismic energy in approximated random media. We then estimate the fluctuation strength and correlation length in 2D acoustic heterogeneous media based on MCRT simulation of synthetic crosswell seismic data. Our results show that sufficient energy information over a range of offsets can alleviate the non-uniqueness of the inversion result. To properly balance the energy contributions of direct waves and coda waves in the inversion, we modify the objective function to compare the logarithms of the RT envelopes and of the envelopes computed with the finite-difference method. This revision of the objective function makes the inversion more accurate and more stable. Even when there is strong noise in the envelopes of the seismic data, the modified equation tends to estimate the correct values. Moreover, the estimated correlation length and fluctuation strength are influenced by the type of random model used in the MCRT simulation. When applying MCRT simulation to estimate the stochastic parameters of a medium, it is better to choose the type of random media that matches the investigated medium.
Solution of deterministic-stochastic epidemic models by dynamical Monte Carlo method
NASA Astrophysics Data System (ADS)
Aièllo, O. E.; Haas, V. J.; daSilva, M. A. A.; Caliri, A.
2000-07-01
This work is concerned with the dynamical Monte Carlo (MC) method and its application to models originally formulated in a continuous-deterministic approach. Specifically, a susceptible-infected-removed-susceptible (SIRS) model is used to analyze aspects of the dynamical MC algorithm and to develop its applications in epidemic contexts. We first examine two known approaches to the dynamical interpretation of the MC method and then apply one of them to the SIRS model. The chosen method is based on the Poisson process, with a hierarchy of events, properly calculated waiting times between events, and independence of the simulated events as its basic requirements. To verify the consistency of the method, preliminary MC results are compared against exact steady-state solutions and other general numerical results (provided by the Runge-Kutta method): good agreement is found. Finally, a space-dependent extension of the SIRS model is introduced and treated by MC. The results are interpreted in light of the herd-immunity concept.
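The Poisson-process requirements named above (a hierarchy of events, properly calculated waiting times, independent events) can be sketched as a minimal Gillespie-style dynamical MC for a well-mixed SIRS model. The rates and the spatially homogeneous form are illustrative assumptions, not the paper's exact implementation:

```python
import random

def sirs_gillespie(S, I, R, beta, gamma, xi, t_max, seed=0):
    """Dynamical MC (Gillespie) simulation of a well-mixed SIRS model.

    Events and rates (N = S + I + R):
      infection        S -> I at rate beta * S * I / N
      removal          I -> R at rate gamma * I
      loss of immunity R -> S at rate xi * R
    Waiting times between events are exponential, so the event
    sequence forms a Poisson process, as the method requires.
    """
    rng = random.Random(seed)
    t, N = 0.0, S + I + R
    while t < t_max and I > 0:
        rates = [beta * S * I / N, gamma * I, xi * R]
        total = sum(rates)
        t += rng.expovariate(total)          # properly calculated waiting time
        u = rng.uniform(0.0, total)          # pick one event, weighted by rate
        if u < rates[0]:
            S, I = S - 1, I + 1
        elif u < rates[0] + rates[1]:
            I, R = I - 1, R + 1
        else:
            R, S = R - 1, S + 1
    return S, I, R
```

A space-dependent extension would replace the aggregate rates with per-site rates on a lattice; the event-selection and waiting-time logic stays the same.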
NASA Astrophysics Data System (ADS)
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first-order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single-degree-of-freedom (dof) nonlinear oscillators and a multi-degree-of-freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling
NASA Astrophysics Data System (ADS)
Aslam, Kamran
This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings is included, along with a discussion of ranking methods currently used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match, and single-elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. This experimentation explores non-iid effects, considers the varying importance of points in a match, and allows an unlimited number of matches to be simulated between unlikely opponents. These studies have produced pdfs that accurately model an individual tennis player's ability, along with a realistic, fair, and mathematically sound platform for ranking players.
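The game-level building block of such simulations can be sketched in a few lines: the Newton-Keller closed form for the probability of winning a game from an iid point-win probability p, checked against a direct Monte Carlo simulation. The scoring rules are standard tennis; everything else here is illustrative, not the dissertation's Matlab code:

```python
import random

def play_game(p, rng):
    """Simulate one game for a server who wins each point with
    probability p; a game is won at 4+ points with a 2-point lead."""
    a = b = 0
    while True:
        if rng.random() < p:
            a += 1
        else:
            b += 1
        if max(a, b) >= 4 and abs(a - b) >= 2:
            return a > b

def game_win_prob(p, n=20000, seed=1):
    """Monte Carlo estimate of the game-win probability."""
    rng = random.Random(seed)
    return sum(play_game(p, rng) for _ in range(n)) / n

def game_win_prob_exact(p):
    """Newton-Keller closed form: win in 4, 5, or 6 points, plus the
    geometric series for winning from deuce."""
    q = 1.0 - p
    return p**4 * (1 + 4*q + 10*q**2) + 20 * p**3 * q**3 * p**2 / (1 - 2*p*q)
```

The Monte Carlo estimate agrees with the closed form to within sampling error, and unlike the closed form it extends directly to non-iid point probabilities.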
A new stochastic algorithm for proton exchange membrane fuel cell stack design optimization
NASA Astrophysics Data System (ADS)
Chakraborty, Uttara
2012-10-01
This paper develops a new stochastic heuristic for proton exchange membrane fuel cell stack design optimization. The problem involves finding the optimal size and configuration of stand-alone, fuel-cell-based power supply systems: the stack is to be configured so that it delivers the maximum power output at the load's operating voltage. The problem looks straightforward but is analytically intractable and computationally hard. No exact solution can be found, nor is it easy to determine the exact number of local optima; we are therefore forced to settle for approximate or near-optimal solutions. This real-world problem, first reported in Journal of Power Sources 131, poses both engineering and computational challenges and is representative of many of today's open problems in fuel cell design involving a mix of discrete and continuous parameters. The new algorithm is compared against a genetic algorithm, simulated annealing, and the (1+1)-EA. Statistical tests of significance show that the results produced by our method are better than the best-known solutions for this problem published in the literature. A finite Markov chain analysis of the new algorithm establishes an upper bound on the expected time to find the optimum solution.
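For orientation, a generic (1+1)-EA of the kind used as a baseline above, shown on a toy bitstring objective. This is not the paper's new heuristic, nor its fuel-cell objective function; it only illustrates the single-parent mutate-and-accept loop that the Markov chain analysis of such algorithms studies:

```python
import random

def one_plus_one_ea(fitness, n_bits, budget, seed=0):
    """Generic (1+1)-EA: keep a single parent, flip each bit
    independently with probability 1/n, and accept the child if it
    is at least as fit as the parent."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    f_parent = fitness(parent)
    for _ in range(budget):
        child = [b ^ (rng.random() < 1.0 / n_bits) for b in parent]
        f_child = fitness(child)
        if f_child >= f_parent:
            parent, f_parent = child, f_child
    return parent, f_parent
```

On the OneMax objective (fitness = number of ones) the expected optimization time is O(n log n), which is the kind of bound a finite Markov chain analysis delivers.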
NASA Astrophysics Data System (ADS)
Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan
2015-10-01
Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas in high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most previously proposed methods, we adopt a generative model and carry out the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. This strategy is motivated by the fact that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. The statistical properties of the intensity, interferometric phase, and coherence of each region are then explored and included as region terms. Roofs are not considered directly, as in most cases they are mixed with walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms are taken into consideration together with an edge term related to the contours of the layover and corner line. In the optimization step, special transition kernels are designed in order to achieve convergent reconstruction outputs and avoid local extrema. The proposed framework is evaluated on a TanDEM-X dataset and performs well for building reconstruction.
The costs of production of alternative jet fuel: A harmonized stochastic assessment.
Bann, Seamus J; Malina, Robert; Staples, Mark D; Suresh, Pooja; Pearlson, Matthew; Tyner, Wallace E; Hileman, James I; Barrett, Steven
2017-03-01
This study quantifies and compares the costs of production for six alternative jet fuel pathways using consistent financial and technical assumptions. Uncertainty was propagated through the analysis using Monte Carlo simulations. The six processes assessed were HEFA, advanced fermentation, Fischer-Tropsch (FT), aqueous phase processing, hydrothermal liquefaction, and fast pyrolysis (FPH). The results indicate that none of the six processes would be profitable in the absence of government incentives, with HEFA using yellow grease, HEFA using tallow, and FT showing the lowest mean jet fuel prices at $0.91/liter ($0.66/liter-$1.24/liter), $1.06/liter ($0.79/liter-$1.42/liter), and $1.15/liter ($0.95/liter-$1.39/liter), respectively. This study also quantifies plant performance in the United States through a Renewable Fuel Standard policy analysis. The results indicate that some pathways could achieve a positive NPV with relatively high likelihood under existing policy supports, with HEFA and FPH showing the highest probabilities of positive NPV at 94.9% and 99.7%, respectively, in the best-case scenario.
NASA Astrophysics Data System (ADS)
McNab, Walt W.
2001-02-01
Biotransformation of dissolved groundwater hydrocarbon plumes emanating from leaking underground fuel tanks should, in principle, result in plume length stabilization over relatively short distances, thus diminishing the environmental risk. However, because the behavior of hydrocarbon plumes is usually poorly constrained at most leaking underground fuel tank sites in terms of release history, groundwater velocity, dispersion, as well as the biotransformation rate, demonstrating such a limitation in plume length is problematic. Biotransformation signatures in the aquifer geochemistry, most notably elevated bicarbonate, may offer a means of constraining the relationship between plume length and the mean biotransformation rate. In this study, modeled plume lengths and spatial bicarbonate differences among a population of synthetic hydrocarbon plumes, generated through Monte Carlo simulation of an analytical solute transport model, are compared to field observations from six underground storage tank (UST) sites at military bases in California. Simulation results indicate that the relationship between plume length and the distribution of bicarbonate is best explained by biotransformation rates that are consistent with ranges commonly reported in the literature. This finding suggests that bicarbonate can indeed provide an independent means for evaluating limitations in hydrocarbon plume length resulting from biotransformation.
Kinetic Monte Carlo (KMC) simulation of fission product silver transport through TRISO fuel particle
NASA Astrophysics Data System (ADS)
de Bellefon, G. M.; Wirth, B. D.
2011-06-01
A mesoscale kinetic Monte Carlo (KMC) model developed to investigate the diffusion of silver through the pyrolytic carbon and silicon carbide containment layers of a TRISO fuel particle is described. The release of radioactive silver from TRISO particles has been studied for nearly three decades, yet the mechanisms governing silver transport are not fully understood. This model atomically resolves Ag, but provides a mesoscale medium of carbon and silicon carbide, which can include a variety of defects including grain boundaries, reflective interfaces, cracks, and radiation-induced cavities that can either accelerate silver diffusion or slow diffusion by acting as traps for silver. The key input parameters to the model (diffusion coefficients, trap binding energies, interface characteristics) are determined from available experimental data, or parametrically varied, until more precise values become available from lower length scale modeling or experiment. The predicted results, in terms of the time/temperature dependence of silver release during post-irradiation annealing and the variability of silver release from particle to particle, have been compared to available experimental data from the German HTR Fuel Program (Gontard and Nabielek [1]) and Minato and co-workers (Minato et al. [2]).
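The trapping idea can be illustrated with a toy 1-D kinetic Monte Carlo walk in which some sites carry an extra binding energy and therefore much longer residence times. The barriers, attempt frequency, temperature, and geometry below are hypothetical placeholders, not TRISO parameters:

```python
import math
import random

KB = 8.617e-5  # Boltzmann constant, eV/K

def kmc_first_passage(n_sites, trap_frac, T, E_m=1.0, E_b=0.8,
                      nu=1.0e13, seed=0):
    """Toy 1-D KMC: a single Ag atom hops between lattice sites with
    Arrhenius rates; a fraction of sites are traps that add a binding
    energy E_b to the migration barrier E_m. Returns the first-passage
    time (s) to cross the layer. Illustrative parameters only."""
    rng = random.Random(seed)
    is_trap = [rng.random() < trap_frac for _ in range(n_sites)]
    pos, t = 0, 0.0
    while pos < n_sites:
        barrier = E_m + (E_b if is_trap[pos] else 0.0)
        rate = nu * math.exp(-barrier / (KB * T))  # hop rate per direction
        t += rng.expovariate(2.0 * rate)           # residence time at site
        pos += 1 if rng.random() < 0.5 else -1     # unbiased hop
        pos = max(pos, 0)                          # reflecting inner boundary
    return t
```

Even a modest binding energy multiplies the residence time at trap sites by an Arrhenius factor, which is why traps dominate the predicted release times.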
A Stochastic Method for Estimating the Effect of Isotopic Uncertainties in Spent Nuclear Fuel
DeHart, M.D.
2001-08-24
This report describes a novel approach developed at the Oak Ridge National Laboratory (ORNL) for the estimation of the uncertainty in the prediction of the neutron multiplication factor for spent nuclear fuel. This technique focuses on burnup credit, where credit is taken in criticality safety analysis for the reduced reactivity of fuel irradiated in and discharged from a reactor. Validation methods for burnup credit have attempted to separate the uncertainty associated with isotopic prediction methods from that of criticality eigenvalue calculations. Biases and uncertainties obtained in each step are combined additively. This approach, while conservative, can be excessive because of the physical assumptions employed. This report describes a statistical approach based on Monte Carlo sampling to directly estimate the total uncertainty in eigenvalue calculations resulting from uncertainties in isotopic predictions. The results can also be used to demonstrate the relative conservatism and statistical confidence associated with the method of additively combining uncertainties. This report does not draw definitive conclusions on the magnitude of biases and uncertainties associated with isotopic predictions in a burnup credit analysis; these terms will vary depending on system design and the set of isotopic measurements used as a basis for estimating isotopic variances. Instead, the report describes a method that can be applied with a given design and set of isotopic data for estimating design-specific biases and uncertainties.
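The direct-sampling idea, as opposed to additively combining per-step uncertainties, can be sketched with a deliberately simple surrogate standing in for the eigenvalue calculation. The surrogate coefficients and the 3%/5% isotopic uncertainties are invented for illustration and carry no physical meaning:

```python
import random
import statistics

def keff_surrogate(n_fissile, n_absorber):
    """Hypothetical linear surrogate for the eigenvalue; a real
    analysis would run a criticality calculation here."""
    return 0.90 + 0.08 * n_fissile - 0.05 * n_absorber

def total_keff_uncertainty(trials=10000, seed=0):
    """Estimate the total eigenvalue uncertainty by sampling the
    isotopic inventories (normalized multipliers with assumed 3% and
    5% relative standard deviations) and propagating each sample
    through the eigenvalue surrogate."""
    rng = random.Random(seed)
    ks = []
    for _ in range(trials):
        f = rng.gauss(1.0, 0.03)   # fissile inventory multiplier
        a = rng.gauss(1.0, 0.05)   # absorber inventory multiplier
        ks.append(keff_surrogate(f, a))
    return statistics.mean(ks), statistics.stdev(ks)
```

The sampled standard deviation is the total uncertainty; comparing it with the additive combination of the two per-isotope terms quantifies the conservatism of the additive method.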
NASA Astrophysics Data System (ADS)
Kotalczyk, G.; Kruis, F. E.
2017-07-01
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow a solution framework to be formulated for many chemical engineering processes. This study presents a novel concept for calculating coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented within the scope of a constant-number scheme: low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method, which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
Conlin, Jeremy Lloyd; Tobin, Stephen J
2010-10-13
There is a great need in the safeguards community to be able to nondestructively quantify the plutonium mass of a spent nuclear fuel assembly. As part of the Next Generation Safeguards Initiative, we are investigating several techniques, or detector systems, which, when integrated, will be capable of quantifying the plutonium mass of a spent fuel assembly without dismantling it. This paper reports on the simulation of one of these techniques, the Passive Neutron Albedo Reactivity with Fission Chambers (PNAR-FC) system. The response of this system over a wide range of spent fuel assemblies with different burnup, initial enrichment, and cooling time characteristics is shown. A Monte Carlo method for using these modeled results to estimate the fissile content of a spent fuel assembly has been developed. A few numerical simulations using this method are shown. Finally, additional developments that are still needed and under way are discussed.
NASA Astrophysics Data System (ADS)
Kang, Jinfen; Moriyama, Koji; Kim, Seung Hyun
2016-09-01
This paper presents an extended, stochastic reconstruction method for catalyst layers (CLs) of Proton Exchange Membrane Fuel Cells (PEMFCs). The focus is placed on the reconstruction of customized, low platinum (Pt) loading CLs, where the microstructure can substantially influence performance. The sphere-based simulated annealing (SSA) method is extended to generate CL microstructures with specified and controllable structural properties for agglomerates, ionomer, and Pt catalysts. In the present method, the agglomerate structures are controlled by employing a trial two-point correlation function in the simulated annealing process. An offset method is proposed to generate more realistic ionomer structures. The variations of ionomer structures under different humidity conditions are considered to mimic swelling effects. A method to control Pt loading, distribution, and utilization is presented. The extension of the method to consider heterogeneity in structural properties, which can be found in manufactured CL samples, is also presented. Various reconstructed CLs are generated to demonstrate the capability of the proposed method. Proton transport properties of the reconstructed CLs are calculated and validated against experimental data.
NASA Astrophysics Data System (ADS)
Jangi, Mehdi; Lucchini, Tommaso; Gong, Cheng; Bai, Xue-Song
2015-09-01
An Eulerian stochastic fields (ESF) method accelerated with the chemistry coordinate mapping (CCM) approach for modelling spray combustion is formulated and applied to model diesel combustion in a constant-volume vessel. In ESF-CCM, the thermodynamic states of the discretised stochastic fields are mapped into a low-dimensional phase space. Integration of the stiff chemical ODEs is performed in the phase space and the results are mapped back to the physical domain. After validating the ESF-CCM, the method is used to investigate the effects of fuel cetane number on the structure of diesel spray combustion. It is shown that, depending on the fuel cetane number, the liftoff length varies, which can lead to a change in combustion mode from classical diesel spray combustion to fuel-lean premixed combustion. Spray combustion with a shorter liftoff length exhibits the characteristics of the classical conceptual diesel combustion model proposed by Dec in 1997 (http://dx.doi.org/10.4271/970873), whereas in a case with a lower cetane number the liftoff length is much larger and the spray combustion probably occurs in a fuel-lean premixed mode. Nevertheless, the transport budget at the liftoff location shows that stabilisation at all cetane numbers is governed primarily by the auto-ignition process.
NASA Astrophysics Data System (ADS)
Kawaki, Keima; Kuno, Yoshihito; Ichinose, Ikuo
2017-05-01
In this paper, we study phase diagrams of the extended Bose-Hubbard model (EBHM) in one dimension by means of quantum Monte Carlo (QMC) simulations using the stochastic series expansion (SSE). In the EBHM, there exists a nearest-neighbor repulsion as well as the on-site repulsion. In the SSE-QMC simulation, the highest particle number at each site, nc, is also a controllable parameter, and we found that the phase diagrams depend on the value of nc. It is shown that, in addition to the Mott insulator, superfluid, and density-wave phases, the so-called Haldane insulator and supersolid phases appear in the phase diagrams, and their locations in the phase diagrams are clarified.
Bieda, Bogusław
2014-05-15
The purpose of this paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to the life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. To assess the uncertainty, the CrystalBall® (CB) software, which works with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally developed for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM), and blast furnace gas, collected from MSP for 2005, was analyzed and used for MC simulation of the LCI model. To describe the random nature of all the main products used in this study, a normal distribution was applied. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and can be applied to any steel plant. The results obtained from this study can help practitioners and decision-makers in steel production management.
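The MC/LCI procedure described above (normal distributions around reported flows, many trials, summary statistics) can be sketched in a few lines. The flow names, values, and the 5% relative standard deviation below are illustrative placeholders, not MSP data:

```python
import random
import statistics

def simulate_lci(inventory_means, rel_sd=0.05, trials=10000, seed=0):
    """Monte Carlo LCI: each inventory flow (e.g. annual tonnage of
    coke, pig iron, sinter) is treated as normally distributed around
    its reported value; the trials yield a frequency distribution of
    the total rather than a single point estimate."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.gauss(m, rel_sd * m)
                          for m in inventory_means.values()))
    return statistics.mean(totals), statistics.stdev(totals)
```

A spreadsheet add-in such as CB automates exactly this loop and renders the `totals` list as frequency charts and statistical reports.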
Plante, Ianik; Ponomarev, Artem; Cucinotta, Francis A
2011-02-01
The description of energy deposition by high charge and energy (HZE) nuclei is of importance for space radiation risk assessment and for hadron therapy. Such ions deposit a large fraction of their energy within the so-called core of the track and a smaller proportion in the penumbra (or track periphery). We study the stochastic patterns of the radial dependence of energy deposition using the Monte Carlo track structure codes RITRACKS and RETRACKS, which were used to simulate HZE tracks and calculate energy deposition in voxels of 40 nm. The simulation of a (56)Fe(26+) ion of 1 GeV u(-1) revealed zones of high-energy deposition which may be found as far as a few millimetres away from the track core in some simulations. The calculation also showed that ∼43% of the energy was deposited in the penumbra. These 3D stochastic simulations, combined with a visualisation interface, are a powerful tool for biophysicists which may be used to study radiation-induced biological effects such as double strand breaks and oxidative damage and the subsequent cellular and tissue damage processing and signalling.
NASA Astrophysics Data System (ADS)
Iqbal, M. Javed; Mirza, Nasir M.; Mirza, Sikander M.
2008-01-01
During normal operation of PWRs, routine fuel rod failures result in the release of radioactive fission products (RFPs) into the primary coolant. In this work, a stochastic model has been developed for the simulation of failure time sequences and release rates for the estimation of fission product activity in the primary coolant of a typical PWR under power perturbations. In the first part, a stochastic approach is developed based on the generation of fuel failure event sequences by sampling time-dependent intensity functions. Then the three-stage deterministic methodology of the FPCART code is extended to include failure sequences and random release rates in a computer code, FPCART-ST, which uses the state-of-the-art LEOPARD and ODMUG codes as subroutines. The 131I activity in the primary coolant predicted by the FPCART-ST code is found to be in good agreement with the corresponding values measured at the ANGRA-1 nuclear power plant. The predictions of the FPCART-ST code with the constant-release option also agree well with corresponding experimental values for the time-dependent 135I, 135Xe and 89Kr concentrations in primary coolant measured during the EDITHMOX-1 experiments.
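Sampling event times from a time-dependent intensity function, as in the first part above, is commonly done by Lewis-Shedler thinning. The generic sketch below is an assumption about the approach, not the FPCART-ST implementation:

```python
import random

def failure_times(intensity, lam_max, t_end, seed=0):
    """Sample failure event times from a time-dependent intensity
    function by thinning: generate candidate events from a homogeneous
    Poisson process at rate lam_max >= intensity(t) for all t, and
    accept each candidate with probability intensity(t) / lam_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)   # next candidate event
        if t > t_end:
            return events
        if rng.random() < intensity(t) / lam_max:
            events.append(t)            # accepted failure time
```

Any bounded intensity works, so power perturbations can be represented by letting `intensity(t)` rise and fall with the power history.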
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. The results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600-MeV cyclotron are given.
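A creeping random search with parameter constraints can be sketched as follows on a toy objective; the step size and bounds are illustrative, and the real application would evaluate the beam-transport model in place of `f`:

```python
import random

def creeping_random_search(f, x0, bounds, step, iters, seed=0):
    """Creeping random search (sequential random perturbation):
    perturb the current parameter vector by a small uniform step,
    clip to the parameter constraints, and keep the perturbed point
    only if it improves the objective, so infeasible parameter values
    never enter the search."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    for _ in range(iters):
        y = [min(max(xi + rng.uniform(-step, step), lo), hi)
             for xi, (lo, hi) in zip(x, bounds)]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx
```

Because every candidate is clipped to `bounds` before evaluation, the parameter constraints are enforced by construction, mirroring how the algorithm above rules out infeasible solutions.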
NASA Astrophysics Data System (ADS)
Chirici, G.; Scotti, R.; Montaghi, A.; Barbati, A.; Cartisano, R.; Lopez, G.; Marchetti, M.; McRoberts, R. E.; Olsson, H.; Corona, P.
2013-12-01
This paper presents an application of Airborne Laser Scanning (ALS) data in conjunction with an IRS LISS-III image for mapping forest fuel types. For two study areas of 165 km² and 487 km² in Sicily (Italy), 16,761 plots of size 30 m × 30 m were distributed using a tessellation-based stratified sampling scheme. ALS metrics and spectral signatures from IRS extracted for each plot were used as predictors to classify forest fuel types observed and identified by photointerpretation and fieldwork. After traditional parametric methods produced unsatisfactory results, three non-parametric classification approaches were tested: (i) classification and regression trees (CART), (ii) the CART bagging method called Random Forests, and (iii) the CART bagging/boosting stochastic gradient boosting (SGB) approach. This contribution summarizes previous experience using ALS data for estimating forest variables useful for fire management in general and for fuel type mapping in particular. It describes the characteristics of classification and regression trees, the pre-processing operations, the classification algorithms, and the achieved results. The results demonstrated the superiority of the SGB method, with an overall accuracy of 84%. The most relevant ALS metric was canopy cover, defined as the percentage of non-ground returns. Other relevant metrics included the spectral information from IRS and several other ALS metrics such as percentiles of the height distribution, the mean height of all returns, and the number of returns.
NASA Astrophysics Data System (ADS)
Xoubi, Ned
2005-12-01
The ability to accurately predict the multiplication factor (keff) of a nuclear reactor core as a function of exposure continues to be an elusive task for core designers despite decades of advances in computational methods. The difference between a predicted eigenvalue (target) and the actual eigenvalue at critical reactor conditions is herein referred to as the "eigenvalue drift." This dissertation studies exposure-dependent eigenvalue drift using MCNP-based fuel management analysis of the ORNL High Flux Isotope Reactor core. Spatial-dependent burnup is evaluated using the MONTEBURNS and ALEPH codes to link MCNP to ORIGEN to help analyze the behavior of keff as a function of fuel exposure. Understanding the exposure-dependent eigenvalue drift of a nuclear reactor is of particular relevance when trying to predict the impact of major design changes upon fuel cycle behavior and length. In this research, the design of an advanced HFIR core with a fuel loading of 12 kg of 235U is contrasted against the current loading of 9.4 kg. The goal of applying exposure dependent eigenvalue characterization is to produce a more accurate prediction of the fuel cycle length than prior analysis techniques, and to improve our understanding of the reactivity behavior of the core throughout the cycle. This investigation predicted a fuel cycle length of 40 days, representing a 50% increase in the cycle length in response to a 25% increase in fuel loading. The average burnup increased by about 48 MWd/kg U and it was confirmed that the excess reactivity can be controlled with the present design and arrangement of control elements throughout the core's life. Another major design change studied was the effect of installing an internal beryllium reflector upon cycle length. Exposure dependent eigenvalue predictions indicate that the actual benefit could be twice as large as that originally assessed via beginning-of-life (BOL) analyses.
NASA Astrophysics Data System (ADS)
Zhang, Yanxiang; Ni, Meng; Yan, Mufu; Chen, Fanglin
2015-12-01
Nanostructured electrodes are widely used for low-temperature solid oxide fuel cells due to their remarkably high activity. However, industrial applications of infiltrated electrodes are hindered by durability issues, such as microstructure stability against thermal aging. Few strategies are available to overcome this challenge because of the limited knowledge about the coarsening kinetics of infiltrated electrodes and about how the potentially important factors affect stability. In this work, the generic thermal aging kinetics of the three-dimensional microstructures of infiltrated electrodes is investigated with a kinetic Monte Carlo simulation model based on a surface diffusion mechanism. The effects of temperature, infiltration loading, wettability, and electrode configuration are studied, and the key geometric parameters are calculated, such as the infiltrate particle size, the total and percolated quantities of three-phase boundary length and infiltrate surface area, and the tortuosity factor of the infiltrate network. Through this parametric study, several strategies to improve thermal aging stability are proposed.
Monte Carlo boundary source approach in MOX fuel test capsule design
Chang, G.S.; Ryskamp, J.M.
1999-09-01
To demonstrate that the differences between weapons-grade (WG) mixed oxide (MOX) and reactor-grade MOX fuel are minimal, and that the commercial MOX experience base is therefore applicable, an average-power test (6 to 10 kW/ft) of WG MOX fuel was inserted into the Advanced Test Reactor (ATR) in January 1998. A high-power test (10 to 15 kW/ft) of WG MOX fuel in ATR is being fabricated as a follow-on to the average-power test. Two MOX capsules with 8.9 GWd/t burnup were removed from ATR on September 13, 1998, and replaced by two fresh WG MOX fuel capsules in regions with less thermal neutron flux (top-1 and bottom-1, away from the core center). To compensate for 239Pu depletion, which causes the linear heat generation rates (LHGRs) to decrease, the INCONEL shield was replaced by an aluminum shield in the phase-II irradiation. The authors describe and compare the results of the detailed MCNP ATR quarter-core model (QCM) and the isolated box model with boundary source (IBMBS). Physics analyses were performed with these two different models to provide the neutron/fission heat rate distribution data in the WG MOX fuel test assembly, with INCONEL and aluminum shrouds, located in the small I-24 hole of ATR.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.
1990-01-01
Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
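The sampling-error experiment described above can be mimicked in a toy form (not the GATE-tuned stochastic model of the paper): an AR(1) surrogate for the area-averaged rain rate is observed only intermittently, and the distribution of relative errors of the sampled monthly mean is examined. All parameter values here are illustrative assumptions.

```python
import numpy as np

def sampling_error_dist(n_months=500, steps=720, revisit=24, phi=0.9, seed=0):
    """Toy sampling-error experiment: an AR(1) surrogate for an hourly
    area-averaged rain rate (720 steps = 30 days) is observed only every
    `revisit` hours; return relative errors of the sampled monthly means."""
    rng = np.random.default_rng(seed)
    errors = np.empty(n_months)
    for m in range(n_months):
        e = rng.normal(size=steps)
        rain = np.empty(steps)
        rain[0] = e[0]
        for t in range(1, steps):           # AR(1) temporal correlation
            rain[t] = phi * rain[t - 1] + e[t]
        rain = np.maximum(rain + 2.0, 0.0)  # shift/clip to a nonnegative rate
        true_mean = rain.mean()             # "perfect instrument" average
        seen_mean = rain[::revisit].mean()  # intermittent satellite visits
        errors[m] = (seen_mean - true_mean) / true_mean
    return errors

errors = sampling_error_dist()
```

With these settings the error distribution has near-zero mean and is close to normal, echoing the qualitative finding quoted above.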
Fulger, Daniel; Scalas, Enrico; Germano, Guido
2008-02-01
We present a numerical method for the Monte Carlo simulation of uncoupled continuous-time random walks with a Lévy alpha-stable distribution of jumps in space and a Mittag-Leffler distribution of waiting times, and apply it to the stochastic solution of the Cauchy problem for a partial differential equation with fractional derivatives both in space and in time. The one-parameter Mittag-Leffler function is the natural survival probability leading to time-fractional diffusion equations. Transformation methods for Mittag-Leffler random variables were found later than the well-known transformation method by Chambers, Mallows, and Stuck for Lévy alpha-stable random variables and so far have not received as much attention; nor have they been used together with the latter in spite of their mathematical relationship due to the geometric stability of the Mittag-Leffler distribution. Combining the two methods, we obtain an accurate approximation of space- and time-fractional diffusion processes almost as easy and fast to compute as for standard diffusion processes.
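The two transformation methods named above can be sketched together: Chambers-Mallows-Stuck for symmetric Lévy alpha-stable jumps (alpha not equal to 1) and the inverse transformation for Mittag-Leffler waiting times, combined into one uncoupled CTRW path. This is a minimal illustration of the technique, not the authors' code; parameter choices are assumptions.

```python
import numpy as np

def levy_stable_sym(alpha, size, rng):
    """Symmetric Levy alpha-stable samples via Chambers-Mallows-Stuck (alpha != 1)."""
    phi = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = -np.log(rng.uniform(size=size))   # Exp(1) variate
    return (np.sin(alpha * phi) / np.cos(phi) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * phi) / w) ** ((1.0 - alpha) / alpha))

def mittag_leffler(beta, size, rng):
    """Mittag-Leffler waiting times of order beta in (0, 1) by inverse transformation.

    The bracket equals sin(beta*pi*(1-v)) / sin(beta*pi*v) > 0, so times are positive."""
    u = rng.uniform(size=size)
    v = rng.uniform(size=size)
    return (-np.log(u)
            * (np.sin(beta * np.pi) / np.tan(beta * np.pi * v)
               - np.cos(beta * np.pi)) ** (1.0 / beta))

def ctrw(alpha, beta, n_jumps, rng):
    """One uncoupled CTRW path: (event times, walker positions)."""
    t = np.cumsum(mittag_leffler(beta, n_jumps, rng))
    x = np.cumsum(levy_stable_sym(alpha, n_jumps, rng))
    return t, x
```

A path with, say, `ctrw(1.5, 0.8, 1000, rng)` then approximates a space- and time-fractional diffusion process when viewed on diffusive scales.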
NASA Astrophysics Data System (ADS)
Burdo, James S.
This research is based on the concept that the diversion of nuclear fuel pins from Light Water Reactor (LWR) spent fuel assemblies is feasible by a careful comparison of spontaneous fission neutron and gamma levels in the guide tube locations of the fuel assemblies. The goal is to be able to determine whether some of the assembly fuel pins are either missing or have been replaced with dummy or fresh fuel pins. It is known that for typical commercial power spent fuel assemblies, the dominant spontaneous neutron emissions come from Cm-242 and Cm-244. Because of the shorter half-life of Cm-242 (0.45 yr) relative to that of Cm-244 (18.1 yr), Cm-244 is practically the only neutron source contributing to the neutron source term after the spent fuel assemblies are more than two years old. Initially, this research focused upon developing MCNP5 models of PWR fuel assemblies, modeling their depletion using the MONTEBURNS code, and carrying out a preliminary depletion of a ¼ model 17x17 assembly from the TAKAHAMA-3 PWR. Later, the depletion and a more accurate isotopic distribution in the pins at discharge were modeled using the TRITON depletion module of the SCALE computer code. Benchmarking comparisons were performed with the MONTEBURNS and TRITON results. Subsequently, the neutron flux in each of the guide tubes of the TAKAHAMA-3 PWR assembly at two years after discharge as calculated by the MCNP5 computer code was determined for various scenarios. Cases were considered for all spent fuel pins present and for replacement of a single pin at a position near the center of the assembly (10,9) and at the corner (17,1). Some scenarios were duplicated with a gamma flux calculation for high energies associated with Cm-244. For each case, the difference between the flux (neutron or gamma) for all spent fuel pins and with a pin removed or replaced is calculated for each guide tube. Different detection criteria were established. The first was whether the relative error of the
Von Dreele, Robert
2017-08-29
One of the goals in developing GSAS-II was to expand from the capabilities of the original General Structure Analysis System (GSAS) which largely encompassed just structure refinement and post refinement analysis. GSAS-II has been written almost entirely in Python loaded with graphics, GUI and mathematical packages (matplotlib, pyOpenGL, wxpython, numpy and scipy). Thus, GSAS-II has a fully developed modern GUI as well as extensive graphical display of data and results. However, the structure and operation of Python has required new approaches to many of the algorithms used in crystal structure analysis. The extensions beyond GSAS include image calibration/integration as well as peak fitting and unit cell indexing for powder data which are precursors for structure solution. Structure solution within GSAS-II begins with either Pawley or LeBail extracted structure factors from powder data or those measured in a single crystal experiment. Both charge flipping and Monte Carlo-Simulated Annealing techniques are available; the former can be applied to (3+1) incommensurate structures as well as conventional 3D structures.
Random-Walk Monte Carlo Simulation of Intergranular Gas Bubble Nucleation in UO2 Fuel
Yongfeng Zhang; Michael R. Tonks; S. B. Biner; D.A. Andersson
2012-11-01
Using a random-walk particle algorithm, we investigate the clustering of fission gas atoms on grain boundaries in oxide fuels. The computational algorithm implemented in this work considers a planar surface representing a grain boundary on which particles appear at a rate dictated by the Booth flux, migrate two-dimensionally according to their grain boundary diffusivity, and coalesce by random encounters. Specifically, the intergranular bubble nucleation density is the key variable we investigate using a parametric study in which the temperature, grain boundary gas diffusivity, and grain boundary segregation energy are varied. The results reveal that the grain boundary bubble nucleation density can vary widely due to these three parameters, which may be an important factor in the observed variability in intergranular bubble percolation among grain boundaries in oxide fuel during fission gas release.
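The arrive-migrate-coalesce loop described above can be caricatured in a few lines. This is an illustrative toy (uniform arrivals on a periodic unit box, a fixed capture radius), not the Booth-flux model of the paper; every parameter is an assumption.

```python
import numpy as np

def gb_bubble_nucleation(n_steps, arrival_rate, step_sigma, capture_radius,
                         box=1.0, seed=0):
    """Toy 2-D random-walk model: particles arrive, diffuse, and coalesce.

    Returns the number of surviving clusters ('bubbles') after n_steps."""
    rng = np.random.default_rng(seed)
    pos = np.empty((0, 2))
    for _ in range(n_steps):
        # Arrivals: a Poisson number of new particles, uniform on the plane.
        n_new = rng.poisson(arrival_rate)
        pos = np.vstack([pos, rng.uniform(0, box, (n_new, 2))])
        if len(pos) == 0:
            continue
        # Diffusion: Gaussian steps with periodic boundaries.
        pos = (pos + rng.normal(0, step_sigma, pos.shape)) % box
        # Coalescence: merge any group closer than the capture radius.
        keep = np.ones(len(pos), bool)
        for i in range(len(pos)):
            if not keep[i]:
                continue
            d = np.linalg.norm(pos - pos[i], axis=1)
            close = (d < capture_radius) & keep
            close[i] = False
            if close.any():
                # The merged cluster sits at the centroid of its members.
                members = np.vstack([pos[i:i + 1], pos[close]])
                pos[i] = members.mean(axis=0)
                keep[close] = False
        pos = pos[keep]
    return len(pos)

n_bubbles = gb_bubble_nucleation(50, 2.0, 0.02, 0.05)
```

Sweeping `arrival_rate`, `step_sigma`, and `capture_radius` then plays the role of the temperature, diffusivity, and segregation-energy sweep in the parametric study.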
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts
NASA Astrophysics Data System (ADS)
Tsinko, Y.; Johnson, E. A.; Martin, Y. E.
2014-12-01
Natural range of variability of forest fire frequency is of great interest due to the current changing climate and the apparent increase in the number of fires. The variability of the annual area burned in Canada was not stable during the 20th century. Recently, these changes have been linked to large-scale climate cycles, such as Pacific Decadal Oscillation (PDO) phases and the El Niño Southern Oscillation (ENSO). The positive phase of the PDO was associated with an increased probability of hot, dry spells leading to drier fuels and increased area burned. However, so far only one historical timeline has been used to assess correlations between the natural climate oscillations and forest fire frequency. To address such limitations, weather generators are extensively used in hydrological and agricultural modeling to extend short instrumental records and to synthesize long sequences of daily weather parameters that are different from, but statistically similar to, historical weather. In the current study, synthetic weather models were used to assess effects of alternative weather timelines on fuel moisture in Canada by using Canadian Forest Fire Weather Index moisture codes and potential fire frequency. The variability of fuel moisture codes was found to increase with the length of the simulated series, indicating that the natural range of variability of forest fire frequency may be larger than that calculated from the available short records. This may be viewed as a manifestation of the Hurst effect. Since PDO phases are thought to be caused by diverse mechanisms including overturning oceanic circulation, some of the lower-frequency signals may be attributed to the long-term memory of the oceanic system. Thus, care must be taken when assessing the natural variability of climate-dependent processes without accounting for potential long-term mechanisms.
Shedlock, Daniel; Haghighat, Alireza
2005-01-01
In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12 discrete ordinates PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous-energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous-energy cross sections, and the fact that the CASK library is based on the old ENDF
Monte Carlo Reliability Analysis.
1987-10-01
(4) E. Cinlar, Introduction to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975. (5) R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life... Lewis and Z. Tu, "Monte Carlo Reliability Modeling by Inhomogeneous Markov Processes," Reliab. Engr. 16, 277-296 (1986).
NASA Astrophysics Data System (ADS)
Sabelfeld, K. K.
2015-09-01
A stochastic algorithm for simulation of fluctuation-induced kinetics of H2 formation on grain surfaces is suggested as a generalization of the technique developed in our recent studies [1] where this method was developed to describe the annihilation of spatially separate electrons and holes in a disordered semiconductor. The stochastic model is based on the spatially inhomogeneous, nonlinear integro-differential Smoluchowski equations with random source term. In this paper we derive the general system of Smoluchowski type equations for the formation of H2 from two hydrogen atoms on the surface of interstellar dust grains with physisorption and chemisorption sites. We focus in this study on the spatial distribution, and numerically investigate the segregation in the case of a source with a continuous generation in time and randomly distributed in space. The stochastic particle method presented is based on a probabilistic interpretation of the underlying process as a stochastic Markov process of an interacting particle system at discrete but randomly progressing time instants. The segregation is analyzed through the correlation analysis of the vector random field of concentrations, which appears to be isotropic in space and stationary in time.
NASA Astrophysics Data System (ADS)
Tayarani-Yoosefabadi, Z.; Harvey, D.; Bellerive, J.; Kjeang, E.
2016-01-01
Gas diffusion layer (GDL) materials in polymer electrolyte membrane fuel cells (PEMFCs) are commonly made hydrophobic to enhance water management by avoiding liquid water blockage of the pores and facilitating reactant gas transport to the adjacent catalyst layer. In this work, a stochastic microstructural modeling approach is developed to simulate the transport properties of a commercial carbon paper based GDL under a range of PTFE loadings and liquid water saturation levels. The proposed novel stochastic method mimics the GDL manufacturing process steps and resolves all relevant phases including fiber, binder, PTFE, liquid water, and gas. After thorough validation of the general microstructure with literature and in-house data, a comprehensive set of anisotropic transport properties is simulated for the reconstructed GDL at different PTFE loadings and liquid water saturation levels and validated through a comparison with in-house ex situ experimental data and empirical formulations. In general, the results show good agreement between simulated and measured data. Decreasing trends in porosity, gas diffusivity, and permeability are obtained with increasing PTFE loading and liquid water content, while the thermal conductivity is found to increase with liquid water saturation. Using the validated model, new correlations for saturation-dependent GDL properties are proposed.
NASA Astrophysics Data System (ADS)
Tribet, M.; Mougnaud, S.; Jégou, C.
2017-05-01
This work aims to better understand the nature and evolution of energy deposits at the UO2/water reactional interface subjected to alpha irradiation, through an original approach based on Monte-Carlo-type simulations using the MCNPX code. Such an approach has the advantage of describing the energy deposit profiles on both sides of the interface (UO2 and water). The calculations have been performed on simple geometries, with data from an irradiated UOX fuel (burnup of 47 GWd.tHM-1 and 15 years of alpha decay). The influence of geometric parameters such as the diameter and the calculation steps at the reactional interface is discussed, and the exponential laws to be used in practice are suggested. The case of cracks with various apertures (from 5 to 35 μm) has also been examined; these calculations have also yielded new information on the mean range of radiolytic species in cracks, and thus on the local chemistry.
Rogers, Kristin; Seager, Thomas P
2009-03-15
Life cycle impact assessment (LCIA) involves weighing trade-offs between multiple and incommensurate criteria. Current state-of-the-art LCIA tools typically compute an overall environmental score using a linear-weighted aggregation of characterized inventory data that has been normalized relative to total industry, regional, or national emissions. However, current normalization practices risk masking impacts that may be significant within the context of the decision, albeit small relative to the reference data (e.g., total U.S. emissions). Additionally, uncertainty associated with quantification of weights is generally very high. Partly for these reasons, many LCA studies truncate impact assessment at the inventory characterization step, rather than completing normalization and weighting steps. This paper describes a novel approach called stochastic multiattribute life cycle impact assessment (SMA-LCIA) that combines an outranking approach to normalization with stochastic exploration of weight spaces, avoiding some of the drawbacks of current LCIA methods. To illustrate the new approach, SMA-LCIA is compared with a typical LCIA method for crop-based, fossil-based, and electric fuels using the Greenhouse gas Regulated Emissions and Energy Use in Transportation (GREET) model for inventory data and the Tool for the Reduction and Assessment of Chemical and other Environmental Impacts (TRACI) model for data characterization. In contrast to the typical LCIA case, in which results are dominated by fossil fuel depletion and global warming considerations regardless of criteria weights, the SMA-LCIA approach results in a rank ordering that is more sensitive to decision-maker preferences. The principal advantage of the SMA-LCIA method is the ability to facilitate exploration and construction of context-specific criteria preferences by simultaneously representing multiple weight spaces and the sensitivity of the rank ordering to uncertain stakeholder values.
Lee, Taehoon; Menlove, Howard O; Swinhoe, Martyn T; Tobin, Stephen J
2010-01-01
The differential die-away (DDA) technique has been simulated by using the MCNPX code to quantify its capability to measure the fissile content in spent fuel assemblies. For 64 different spent fuel cases with various initial enrichments, burnups, and cooling times, the count rate and signal-to-background ratios of the DDA system were obtained, where the neutron background comes mainly from the {sup 244}Cm of the spent fuel. To quantify the total fissile mass of spent fuel, a concept of the effective {sup 239}Pu mass was introduced by weighting the relative contribution to the signal of {sup 235}U and {sup 241}Pu compared to {sup 239}Pu, and the calibration curves of DDA count rate vs. {sup 239}Pu{sub eff} were obtained by using the MCNPX code. With a deuterium-tritium (DT) neutron generator of 10{sup 9} n/s strength, signal-to-background ratios of sufficient magnitude are acquired for a DDA system with the spent fuel assembly in water.
NASA Astrophysics Data System (ADS)
Polanski, A.; Barashenkov, V.; Puzynin, I.; Rakhno, I.; Sissakian, A.
A subcritical assembly driven by the existing 660 MeV JINR proton accelerator is considered. The assembly consists of a central cylindrical lead target surrounded by mixed-oxide (MOX) fuel (PuO2 + UO2) and a reflector made of beryllium. The dependence of the energetic gain on the proton energy, the neutron multiplication coefficient, and the neutron energy spectra have been calculated. It is shown that for a subcritical assembly with mixed-oxide (MOX) BN-600 fuel (28% PuO2 + 72% UO2) with an effective fuel material density of 9 g/cm3, the multiplication coefficient keff is equal to 0.945, the energetic gain is equal to 27, and the neutron flux density is 10^12 cm^-2 s^-1 for protons with an energy of 660 MeV and an accelerator beam current of 1 μA.
Monte Carlo methods on advanced computer architectures
Martin, W.R.
1991-12-31
Monte Carlo methods describe a wide class of computational methods that utilize random numbers to perform a statistical simulation of a physical problem, which itself need not be a stochastic process. For example, Monte Carlo can be used to evaluate definite integrals, which are not stochastic processes, or may be used to simulate the transport of electrons in a space vehicle, which is a stochastic process. The name Monte Carlo came about during the Manhattan Project to describe the new mathematical methods being developed which had some similarity to the games of chance played in the casinos of Monte Carlo. Particle transport Monte Carlo is just one application of Monte Carlo methods, and will be the subject of this review paper. Other applications of Monte Carlo, such as reliability studies, classical queueing theory, molecular structure, the study of phase transitions, or quantum chromodynamics calculations for basic research in particle physics, are not included in this review. The reference by Kalos is an introduction to general Monte Carlo methods and references to other applications of Monte Carlo can be found in this excellent book. For the remainder of this paper, the term Monte Carlo will be synonymous with particle transport Monte Carlo, unless otherwise noted. 60 refs., 14 figs., 4 tabs.
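The opening remark, that Monte Carlo can evaluate definite integrals which are not themselves stochastic processes, is easy to illustrate with a generic sketch (not taken from the review):

```python
import numpy as np

def mc_integrate(f, a, b, n, seed=0):
    """Plain Monte Carlo estimate of the definite integral of f on [a, b]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, n)
    fx = f(x)
    est = (b - a) * fx.mean()
    # One-sigma statistical uncertainty of the estimator, O(1/sqrt(n)).
    err = (b - a) * fx.std(ddof=1) / np.sqrt(n)
    return est, err

# Deterministic integral of x^2 on [0, 1]; exact value is 1/3.
est, err = mc_integrate(lambda x: x ** 2, 0.0, 1.0, 100_000)
```

The estimate agrees with 1/3 to within a few reported sigma, even though nothing stochastic is being modeled.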
Stochastic Optimization of Complex Systems
Birge, John R.
2014-03-20
This project focused on methodologies for the solution of stochastic optimization problems based on relaxation and penalty methods, Monte Carlo simulation, parallel processing, and inverse optimization. The main results of the project were the development of a convergent method for the solution of models that include expectation constraints as in equilibrium models, improvement of Monte Carlo convergence through the use of a new method of sample batch optimization, the development of new parallel processing methods for stochastic unit commitment models, and the development of improved methods in combination with parallel processing for incorporating automatic differentiation methods into optimization.
Controlled Stochastic Dynamical Systems
2007-04-18
the existence of value functions of two-player zero-sum stochastic differential games, Indiana Univ. Math. Journal, 38 (1989), pp. 293-314. [6] George ... control problems, Adv. Appl. Prob., 15 (1983), pp. 225-254. [10] Karatzas, I., Ocone, D., Wang, H. and Zervos, M., Finite fuel singular control with
Ponomarev, Artem L; Huff, Janice; Cucinotta, Francis A
2010-06-01
This work addresses the difficulty of counting merged DNA damage foci in high-LET (linear energy transfer) ion-induced patterns. The analysis of patterns of RIF (radiation-induced foci) produced by high-LET Fe and Ti ions was conducted by using a Monte Carlo model that combines the heavy ion track structure with characteristics of the human genome on the level of chromosomes. The foci patterns were also simulated in the maximum projection plane for flat nuclei. The model predicts the spatial and genomic distributions of DNA DSB (double-strand breaks) in a cell nucleus for a particular dose of radiation. We used the model to do analyses for three irradiation scenarios: (i) the ions were oriented perpendicular to the flattened nuclei in a cell culture monolayer; (ii) the ions were parallel to that plane; and (iii) a round nucleus. In the parallel scenario we found that the foci appeared to be merged due to their high density, while, in the perpendicular scenario, the foci appeared as one bright spot per hit. The statistics and spatial distribution of regions of densely arranged foci, termed DNA foci chains, were predicted numerically using this model. Another analysis was done to evaluate the number of ion hits per nucleus, which were visible from streaks of closely located foci. We showed that DSB clustering needs to be taken into account to determine the true DNA damage foci yield, which helps to determine the DSB yield. Using the model analysis, a researcher can refine the DSB yield per nucleus per particle. We showed that purely geometric artifacts, present in the experimental images, can be analytically resolved with the model, and that the quantisation of track hits and DSB yields can be provided to the experimentalists who use enumeration of radiation-induced foci in immunofluorescence experiments using proteins that detect DNA damage.
NASA Technical Reports Server (NTRS)
Jahshan, S. N.; Singleterry, R. C.
2001-01-01
The effect of random fuel redistribution on the eigenvalue of a one-speed reactor is investigated. An ensemble of such reactors that are identical to a homogeneous reference critical reactor except for the fissile isotope density distribution is constructed such that it meets a set of well-posed redistribution requirements. The average eigenvalue,
NASA Astrophysics Data System (ADS)
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-01
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance-rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
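The majorant-kernel acceptance-rejection idea above can be caricatured for a multiplicative coagulation kernel: a crude majorant K_hat = K(v_max, v_max), found in a single pass, bounds every pairwise rate, so candidate pairs are drawn uniformly and accepted with probability K/K_hat. This is an illustrative sketch of the acceptance-rejection step only, not the differentially-weighted scheme or GPU implementation of the paper.

```python
import numpy as np

def coagulation_step(v, rng, kernel=lambda a, b: a * b):
    """One acceptance-rejection coagulation event (multiplicative kernel).

    v: array of particle volumes. The majorant K_hat >= kernel(v_i, v_j)
    for all pairs lets us pick candidates cheaply and accept with
    probability kernel/K_hat, avoiding a double loop over all pairs."""
    n = len(v)
    k_hat = kernel(v.max(), v.max())   # majorant found by a single pass
    while True:
        i, j = rng.integers(n, size=2)
        if i == j:
            continue
        if rng.uniform() < kernel(v[i], v[j]) / k_hat:
            break
    v[i] += v[j]                       # merge j into i (mass conserved)
    return np.delete(v, j)

rng = np.random.default_rng(1)
v = np.ones(100)
total = v.sum()
for _ in range(30):
    v = coagulation_step(v, rng)
```

Each accepted event removes exactly one particle while conserving total volume, which is the invariant any such step must preserve.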
A multilevel stochastic collocation method for SPDEs
Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton
2015-03-10
We present a multilevel stochastic collocation method that, as do multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity is used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages compared to standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.
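The multilevel principle invoked above (common to multilevel Monte Carlo) rests on the telescoping identity E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}], with coarse and fine levels coupled by shared randomness so the corrections have small variance. A minimal multilevel Monte Carlo sketch (not the collocation method of the paper) for E[S_T] of geometric Brownian motion, with illustrative parameters:

```python
import numpy as np

def euler_gbm(s0, r, sig, T, n_steps, dW):
    """Euler-Maruyama path of dS = r*S*dt + sig*S*dW; returns S_T."""
    s, dt = s0, T / n_steps
    for k in range(n_steps):
        s += r * s * dt + sig * s * dW[k]
    return s

def mlmc_mean(s0=1.0, r=0.05, sig=0.2, T=1.0, L=4, n=5000, seed=0):
    """MLMC estimate of E[S_T] via the telescoping sum over levels,
    where level l uses 2**l Euler steps and the coarse path reuses the
    fine path's Brownian increments (pairwise summed)."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for l in range(L + 1):
        nf = 2 ** l
        dt = T / nf
        samples = np.empty(n)
        for m in range(n):
            dW = rng.normal(0, np.sqrt(dt), nf)
            pf = euler_gbm(s0, r, sig, T, nf, dW)
            if l == 0:
                samples[m] = pf
            else:
                dWc = dW.reshape(-1, 2).sum(axis=1)  # coupled coarse increments
                pc = euler_gbm(s0, r, sig, T, nf // 2, dWc)
                samples[m] = pf - pc                 # low-variance correction
        est += samples.mean()
    return est

est = mlmc_mean()
```

The exact answer is E[S_T] = exp(r*T); the correction variances shrink with level, which is what makes the hierarchy cheaper than sampling the finest level alone.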
An advanced deterministic method for spent fuel criticality safety analysis
DeHart, M.D.
1998-01-01
Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
Semistochastic Projector Monte Carlo Method
NASA Astrophysics Data System (ADS)
Petruzielo, F. R.; Holmes, A. A.; Changlani, Hitesh J.; Nightingale, M. P.; Umrigar, C. J.
2012-12-01
We introduce a semistochastic implementation of the power method to compute, for very large matrices, the dominant eigenvalue and expectation values involving the corresponding eigenvector. The method is semistochastic in that the matrix multiplication is partially implemented numerically exactly and partially stochastically with respect to expectation values only. Compared to a fully stochastic method, the semistochastic approach significantly reduces the computational time required to obtain the eigenvalue to a specified statistical uncertainty. This is demonstrated by the application of the semistochastic quantum Monte Carlo method to systems with a sign problem: the fermion Hubbard model and the carbon dimer.
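The variance-reduction mechanism behind the semistochastic split can be sketched with a toy estimator of v^T A v: apply a small set of important columns exactly and sample the rest. The matrix, the weight vector, and the deterministic/stochastic split below are all invented for illustration; the published algorithm is considerably more sophisticated and operates on the full eigenvector iteration:

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_matvec_quadratic(A, v, det_idx, n_samples, rng):
    # Estimate v^T A v. Columns in det_idx are applied exactly (the
    # deterministic part); the remaining columns are sampled with
    # probability ~ |v_j| and reweighted, keeping the estimate unbiased.
    y = A[:, det_idx] @ v[det_idx]
    rest = np.setdiff1d(np.arange(len(v)), det_idx)
    p = np.abs(v[rest]) / np.abs(v[rest]).sum()
    for j in rng.choice(len(rest), size=n_samples, p=p):
        y = y + A[:, rest[j]] * v[rest[j]] / (n_samples * p[j])
    return v @ y

n = 50
M = rng.random((n, n))
A = (M + M.T) / 2 + n * np.eye(n)            # symmetric test matrix
v = 1.0 / (1.0 + np.arange(n))
v /= np.linalg.norm(v)                       # weight concentrated up front
exact = v @ A @ v

semi = [stochastic_matvec_quadratic(A, v, np.arange(10), 20, rng)
        for _ in range(400)]                 # top-10 columns treated exactly
full = [stochastic_matvec_quadratic(A, v, np.arange(0), 20, rng)
        for _ in range(400)]                 # fully stochastic baseline
```

Both estimators are unbiased, but treating the heavily weighted columns exactly removes most of the sampling variance, which is the sense in which the semistochastic approach "significantly reduces the computational time required to obtain ... a specified statistical uncertainty."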
Boyarinov, V. F.; Davidenko, V. D.; Nevinitsa, V. A.; Tsibulsky, V. F.
2006-07-01
Verification of the SUHAM-U code has been carried out by calculating the two-dimensional benchmark experiment on the critical light-water facility VENUS-2. Comparisons with experimental data and with calculations by the Monte Carlo code UNK, using the same nuclear data library B645 for the basic isotopes, have been performed. Calculations of the two-dimensional facility were carried out using experimentally measured buckling values. The possibility of applying the SUHAM code to computations of PWR reactors with uranium and MOX fuel has been demonstrated. (authors)
Stochastic models: theory and simulation.
Field, Richard V., Jr.
2008-03-01
Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
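One standard algorithm of the kind this report describes is the spectral-representation method, which generates samples of a stationary Gaussian process from cosines with random phases. The box spectrum and all parameters below are arbitrary illustrative choices, not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_stationary_gaussian(t, S, w_max, n_modes, rng):
    # Spectral representation: X(t) = sum_k sqrt(2*S(w_k)*dw) * cos(w_k*t + phi_k)
    # with i.i.d. uniform phases phi_k; X is (asymptotically) Gaussian and
    # stationary with one-sided spectral density S.
    dw = w_max / n_modes
    w = (np.arange(n_modes) + 0.5) * dw
    phi = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    amp = np.sqrt(2.0 * S(w) * dw)
    return (amp[:, None] * np.cos(np.outer(w, t) + phi[:, None])).sum(axis=0)

# Example: low-pass spectrum S(w) = 1 for w < 2, so Var[X(t)] = integral of S = 2.
t = np.linspace(0.0, 50.0, 1001)
S = lambda w: np.where(w < 2.0, 1.0, 0.0)
paths = np.array([sample_stationary_gaussian(t, S, 2.0, 500, rng)
                  for _ in range(100)])
var_hat = paths.var()
```

Samples generated this way can then be fed as random inputs or boundary conditions to a deterministic simulation code, which is the workflow the report describes.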
Solan, Eilon; Vieille, Nicolas
2015-01-01
In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which were the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution. PMID:26556883
QB1 - Stochastic Gene Regulation
Munsky, Brian
2012-07-23
Summaries of this presentation are: (1) Stochastic fluctuations or 'noise' is present in the cell - Random motion and competition between reactants, Low copy, quantization of reactants, Upstream processes; (2) Fluctuations may be very important - Cell-to-cell variability, Cell fate decisions (switches), Signal amplification or damping, stochastic resonances; and (3) Some tools are available to model these - Kinetic Monte Carlo simulations (SSA and variants), Moment approximation methods, Finite State Projection. We will see how modeling these reactions can tell us more about the underlying processes of gene regulation.
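The kinetic Monte Carlo tools listed in (3) trace back to Gillespie's stochastic simulation algorithm (SSA). A minimal sketch for the simplest gene expression model, constitutive production plus first-order degradation, with rate constants invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def ssa_birth_death(k_prod, gamma, t_end, rng):
    # Gillespie SSA for 0 --k_prod--> m, m --gamma*m--> 0 (mRNA birth-death).
    t, m = 0.0, 0
    while True:
        a1, a2 = k_prod, gamma * m      # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)  # exponential waiting time to next event
        if t > t_end:
            return m
        if rng.random() < a1 / a0:      # pick which reaction fires
            m += 1
        else:
            m -= 1

# The stationary distribution is Poisson with mean k_prod/gamma = 10.
samples = [ssa_birth_death(10.0, 1.0, 20.0, rng) for _ in range(2000)]
mean_m, var_m = np.mean(samples), np.var(samples)
```

The Poisson stationary law (variance equal to mean) is exactly the low-copy-number quantization noise referred to in (1); deviations of the Fano factor var/mean from 1 are a standard diagnostic for upstream noise sources.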
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Lazopoulos, Achilleas
2006-07-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
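The estimator failure described here is easy to reproduce: applying the i.i.d. error formula to a low-discrepancy point set reports essentially the same "error" as for pseudo-random points, while the actual quasi-Monte Carlo error is far smaller. A sketch with a base-2 van der Corput sequence and an arbitrary smooth integrand (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def van_der_corput(n, base=2):
    # Radical-inverse (low-discrepancy) sequence in [0, 1); points are
    # evenly spread by construction and therefore NOT independent.
    out = np.empty(n)
    for i in range(1, n + 1):
        x, q, denom = i, 0.0, 1.0
        while x:
            x, r = divmod(x, base)
            denom *= base
            q += r / denom
        out[i - 1] = q
    return out

f = lambda x: x ** 2                # integral over [0, 1] is exactly 1/3
n = 4096

x_mc = rng.random(n)
mc_mean = f(x_mc).mean()
mc_err = f(x_mc).std(ddof=1) / np.sqrt(n)        # valid for i.i.d. points

x_qmc = van_der_corput(n)
qmc_mean = f(x_qmc).mean()
# The same formula applied to QMC points reports a similar number,
# but it does not reflect the true (much smaller) QMC error.
qmc_naive_err = f(x_qmc).std(ddof=1) / np.sqrt(n)
```

For this n the naive formula reports a similar uncertainty for both point sets, while the actual van der Corput error is well over an order of magnitude smaller; this mismatch is what motivates the discrepancy-based stochastic estimator the authors advocate.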
Fast approximate stochastic tractography.
Iglesias, Juan Eugenio; Thompson, Paul M; Liu, Cheng-Yi; Tu, Zhuowen
2012-01-01
Many different probabilistic tractography methods have been proposed in the literature to overcome the limitations of classical deterministic tractography: (i) lack of quantitative connectivity information; and (ii) lack of robustness to noise, partial volume effects and the selection of the seed region. However, these methods rely on Monte Carlo sampling techniques that are computationally very demanding. This study presents a fast approximate stochastic tractography algorithm (FAST) that can be used interactively, as opposed to having to wait several minutes to obtain the output after marking a seed region. In FAST, tractography is formulated as a Markov chain that relies on a transition tensor. The tensor is designed to mimic the features of a well-known probabilistic tractography method based on a random walk model and Monte Carlo sampling, but can also accommodate other propagation rules. Compared to the baseline algorithm, our method circumvents the sampling process and provides a deterministic solution at the expense of partially sacrificing sub-voxel accuracy. Therefore, the method is strictly speaking not stochastic, but provides a probabilistic output in the spirit of stochastic tractography methods. FAST was compared with the random walk model using real data from 10 patients in two different ways: 1. the probability maps produced by the two methods on five well-known fiber tracts were directly compared using metrics from the image registration literature; and 2. the connectivity measurements between different regions of the brain given by the two methods were compared using the correlation coefficient ρ. The results show that the connectivity measures provided by the two algorithms are well-correlated (ρ = 0.83), and so are the probability maps (normalized cross correlation 0.818 ± 0.081). The maps are also qualitatively (i.e., visually) very similar. The proposed method achieves a 60x speed-up (7 s vs. 7 min) over the Monte Carlo sampling scheme.
2015-11-24
This image from NASA 2001 Mars Odyssey spacecraft shows the northern margin of Tanaica Montes. These hills are cut by fractures, which are in alignment with the regional trend of tectonic faulting found east of Alba Mons. Orbit Number: 61129 Latitude: 40.1468 Longitude: 269.641 Instrument: VIS Captured: 2015-09-25 03:03
Comparing Several Robust Tests of Stochastic Equality.
ERIC Educational Resources Information Center
Vargha, Andras; Delaney, Harold D.
In this paper, six statistical tests of stochastic equality are compared with respect to Type I error and power through a Monte Carlo simulation. In the simulation, the skewness and kurtosis levels and the extent of variance heterogeneity of the two parent distributions were varied across a wide range. The sample sizes applied were either small or…
NASA Astrophysics Data System (ADS)
Eichhorn, Ralf; Aurell, Erik
2014-04-01
'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
2015-09-22
This VIS image shows where an impact created a crater on top of a group of ridges called Tanaica Montes. The slightly out-of-round shape and the distribution of the ejecta was likely all due to the pre-existing landforms. Orbit Number: 60555 Latitude: 39.6442 Longitude: 268.824 Instrument: VIS Captured: 2015-08-08 20:37 http://photojournal.jpl.nasa.gov/catalog/PIA19780
2002-11-23
This image by NASA Mars Odyssey spacecraft shows the rugged cratered highland region of Libya Montes, which forms part of the rim of an ancient impact basin called Isidis. This region of the highlands is fairly dissected with valley networks. There is still debate within the scientific community as to how valley networks themselves form: surface runoff (rainfall/snowmelt) or headward erosion via groundwater sapping. The degree of dissection here in this region suggests surface runoff rather than groundwater sapping. Small dunes are also visible on the floors of some of these channels. http://photojournal.jpl.nasa.gov/catalog/PIA04008
Stochastic-field cavitation model
Dumond, J.; Magagnato, F.; Class, A.
2013-07-15
Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
Stochastic kinetic mean field model
NASA Astrophysics Data System (ADS)
Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.
2016-07-01
This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to the lattice kinetic Monte Carlo (KMC) method. SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on the http://skmf.eu website). We will show that the result of one SKMF run may correspond to the average of several KMC runs. The number of KMC runs is inversely proportional to the square of the noise amplitude in SKMF. This makes SKMF an ideal tool also for statistical purposes.
Integrating simple stochastic fire spread model with the Regional Hydro-Ecological Simulation System
NASA Astrophysics Data System (ADS)
Kennedy, M. C.; McKenzie, D.
2012-12-01
Fire has an important role in watershed dynamics, and it is unclear how the interaction between fire and hydrological processes will be modified in a changing climate. Detailed landscape models of fire spread and fire effects require comprehensive data, are computationally intensive, and are subject to cumulative error from uncertainties in many parameters. In contrast, statistical models draw attributes such as extent, frequency, and severity a priori from selected distributions that are estimated from current data, implicitly assuming a stationary driving process that may not hold under climate change. We are designing a relatively simple stochastic model of fire spread (WMFire) that will be coupled with the Regional Hydro-Ecological Simulation System (RHESSys), for projecting the effects of climatic change on mountain watersheds. The model is an extension of exogenously constrained dynamic percolation (ECDP), wherein spread is controlled primarily by a spread probability from burning pixels, and which has been shown to have the capacity to identify dominant controls on cross-scale properties of low-severity fire regimes. Each year RHESSys will pass projected pixel-level values of fuel, fuel moistures, wind speed and wind direction to the fire spread model. Spread probabilities will then be calculated from the fuel load, fuel moisture, and orientation of the pixel relative to the slope gradient and wind direction. The stochastic structure of the spread model will subsume the uncertainties in future patterns of fire spread, fuels and climate. WMFire is being calibrated by and evaluated against current known fire regime properties for watersheds in the Pacific Northwest (USA) using Monte Carlo inference.
Stochastic solution to quantum dynamics
NASA Technical Reports Server (NTRS)
John, Sarah; Wilson, John W.
1994-01-01
The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends an earlier one-dimensional work. Also, by developing a new algorithm, the application to bound state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.
NASA Astrophysics Data System (ADS)
Ross, D. K.; Moreau, William
1995-08-01
We investigate stochastic gravity as a potentially fruitful avenue for studying quantum effects in gravity. Following the approach of stochastic electrodynamics (SED), as a representation of the quantum gravity vacuum we construct a classical state of isotropic random gravitational radiation, expressed as a spin-2 field h_{μν}(x), composed of plane waves of random phase on a flat spacetime manifold. Requiring Lorentz invariance leads to the result that the spectral composition function of the gravitational radiation, h(ω), must be proportional to 1/ω². The proportionality constant is determined by the Planck condition that the energy density consist of ħω/2 per normal mode, and this condition sets the amplitude scale of the random gravitational radiation at the order of the Planck length, giving a spectral composition function h(ω) = √(16π) c²L_P/ω². As an application of stochastic gravity, we investigate the Davies-Unruh effect. We calculate the two-point correlation function ⟨R_{i0j0}(O, τ − δτ/2) R_{k0l0}(O, τ + δτ/2)⟩ of the measurable geodesic deviation tensor field, R_{i0j0}, for two situations: (i) at a point detector uniformly accelerating through the random gravitational radiation, and (ii) at an inertial detector in a heat bath of the random radiation at a finite temperature. We find that the two correlation functions agree to first order in aδτ/c, provided that the temperature and acceleration satisfy the relation kT = ħa/2πc.
Parallel Markov chain Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ren, Ruichao; Orkoulas, G.
2007-06-01
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
Interaction picture density matrix quantum Monte Carlo
Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.
2015-07-28
The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.
NASA Technical Reports Server (NTRS)
2002-01-01
This image shows the rugged cratered highland region of Libya Montes. Libya Montes forms part of the rim of an ancient impact basin called Isidis. This region of the highlands is fairly dissected with valley networks. There is still debate within the scientific community as to how valley networks themselves form: surface runoff (rainfall/snowmelt) or headward erosion via groundwater sapping. The degree of dissection here in this region suggests surface runoff rather than groundwater sapping. Small dunes are also visible on the floors of some of these channels.
Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Planning under uncertainty solving large-scale stochastic linear programs
Infanger, G.
1992-12-01
For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
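The Monte Carlo sampling ingredient can be illustrated, without any decomposition machinery, by a sample average approximation of the smallest possible two-stage problem with recourse. The newsvendor-style model and all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# First stage: order quantity q at unit cost c before demand is known.
# Second stage (recourse): sell min(q, D) at price p once demand D is revealed.
c, p = 1.0, 3.0
demand = rng.gamma(shape=4.0, scale=25.0, size=20_000)  # sampled scenarios

def expected_profit(q, demand):
    # Sample average approximation of E[p*min(q, D)] - c*q over the scenarios.
    return p * np.minimum(q, demand).mean() - c * q

qs = np.linspace(0.0, 300.0, 601)
profits = np.array([expected_profit(q, demand) for q in qs])
q_saa = qs[profits.argmax()]   # approaches the true optimum as samples grow
```

For this toy problem the true optimizer is the (p − c)/p = 2/3 demand quantile (about 114 for this Gamma distribution), so the quality of the sampled solution can be checked directly; in the large structured programs the abstract addresses, importance sampling and decomposition replace this brute-force grid search.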
Master-equation approach to stochastic neurodynamics
NASA Astrophysics Data System (ADS)
Ohira, Toru; Cowan, Jack D.
1993-09-01
A master-equation approach to the stochastic neurodynamics proposed by Cowan [in Advances in Neural Information Processing Systems 3, edited by R. P. Lippman, J. E. Moody, and D. S. Touretzky (Morgan Kaufmann, San Mateo, 1991), p. 62] is investigated in this paper. We deal with a model neural network that is composed of two-state neurons obeying elementary stochastic transition rates. We show that such an approach yields concise expressions for multipoint moments and an equation of motion. We apply the formalism to a (1+1)-dimensional system. Exact and approximate expressions for various statistical parameters are obtained and compared with Monte Carlo simulations.
Monte Carlo Methods in the Physical Sciences
Kalos, M H
2007-06-06
I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
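The stopping-rule idea, sample until a confidence interval on an estimated value is tight enough, can be sketched independently of any optimization machinery. The cost distribution, batch size, and tolerance below are invented for illustration and are far simpler than the theory the abstract develops:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_until_tight(sample_cost, tol, batch=500, max_batches=200, z=1.96):
    # Draw scenario costs in batches until the 95% confidence-interval
    # half-width on the estimated mean cost falls below tol.
    costs = np.array([])
    for _ in range(max_batches):
        costs = np.append(costs, sample_cost(batch))
        half_width = z * costs.std(ddof=1) / np.sqrt(len(costs))
        if half_width < tol:
            break
    return costs.mean(), len(costs), half_width

est, n_used, half_width = sample_until_tight(
    lambda k: rng.lognormal(0.0, 0.5, k), tol=0.05)
```

The subtlety the abstract points to is that inside an optimization loop such intervals concern the optimality gap, not just a fixed mean, which is why sample-size selection and asymptotic validity need a dedicated theory.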
Parallel Monte Carlo Simulation for control system design
NASA Technical Reports Server (NTRS)
Schubert, Wolfgang M.
1995-01-01
The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.
Blaskiewicz, M.
2011-01-01
Stochastic Cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and ICE (Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed, and was applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.
Stochastic light-cone CTMRG: a new DMRG approach to stochastic models
NASA Astrophysics Data System (ADS)
Kemper, A.; Gendiar, A.; Nishino, T.; Schadschneider, A.; Zittartz, J.
2003-01-01
We develop a new variant of the recently introduced stochastic transfer matrix DMRG which we call stochastic light-cone corner-transfer-matrix DMRG (LCTMRG). It is a numerical method to compute dynamic properties of one-dimensional stochastic processes. As suggested by its name, the LCTMRG is a modification of the corner-transfer-matrix DMRG, adjusted by an additional causality argument. As an example, two reaction-diffusion models, the diffusion-annihilation process and the branch-fusion process are studied and compared with exact data and Monte Carlo simulations to estimate the capability and accuracy of the new method. The number of possible Trotter steps of more than 10^5 shows a considerable improvement on the old stochastic TMRG algorithm.
Di Giorgio, Laura; Giorgio, Laura Di; Flaxman, Abraham D; Moses, Mark W; Fullman, Nancy; Hanlon, Michael; Conner, Ruben O; Wollum, Alexandra; Murray, Christopher J L
2016-01-01
Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings.
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
Stochastic modeling of carbon oxidation
Chen, W.Y.; Kulkarni, A.; Milum, J.L.; Fan, L.T.
1999-12-01
Recent studies of carbon oxidation by scanning tunneling microscopy indicate that measured rates of carbon oxidation can be affected by randomly distributed defects in the carbon structure, which vary in size. Nevertheless, the impact of this observation on the analysis or modeling of the oxidation rate has not been critically assessed. This work focuses on the stochastic analysis of the dynamics of carbon clusters' conversions during the oxidation of a carbon sheet. According to the classic model of Nagle and Strickland-Constable (NSC), two classes of carbon clusters are involved in three types of reactions: gasification of basal-carbon clusters, gasification of edge-carbon clusters, and conversion of the edge-carbon clusters to the basal-carbon clusters due to the thermal annealing. To accommodate the dilution of basal clusters, however, the NSC model is modified for the later stage of oxidation in this work. Master equations governing the numbers of three classes of carbon clusters, basal, edge and gasified, are formulated from stochastic population balance. The stochastic pathways of three different classes of carbon during oxidation, that is, their means and the fluctuations around these means, have been numerically simulated independently by the algorithm derived from the master equations, as well as by an event-driven Monte Carlo algorithm. Both algorithms have given rise to identical results.
Stochastic analysis of dimerization systems.
Barzel, Baruch; Biham, Ofer
2009-09-01
The process of dimerization, in which two monomers bind to each other and form a dimer, is common in nature. This process can be modeled using rate equations, from which the average copy numbers of the reacting monomers and of the product dimers can then be obtained. However, the rate equations apply only when these copy numbers are large. In the limit of small copy numbers the system becomes dominated by fluctuations, which are not accounted for by the rate equations. In this limit one must use stochastic methods such as direct integration of the master equation or Monte Carlo simulations. These methods are computationally intensive and rarely admit analytical solutions. Here we use the recently introduced moment equations which provide a highly simplified stochastic treatment of the dimerization process. Using this approach, we obtain an analytical solution for the copy numbers and reaction rates both under steady-state conditions and in the time-dependent case. We analyze three different dimerization processes: dimerization without dissociation, dimerization with dissociation, and heterodimer formation. To validate the results we compare them with the results obtained from the master equation in the stochastic limit and with those obtained from the rate equations in the deterministic limit. Potential applications of the results in different physical contexts are discussed.
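In the small-copy-number limit, the master equation can be integrated directly over a truncated state space. The sketch below treats monomer production plus irreversible dimerization, an illustrative special case with invented rates, not the paper's moment-equation method:

```python
def dimerization_master(g, a, nmax=40, dt=1e-3, steps=30_000):
    """Explicit-Euler integration of the master equation for monomer
    production (rate g) and irreversible dimerization A + A -> A2
    (rate a per pair): dP(n)/dt = g[P(n-1) - P(n)]
    + (a/2)[(n+2)(n+1)P(n+2) - n(n-1)P(n)].  Returns P(n) near steady state."""
    P = [0.0] * (nmax + 1)
    P[0] = 1.0
    for _ in range(steps):
        dP = [0.0] * (nmax + 1)
        for n in range(nmax + 1):
            out = g * P[n] + 0.5 * a * n * (n - 1) * P[n]
            inflow = 0.0
            if n >= 1:
                inflow += g * P[n - 1]                       # production n-1 -> n
            if n + 2 <= nmax:
                inflow += 0.5 * a * (n + 2) * (n + 1) * P[n + 2]  # pair removal
            dP[n] = inflow - out
        dP[nmax] += g * P[nmax]   # reflecting cap at nmax conserves probability
        P = [p + dt * d for p, d in zip(P, dP)]
    return P
```

At stationarity the balance g = a E[n(n-1)] pins the factorial moment, so the stochastic mean differs from the rate-equation prediction sqrt(g/a) when copy numbers are small.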
Stochastic histories of refractory interstellar dust
NASA Technical Reports Server (NTRS)
Liffman, Kurt; Clayton, Donald D.
1988-01-01
Histories of refractory interstellar dust particles (IDPs) are calculated. The profile of a particle population is assembled from a large number of stochastic, or Monte Carlo, histories of single particles; the probabilities for each of the events that may befall a given particle are specified, and the particle's history is unfolded by a sequence of random numbers. The assumptions that are made and the techniques of the calculation are described together with the results obtained. Several technical demonstrations are presented.
Stochastic approximation boosting for incomplete data problems.
Sexton, Joseph; Laake, Petter
2009-12-01
Boosting is a powerful approach to fitting regression models. This article describes a boosting algorithm for likelihood-based estimation with incomplete data. The algorithm combines boosting with a variant of stochastic approximation that uses Markov chain Monte Carlo to deal with the missing data. Applications to fitting generalized linear and additive models with missing covariates are given. The method is applied to the Pima Indians Diabetes Data where over half of the cases contain missing values.
Primal and Dual Integrated Force Methods Used for Stochastic Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.
2005-01-01
At the NASA Glenn Research Center, the primal and dual integrated force methods are being extended for the stochastic analysis of structures. The stochastic simulation can be used to quantify the consequence of scatter in stress and displacement response because of a specified variation in input parameters such as load (mechanical, thermal, and support settling loads), material properties (strength, modulus, density, etc.), and sizing design variables (depth, thickness, etc.). All the parameters are modeled as random variables with given probability distributions, means, and covariances. The stochastic response is formulated through a quadratic perturbation theory, and it is verified through a Monte Carlo simulation.
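The verification step (perturbation moments against Monte Carlo) can be illustrated on a scalar response. The load and area statistics below are invented, and a first-order expansion is used for brevity rather than the full quadratic perturbation theory of the abstract.

```python
import random, math

def perturbation_vs_mc(mean_load=1000.0, sd_load=100.0,
                       mean_area=2.0, sd_area=0.1, n=200_000, seed=1):
    """Compare a first-order perturbation estimate of the scatter in
    stress = load / area with a direct Monte Carlo estimate.
    (Illustrative stand-in for the force-method structural responses.)"""
    rng = random.Random(seed)
    # first-order moments about the mean inputs
    mean_p = mean_load / mean_area
    var_p = (sd_load / mean_area) ** 2 \
          + (mean_load * sd_area / mean_area ** 2) ** 2
    # Monte Carlo verification: sample inputs, evaluate response directly
    samples = [rng.gauss(mean_load, sd_load) / rng.gauss(mean_area, sd_area)
               for _ in range(n)]
    mean_mc = sum(samples) / n
    var_mc = sum((s - mean_mc) ** 2 for s in samples) / (n - 1)
    return (mean_p, math.sqrt(var_p)), (mean_mc, math.sqrt(var_mc))
```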
Collisionally induced stochastic dynamics of fast ions in solids
Burgdoerfer, J.
1989-01-01
Recent developments in the theory of excited state formation in collisions of fast highly charged ions with solids are reviewed. We discuss a classical transport theory employing Monte Carlo sampling of solutions of a microscopic Langevin equation. Dynamical screening by the dielectric medium as well as multiple collisions are incorporated through the drift and stochastic forces in the Langevin equation. The close relationship between the extrinsically stochastic dynamics described by the Langevin equation and the intrinsic stochasticity in chaotic nonlinear dynamical systems is stressed. Comparison with experimental data and possible modifications by quantum corrections are discussed. 49 refs., 11 figs.
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions, and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged-particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged-particle transport through heterogeneous materials.
2012-08-01
AFRL-RX-WP-TP-2012-0397: Inverse Problem for Electromagnetic Propagation in a Dielectric Medium Using Markov Chain Monte Carlo Method (Preprint). ...a stochastic inverse methodology arising in electromagnetic imaging. Nondestructive testing using guided microwaves covers a wide range of
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
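The CVaR objective can be estimated directly from simulated cost samples by averaging the worst (1 - alpha) tail. The cost model below is an invented stand-in for an execution-cost simulation, not the paper's parametric strategy.

```python
import random

def cvar(costs, alpha=0.95):
    """Conditional Value-at-Risk: mean of the worst (1 - alpha) share of costs."""
    s = sorted(costs)
    tail = s[int(alpha * len(s)):]           # worst (1 - alpha) fraction
    return sum(tail) / len(tail)

def execution_cost_cvar(n=100_000, alpha=0.95, seed=7):
    """Monte Carlo estimate of the expected cost and CVaR for a toy
    execution-cost model (fixed impact plus Gaussian price noise)."""
    rng = random.Random(seed)
    costs = [1.0 + 0.5 * rng.gauss(0.0, 1.0) for _ in range(n)]
    return sum(costs) / n, cvar(costs, alpha)
```

In the parametric approach, such a simulation would be run inside an outer optimization over the coefficients of the strategy, with constraint violations folded in through penalty functions.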
NASA Technical Reports Server (NTRS)
Ponomarev, Artem; Cucinotta, F.
2011-01-01
To create a generalized mechanistic model of DNA damage in human cells that will generate analytical and image data corresponding to experimentally observed DNA damage foci and will help to improve the experimental foci yields by simulating spatial foci patterns and resolving problems with quantitative image analysis. Material and Methods: The analysis of patterns of RIFs (radiation-induced foci) produced by low- and high-LET (linear energy transfer) radiation was conducted by using a Monte Carlo model that combines the heavy ion track structure with characteristics of the human genome on the level of chromosomes. The foci patterns were also simulated in the maximum projection plane for flat nuclei. Some data analysis was done with the help of image segmentation software that identifies individual classes of RIFs and colocalized RIFs, which is of importance to some experimental assays that assign DNA damage a dual phosphorescent signal. Results: The model predicts the spatial and genomic distributions of DNA DSBs (double strand breaks) and associated RIFs in a human cell nucleus for a particular dose of either low- or high-LET radiation. We used the model to do analyses for different irradiation scenarios. In the beam-parallel-to-the-disk-of-a-flattened-nucleus scenario we found that the foci appeared to be merged due to their high density, while, in the perpendicular-beam scenario, the foci appeared as one bright spot per hit. The statistics and spatial distribution of regions of densely arranged foci, termed DNA foci chains, were predicted numerically using this model. Another analysis was done to evaluate the number of ion hits per nucleus, which were visible from streaks of closely located foci. In another analysis, our image segmentation software determined foci yields directly from images with single-class or colocalized foci. Conclusions: We showed that DSB clustering needs to be taken into account to determine the true DNA damage foci yield, which helps to
Fission Matrix Capability for MCNP Monte Carlo
Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.
2012-09-05
In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP [1], addresses these problems. When a Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we use the random walk information not only to build the next-iteration fission source, but also to build a spatially averaged fission kernel. Just as in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
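The power-iteration picture invoked above can be made concrete: repeatedly apply a (spatially discretized) fission matrix to a source guess and renormalize; the normalization factor converges to k-eff and the vector to the fundamental source. The 2x2 kernel below is a toy two-region example, not an MCNP tally.

```python
def power_iteration(F, n_iter=200):
    """Power iteration on a discretized fission matrix F: the limit of the
    renormalized source is the fundamental eigenvector, and the
    normalization factor converges to the fundamental eigenvalue (k_eff)."""
    src = [1.0] * len(F)
    k = 1.0
    for _ in range(n_iter):
        new = [sum(F[i][j] * src[j] for j in range(len(F)))
               for i in range(len(F))]
        k = sum(new) / sum(src)          # eigenvalue estimate
        src = [x / k for x in new]       # renormalized source
    return k, src

# toy two-region kernel: strong self-multiplication, weak coupling
F = [[0.6, 0.1],
     [0.1, 0.6]]
```

For this symmetric toy kernel the eigenvalues are 0.7 and 0.5, so the dominance ratio is 5/7 and convergence is fast; the abstract's point is that near-unity dominance ratios make this iteration slow, which the tallied fission matrix then accelerates.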
A heterogeneous stochastic FEM framework for elliptic PDEs
Hou, Thomas Y.; Liu, Pengfei
2015-01-15
We introduce a new concept of sparsity for the stochastic elliptic operator −div(a(x,ω)∇(⋅)), which reflects the compactness of its inverse operator in the stochastic direction and allows for spatially heterogeneous stochastic structure. This new concept of sparsity motivates a heterogeneous stochastic finite element method (HSFEM) framework for linear elliptic equations, which discretizes the equations using the heterogeneous coupling of spatial basis with local stochastic basis to exploit the local stochastic structure of the solution space. We also provide a sampling method to construct the local stochastic basis for this framework using randomized range-finding techniques. The resulting HSFEM involves two stages and suits the multi-query setting: in the offline stage, the local stochastic structure of the solution space is identified; in the online stage, the equation can be efficiently solved for multiple forcing functions. An online error estimation and correction procedure through Monte Carlo sampling is given. Numerical results for several problems with high dimensional stochastic input are presented to demonstrate the efficiency of the HSFEM in the online stage.
Molecular Motors and Stochastic Models
NASA Astrophysics Data System (ADS)
Lipowsky, Reinhard
The behavior of single molecular motors such as kinesin or myosin V, which move on linear filaments, involves a nontrivial coupling between the biochemical motor cycle and the stochastic movement. This coupling can be studied in the framework of nonuniform ratchet models which are characterized by spatially localized transition rates between the different internal states of the motor. These models can be classified according to their functional relationships between the motor velocity and the concentration of the fuel molecules. The simplest such relationship applies to two subclasses of models for dimeric kinesin and agrees with experimental observations on this molecular motor.
Brennan, J.M.; Blaskiewicz, M.M.; Severino, F.
2009-05-04
After the success of longitudinal stochastic cooling of a bunched heavy ion beam in RHIC, transverse stochastic cooling in the vertical plane of the Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.
Kalos, M.
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo Mathematical technique for calculating the ground state energy of the hydrogen atom.
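A minimal variational Monte Carlo calculation for the hydrogen ground state can be written in a few lines. This is an independent sketch in the same spirit (Metropolis sampling of |psi|^2 for the trial function psi = exp(-alpha*r), in atomic units), not the VARHATOM FORTRAN program itself.

```python
import random, math

def vmc_hydrogen(alpha=0.9, n_steps=200_000, step=0.5, seed=3):
    """Variational Monte Carlo for hydrogen with psi = exp(-alpha*r).
    The local energy is E_L = -alpha**2/2 + (alpha - 1)/r, so at the
    exact value alpha = 1 it is constant at -0.5 hartree."""
    rng = random.Random(seed)
    x, y, z, r = 1.0, 0.0, 0.0, 1.0
    e_sum, n_kept = 0.0, 0
    for i in range(n_steps):
        xn = x + rng.uniform(-step, step)
        yn = y + rng.uniform(-step, step)
        zn = z + rng.uniform(-step, step)
        rn = math.sqrt(xn * xn + yn * yn + zn * zn)
        # Metropolis acceptance with probability |psi_new / psi_old|**2
        if rng.random() < math.exp(-2.0 * alpha * (rn - r)):
            x, y, z, r = xn, yn, zn, rn
        if i > n_steps // 10:            # discard burn-in before averaging
            e_sum += -0.5 * alpha ** 2 + (alpha - 1.0) / r
            n_kept += 1
    return e_sum / n_kept
```

The variational theorem guarantees the estimate at alpha != 1 lies above -0.5; analytically the expectation is alpha**2/2 - alpha.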
Algebraic, geometric, and stochastic aspects of genetic operators
NASA Technical Reports Server (NTRS)
Foo, N. Y.; Bosworth, J. L.
1972-01-01
Genetic algorithms for function optimization employ genetic operators patterned after those observed in search strategies employed in natural adaptation. Two of these operators, crossover and inversion, are interpreted in terms of their algebraic and geometric properties. Stochastic models of the operators are developed which are employed in Monte Carlo simulations of their behavior.
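The two operators have simple list-level definitions, and a stochastic model of them is just a random choice of cut points. A minimal sketch (generic chromosomes as Python lists; the operator composition is illustrative):

```python
import random

def crossover(a, b, point):
    """One-point crossover: exchange the tails after `point`."""
    return a[:point] + b[point:], b[:point] + a[point:]

def inversion(chrom, i, j):
    """Inversion: reverse the segment chrom[i:j] in place on a copy."""
    return chrom[:i] + chrom[i:j][::-1] + chrom[j:]

def random_operator_step(a, b, rng=None):
    """Stochastic model of the operators: crossover at a random point,
    then inversion of a random segment of the first offspring."""
    rng = rng or random.Random(0)
    p = rng.randrange(1, len(a))
    c1, c2 = crossover(a, b, p)
    i = rng.randrange(len(c1))
    j = rng.randrange(i, len(c1) + 1)
    return inversion(c1, i, j), c2
```

Note the algebraic property a Monte Carlo run can check empirically: both operators preserve the multiset of genes across the pair.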
Microtubules: dynamically unstable stochastic phase-switching polymers
NASA Astrophysics Data System (ADS)
Zakharov, P. N.; Arzhanik, V. K.; Ulyanov, E. V.; Gudimchuk, N. B.; Ataullakhanov, F. I.
2016-08-01
One of the simplest molecular motors, a biological microtubule, is reviewed as an example of a highly nonequilibrium molecular machine capable of stochastic transitions between slow growth and rapid disassembly phases. Basic properties of microtubules are described, and various approaches to simulating their dynamics, from statistical chemical kinetics models to molecular dynamics models using the Metropolis Monte Carlo and Brownian dynamics methods, are outlined.
Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.
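The classical baseline that the quadratic quantum speedup is measured against is plain Monte Carlo integration, whose RMS error decays like n**-0.5:

```python
import random

def mc_integral(f, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]:
    the sample mean of f at n uniform points."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n
```

For f(x) = x**2 the exact value is 1/3; halving the error costs four times the samples classically, versus roughly twice the queries for the amplitude-estimation-style quantum approach.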
Stochastic and deterministic methods for the calculation of small-sample reactivity experiments
Gruel, A.; Leconte, P.
2011-07-01
Several calculation methods for the analysis of small-sample reactivity experiments are presented, as well as their main advantages and drawbacks. A numerical benchmark has been defined for this study, consisting of a regular lattice of UO{sub 2} fuel pins in which the central one is successively poisoned with isotopes of interest (actinides, absorbers, ...). A first method, based on a forward calculation ('eigenvalues difference method'), is presented, using either a deterministic or a stochastic calculation code. A first perturbative method studied is based on the 'Exact Perturbation Theory' (EPT) implemented in the deterministic code APOLLO2.8, and gives consistent results against forward calculations. A second perturbative method, the 'correlated sampling method', implemented in the stochastic calculation code TRIPOLI4.7, is tested. It should be used carefully, as it is generally validated against small atomic density changes, but it can be useful for design studies. A 'hybrid method', based on the EPT formalism and using both Monte Carlo and deterministic results, is tested, and shows reliable results. (authors)
Tuthill, Richard S; Davis, Dustin W; Dai, Zhongtao
2015-02-03
A disclosed fuel injector provides mixing of fuel with airflow by surrounding a swirled fuel flow with first and second swirled airflows that ensures mixing prior to or upon entering the combustion chamber. Fuel tubes produce a central fuel flow along with a central airflow through a plurality of openings to generate the high velocity fuel/air mixture along the axis of the fuel injector in addition to the swirled fuel/air mixture.
Stochastic differential equations
Sobczyk, K.
1990-01-01
This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed; in particular, insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.
Stochastic Collocation Method for Three-dimensional Groundwater Flow
NASA Astrophysics Data System (ADS)
Shi, L.; Zhang, D.
2008-12-01
The stochastic collocation method (SCM) has recently gained extensive attention in several disciplines. The numerical implementation of SCM only requires repetitive runs of an existing deterministic solver or code, as in Monte Carlo simulation, but it is generally much more efficient than the Monte Carlo method. In this paper, the stochastic collocation method is used to efficiently quantify the uncertainty of three-dimensional groundwater flow. We introduce the basic principles of common collocation methods, i.e., the tensor product collocation method (TPCM), Smolyak collocation method (SmCM), Stroud-2 collocation method (StCM), and probability collocation method (PCM). Their accuracy, computational cost, and limitations are discussed. Illustrative examples reveal that the seamless combination of collocation techniques and existing simulators enables the new framework to handle complex stochastic problems efficiently.
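A one-dimensional sketch of the collocation idea: for a single standard-normal input, a three-point Gauss-Hermite rule reuses an unmodified deterministic solver at just three nodes. The `solver` below is a hypothetical one-line stand-in for a groundwater code, and the hard-coded nodes/weights are the standard three-point rule (exact for polynomials up to degree 5).

```python
import math

# three-point Gauss-Hermite rule for a standard normal parameter
NODES = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
WEIGHTS = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def collocation_mean(solver):
    """Stochastic collocation: run the deterministic solver once per node
    and combine the outputs with quadrature weights -- no modification of
    the solver, and far fewer runs than Monte Carlo when the response is
    smooth in the random parameter."""
    return sum(w * solver(x) for x, w in zip(NODES, WEIGHTS))

# toy "solver": a head-like response for log-normal conductivity exp(0.5*xi)
solver = lambda xi: 1.0 / math.exp(0.5 * xi)
```

For this toy response the exact mean is exp(0.125); three solver runs already reproduce it to about four digits, whereas Monte Carlo would need millions of runs for comparable accuracy.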
Stochastic symmetries of Wick type stochastic ordinary differential equations
NASA Astrophysics Data System (ADS)
Ünal, Gazanfer
2015-04-01
We consider Wick type stochastic ordinary differential equations with Gaussian white noise. We define the stochastic symmetry transformations and Lie equations in the Kondratiev space (S)_{-1}^N. We derive the determining system of Wick type stochastic partial differential equations with Gaussian white noise. Stochastic symmetries for the stochastic Bernoulli, Riccati and general stochastic linear equations in (S)_{-1}^N are obtained. A stochastic version of canonical variables is also introduced.
Dynamic Response Analysis of Fuzzy Stochastic Truss Structures under Fuzzy Stochastic Excitation
NASA Astrophysics Data System (ADS)
Ma, Juan; Chen, Jian-Jun; Gao, Wei
2006-08-01
A novel method (Fuzzy factor method) is presented, which is used in the dynamic response analysis of fuzzy stochastic truss structures under fuzzy stochastic step loads. Considering the fuzzy randomness of structural physical parameters, geometric dimensions and the amplitudes of step loads simultaneously, fuzzy stochastic dynamic response of the truss structures is developed using the mode superposition method and fuzzy factor method. The fuzzy numerical characteristics of dynamic response are then obtained by using the random variable’s moment method and the algebra synthesis method. The influences of the fuzzy randomness of structural physical parameters, geometric dimensions and step load on the fuzzy randomness of the dynamic response are demonstrated via an engineering example, and Monte-Carlo method is used to simulate this example, verifying the feasibility and validity of the modeling and method given in this paper.
NASA Astrophysics Data System (ADS)
Sheehan, T.; Bachelet, D. M.; Ferschweiler, K.
2015-12-01
The MC2 dynamic global vegetation model fire module simulates fire occurrence, area burned, and fire impacts including mortality, biomass burned, and nitrogen volatilization. Fire occurrence is based on fuel load levels and vegetation-specific thresholds for three calculated fire weather indices: fine fuel moisture code (FFMC) for the moisture content of fine fuels; build-up index (BUI) for the total amount of fuel available for combustion; and energy release component (ERC) for the total energy available to fire. Ignitions are assumed (i.e. the probability of an ignition source is 1). The model is run with gridded inputs and the fraction of each grid cell burned is limited by a vegetation-specific fire return period (FRP) and the number of years since the last fire occurred in the grid cell. One consequence of the assumed-ignition FRP constraint is that similar fire behavior can take place over large areas with identical vegetation type. In regions where thresholds are often exceeded, fires occur frequently (annually in some instances) with a very low fraction of a cell burned. In areas where fire is infrequent, a single hot, dry climate event can result in intense fire over a large region. Both cases can potentially result in large areas with uniform vegetation type and age. To better reflect realistic fire occurrence, we have developed a stochastic fire occurrence model that: a) uses a map of relative ignition probability and a multiplier to alter overall ignition occurrence; b) adjusts the original fixed fire thresholds with ignition success probabilities based on fire weather indices; and c) calculates spread by using a probability based on slope and wind direction. A Monte Carlo method is used with all three algorithms to determine occurrence. The new stochastic ignition approach yields more variety in fire intensity, a smaller annual total of cells burned, and patchier vegetation.
HTGR Unit Fuel Pebble k-infinity Results Using Chord Length Sampling
T.J. Donovan; Y. Danon
2003-06-16
There is considerable interest in transport models that will permit the simulation of neutral particle transport through stochastic mixtures. Chord length sampling techniques that simulate particle transport through binary stochastic mixtures consisting of spheres randomly arranged in a matrix have been implemented in several Monte Carlo codes [1-3]. Though the use of these methods is growing, their accuracy and efficiency have not yet been thoroughly demonstrated for an application of particular interest: a high-temperature gas reactor fuel pebble element. This paper presents comparison results of k-infinity calculations performed on a LEUPRO-1 pebble cell. Results are generated using a chord length sampling method implemented in a test version of MCNP [3]. This Limited Chord Length Sampling (LCLS) method eliminates the need to model the details of the micro-heterogeneity of the pebble. Results are also computed for an explicit pebble model where the TRISO fuel particles within the pebble are randomly distributed. Finally, the heterogeneous matrix region of the pebble cell is homogenized based simply on volume fractions. These three results are compared to results reported by Johnson et al [4], and duplicated here, using a cubic lattice representation of the TRISO fuel particles. Figures of Merit for the four k-infinity calculations are compared to judge relative efficiencies.
A Monte Carlo approach to water management
NASA Astrophysics Data System (ADS)
Koutsoyiannis, D.
2012-04-01
Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that they may make the obtained results irrelevant to real-world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization. The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs
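The two-level Monte Carlo idea (stochastic simulation of the system, then stochastic optimization over a small set of control variables) can be sketched with a single toy reservoir. The inflow statistics, capacity, and reliability target below are invented, and the "optimization" is a crude scan over candidate releases.

```python
import random

def simulate(release, capacity=100.0, years=1000, seed=11):
    """Simulate one reservoir under synthetic random inflows and a constant
    target release; return the fraction of years the demand is fully met.
    (A deliberately minimal stand-in for a faithful hydrosystem simulator.)"""
    rng = random.Random(seed)
    storage, met = 50.0, 0
    for _ in range(years):
        inflow = max(0.0, rng.gauss(30.0, 15.0))   # stochastic inflow, no negatives
        storage = min(capacity, storage + inflow)  # spill above capacity
        if storage >= release:
            storage -= release
            met += 1
        else:
            storage = 0.0                          # supply whatever is available
    return met / years

def best_release(candidates, reliability=0.9):
    """Outer stochastic-optimization step: the largest candidate release
    whose simulated reliability stays above the target."""
    ok = [r for r in candidates if simulate(r) >= reliability]
    return max(ok) if ok else None
```

In a real application the inner simulator would respect the full physical constraints of the hydrosystem, and the outer search would be a proper stochastic optimizer over the parameterized operating rule.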
Markov chain Monte Carlo without likelihoods.
Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon
2003-12-23
Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
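The likelihood-free acceptance rule can be sketched in a few lines: propose, simulate a dataset under the proposal, and accept only when a summary statistic of the simulated data falls within a tolerance of the observed one. The sketch assumes a flat prior, a symmetric random-walk proposal, and a toy normal-mean model; none of these specifics come from the paper.

```python
import random

def abc_mcmc(data_mean, n_obs, n_iter=20_000, eps=0.05, seed=2):
    """Likelihood-free MCMC: instead of evaluating a likelihood, simulate
    data from the model at the proposed parameter and accept when the
    simulated summary statistic is within eps of the observed one.
    Model: normal with unknown mean and known unit variance."""
    rng = random.Random(seed)
    theta, chain = 0.0, []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, 0.5)   # symmetric random-walk proposal
        # forward simulation replaces the likelihood evaluation
        sim_mean = sum(rng.gauss(prop, 1.0) for _ in range(n_obs)) / n_obs
        if abs(sim_mean - data_mean) < eps:  # distance on the summary statistic
            theta = prop                     # accept (flat prior, symmetric proposal)
        chain.append(theta)                  # rejected moves repeat the old state
    return chain
```

Shrinking eps tightens the approximation to the true posterior at the cost of a lower acceptance rate.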
A Stochastic Diffusion Process for the Dirichlet Distribution
Bakosi, J.; Ristorcelli, J. R.
2013-03-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
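The univariate case can be illustrated with an Euler-Maruyama ensemble for a Jacobi (Wright-Fisher-type) diffusion whose invariant density is a beta distribution. The drift/noise parametrization below is a standard textbook form chosen for illustration, not necessarily the paper's; the boundary clipping is a crude numerical device for the rare Euler overshoot.

```python
import random, math

def jacobi_paths(theta, mu, sigma, n_paths=1000, t_end=10.0, dt=0.01, seed=4):
    """Euler-Maruyama ensemble for dX = theta*(mu - X) dt
    + sigma*sqrt(X*(1-X)) dW on [0, 1], whose invariant density is
    Beta(2*theta*mu/sigma**2, 2*theta*(1-mu)/sigma**2).  The multiplicative
    noise vanishes at both boundaries, which keeps samples bounded."""
    rng = random.Random(seed)
    xs = [0.5] * n_paths
    for _ in range(int(t_end / dt)):
        for i in range(n_paths):
            x = xs[i]
            x += theta * (mu - x) * dt \
                 + sigma * math.sqrt(max(x * (1.0 - x), 0.0) * dt) * rng.gauss(0.0, 1.0)
            xs[i] = min(max(x, 0.0), 1.0)   # clip the rare Euler overshoot
    return xs
```

Evolving the ensemble long enough, the sample mean approaches the beta mean mu, mirroring the paper's Monte Carlo test of convergence to the invariant distribution.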
Application of stochastic robustness to aircraft control systems
NASA Technical Reports Server (NTRS)
Stengel, Robert F.; Ryan, Laura E.
1989-01-01
Stochastic robustness, a simple numerical procedure for estimating the stability robustness of linear, time-invariant systems, is applied to a forward-swept-wing aircraft control system. Based on Monte Carlo evaluation of the system's closed-loop eigenvalues, this analysis approach introduces the probability of instability as a scalar stability robustness measure. The related stochastic root locus provides insight into robustness characteristics of the closed-loop system. Three Linear Quadratic controllers of decreasing robustness are chosen to demonstrate the use of stochastic robustness to analyze and compare control designs. Examples are presented illustrating the use of stochastic robustness analysis to address the effects of actuator dynamics and unmodeled dynamics on the stability robustness of the forward-swept-wing aircraft.
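The probability-of-instability estimate can be sketched with a toy second-order closed loop: sample the uncertain parameters, evaluate the closed-loop eigenvalues, and count draws that cross into the right half plane. The parameter statistics below are invented; the actual study uses the forward-swept-wing model.

```python
import random, math

def max_real_eig(c, k):
    """Largest real part among the eigenvalues of x'' + c x' + k x = 0
    (the 2x2 companion matrix of a second-order closed loop)."""
    disc = c * c - 4.0 * k
    if disc >= 0.0:
        return (-c + math.sqrt(disc)) / 2.0   # real eigenvalue pair
    return -c / 2.0                           # complex pair: shared real part

def instability_probability(n=20_000, seed=6):
    """Monte Carlo stochastic-robustness estimate: sample uncertain
    damping c ~ N(0.5, 0.3) and stiffness k ~ N(2.0, 0.7), and count
    draws with an eigenvalue in the right half plane."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if max_real_eig(rng.gauss(0.5, 0.3), rng.gauss(2.0, 0.7)) > 0.0)
    return hits / n
```

For this second-order system the count agrees with the Routh-Hurwitz condition (stable iff c > 0 and k > 0), which gives a useful cross-check on the sampled estimate.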
Lenormand, Thomas; Roze, Denis; Rousset, François
2009-03-01
The debate over the role of stochasticity is central in evolutionary biology, often summarised by whether or not evolution is predictable or repeatable. Here we distinguish three types of stochasticity: stochasticity of mutation and variation, of individual life histories and of environmental change. We then explain when stochasticity matters in evolution, distinguishing four broad situations: stochasticity contributes to maladaptation or limits adaptation; it drives evolution on flat fitness landscapes (evolutionary freedom); it might promote jumps from one fitness peak to another (evolutionary revolutions); and it might shape the selection pressures themselves. We show that stochasticity, by directly steering evolution, has become an essential ingredient of evolutionary theory beyond the classical Wright-Fisher or neutralist-selectionist debates.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
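One of the listed fundamentals, random sampling of the distance to collision, reduces to inverse-transform sampling of an exponential law (a generic textbook construction, not an excerpt from the lecture notes):

```python
import random, math

def distance_to_collision(sigma_t, rng):
    """Inverse-transform sampling of the free-flight distance: from
    P(d > s) = exp(-sigma_t * s), setting d = -ln(xi)/sigma_t for a
    uniform xi in (0, 1] reproduces the exponential law exactly."""
    return -math.log(rng.random() or 1e-300) / sigma_t  # guard against xi == 0

def mean_free_path(sigma_t, n=100_000, seed=9):
    """Sample mean of the flight distance, which should approach 1/sigma_t."""
    rng = random.Random(seed)
    return sum(distance_to_collision(sigma_t, rng) for _ in range(n)) / n
```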
Monte Carlo algorithms for Brownian phylogenetic models.
Horvilleur, Benjamin; Lartillot, Nicolas
2014-11-01
Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org.
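Filling in a branch conditioned on the states at its two ends is a standard Brownian-bridge construction; the sketch below is that generic recursion, not the phylobayes implementation.

```python
import random, math

def brownian_bridge(x0, x1, t, n, rng):
    """Sample a discretized Brownian path on [0, t] conditioned on the
    endpoint values x0 and x1 (the node states), with n interior points
    at uniform spacing: the fine-grained trajectory along one branch."""
    path = [x0]
    x, remaining = x0, t
    for i in range(n):
        dt = remaining / (n + 1 - i)
        # conditional law of the next point given the fixed far endpoint x1
        mean = x + (x1 - x) * dt / remaining
        var = dt * (remaining - dt) / remaining
        x = rng.gauss(mean, math.sqrt(var))
        path.append(x)
        remaining -= dt
    path.append(x1)
    return path
```

Resampling such bridges branch by branch (and resampling node states across branches) is the kind of path update the abstract's MCMC sampler alternates with substitution-probability integration.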
Monte Carlo algorithms for Brownian phylogenetic models
Horvilleur, Benjamin; Lartillot, Nicolas
2014-01-01
Motivation: Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. Results: A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. Availability: The program is freely available at www.phylobayes.org Contact: nicolas.lartillot@univ-lyon1.fr PMID:25053744
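The fine-grained discretization of the Brownian trajectory between fixed node values can be pictured with a minimal Brownian-bridge sampler. This is a generic sketch under assumed parameters, not the PhyloBayes implementation:

```python
import math
import random

def sample_bridge(x0, x1, T, n, sigma, rng):
    """Sample a Brownian path at n equally spaced interior times on [0, T],
    pinned at x0 and x1, by conditioning each new point on the previous
    sample and the fixed right endpoint (a Brownian bridge)."""
    dt = T / (n + 1)
    path = [x0]
    t, x = 0.0, x0
    for _ in range(n):
        remaining = T - t
        mean = x + (x1 - x) * dt / remaining
        var = sigma ** 2 * dt * (remaining - dt) / remaining
        x = rng.gauss(mean, math.sqrt(var))
        path.append(x)
        t += dt
    path.append(x1)
    return path

rng = random.Random(0)
path = sample_bridge(0.0, 1.0, T=1.0, n=9, sigma=0.5, rng=rng)
print(len(path), path[0], path[-1])  # endpoints stay pinned
```

With `sigma = 0` the sampler degenerates to linear interpolation between the endpoints, which is a quick sanity check on the conditioning formulas.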
Stochastic Simulation Tool for Aerospace Structural Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F.; Moore, David F.
2006-01-01
Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly being able to identify design input variables whose variability causes the most influence in response output parameters.
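The cause-and-effect workflow described above — scatter the design inputs, rerun the model, observe the scatter in the responses — can be sketched with a toy surrogate. Here a closed-form cantilever tip deflection stands in for the finite element model, and all input distributions are assumed:

```python
import random

def tip_deflection(P, L, E, I):
    """Closed-form cantilever tip deflection, a stand-in for an FE response."""
    return P * L ** 3 / (3.0 * E * I)

rng = random.Random(0)
N = 20_000
out = []
for _ in range(N):
    E = rng.gauss(70e9, 3.5e9)    # 5% scatter in modulus (assumed)
    P = rng.gauss(1000.0, 50.0)   # 5% scatter in load (assumed)
    out.append(tip_deflection(P, 2.0, E, 1.0e-6))
mean = sum(out) / N
spread = (sum((x - mean) ** 2 for x in out) / N) ** 0.5
print(f"mean deflection {mean:.4f} m, std {spread:.4f} m")
```

Ranking which input's scatter drives the response scatter (as the tool does) would follow by correlating each sampled input against the sampled output.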
Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming
Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-05-23
This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that the stochastic approach provides a conservative yet more lucrative battery schedule, yielding lower expected energy bills given fuel cell outages, with potential savings exceeding 6%.
Stochastic simulations of genetic switch systems.
Loinger, Adiel; Lipshtat, Azi; Balaban, Nathalie Q; Biham, Ofer
2007-02-01
Genetic switch systems with mutual repression of two transcription factors are studied using deterministic methods (rate equations) and stochastic methods (the master equation and Monte Carlo simulations). These systems exhibit bistability, namely two stable states such that spontaneous transitions between them are rare. Induced transitions may take place as a result of an external stimulus. We study several variants of the genetic switch and examine the effects of cooperative binding, exclusive binding, protein-protein interactions, and degradation of bound repressors. We identify the range of parameters in which bistability takes place, enabling the system to function as a switch. Numerous studies have concluded that cooperative binding is a necessary condition for the emergence of bistability in these systems. We show that a suitable combination of network structure and stochastic effects gives rise to bistability even without cooperative binding. The average time between spontaneous transitions is evaluated as a function of the biological parameters.
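The Monte Carlo side of such a study can be pictured with a minimal Gillespie simulation of a reduced mutual-repression switch. The propensities below fold repression into the production rates as a toy effective model, not the paper's full reaction scheme, and all rate constants are assumed:

```python
import random

def gillespie_toggle(t_end, rng, g=50.0, d=1.0, K=10.0):
    """Minimal Gillespie SSA for a reduced mutual-repression switch: protein A
    is produced at rate g / (1 + B/K), B at rate g / (1 + A/K), and each
    molecule degrades at rate d. Rates are toy values, not the paper's."""
    A, B, t = 0, 0, 0.0
    while t < t_end:
        rates = [g / (1.0 + B / K), g / (1.0 + A / K), d * A, d * B]
        total = sum(rates)
        t += rng.expovariate(total)   # time to next reaction event
        r = rng.random() * total      # pick which reaction fires
        if r < rates[0]:
            A += 1
        elif r < rates[0] + rates[1]:
            B += 1
        elif r < rates[0] + rates[1] + rates[2]:
            A -= 1
        else:
            B -= 1
    return A, B

rng = random.Random(1)
A, B = gillespie_toggle(200.0, rng)
print(A, B)  # final copy numbers of the two proteins
```

Variants such as exclusive binding or degradation of bound repressors would enter as additional species and reaction channels in the same loop.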
Stochastic robustness of linear control systems
NASA Technical Reports Server (NTRS)
Stengel, Robert F.; Ryan, Laura E.
1990-01-01
A simple numerical procedure for estimating the stochastic robustness of a linear, time-invariant system is described. Monte Carlo evaluation of the system's eigenvalues allows the probability of instability and the related stochastic root locus to be estimated. This definition of robustness is an alternative to existing deterministic definitions that address both structured and unstructured parameter variations directly. This analysis approach treats not only Gaussian parameter uncertainties but non-Gaussian cases, including uncertain-but-bounded variations. Trivial extensions of the procedure admit alternate discriminants to be considered. Thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions also can be estimated. Results are particularly amenable to graphical presentation.
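The Monte Carlo estimate of the probability of instability can be sketched for a 2x2 system, where stability reduces to trace and determinant conditions. The nominal system and the Gaussian uncertainty model below are assumptions for illustration, not from the paper:

```python
import random

def is_stable(a11, a12, a21, a22):
    """A 2x2 continuous-time LTI system is stable iff trace < 0 and det > 0,
    which places both eigenvalues in the open left half plane."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    return tr < 0.0 and det > 0.0

rng = random.Random(7)
N = 50_000
unstable = 0
for _ in range(N):
    # Nominal A = [[-1, 2], [a21, -3]] with assumed Gaussian uncertainty on
    # the coupling a21 (mean 0, std 1); the system loses stability for a21 > 1.5.
    a21 = rng.gauss(0.0, 1.0)
    if not is_stable(-1.0, 2.0, a21, -3.0):
        unstable += 1
print(unstable / N)  # analytic value here: 1 - Phi(1.5), about 0.067
```

Swapping the Gaussian draw for a uniform one handles the uncertain-but-bounded case with no other change, which is the flexibility the abstract emphasizes.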
Characterizing model uncertainties in the life cycle of lignocellulose-based ethanol fuels.
Spatari, Sabrina; MacLean, Heather L
2010-11-15
Renewable and low carbon fuel standards being developed at federal and state levels require an estimation of the life cycle carbon intensity (LCCI) of candidate fuels that can substitute for gasoline, such as second generation bioethanol. Estimating the LCCI of such fuels with a high degree of confidence requires the use of probabilistic methods to account for known sources of uncertainty. We construct life cycle models for the bioconversion of agricultural residue (corn stover) and energy crops (switchgrass) and explicitly examine uncertainty using Monte Carlo simulation. Using statistical methods to identify significant model variables from public data sets and Aspen Plus chemical process models, we estimate stochastic life cycle greenhouse gas (GHG) emissions for the two feedstocks combined with two promising fuel conversion technologies. The approach can be generalized to other biofuel systems. Our results show potentially high and uncertain GHG emissions for switchgrass-ethanol due to uncertain CO₂ flux from land use change and N₂O flux from N fertilizer. However, corn stover-ethanol, with its low-in-magnitude, tight-in-spread LCCI distribution, shows considerable promise for reducing life cycle GHG emissions relative to gasoline and corn-ethanol. Coproducts are important for reducing the LCCI of all ethanol fuels we examine.
Scattering of light by stochastically rough particles
NASA Technical Reports Server (NTRS)
Peltoniemi, Jouni I.; Lumme, Kari; Muinonen, Karri; Irvine, William M.
1989-01-01
The single particle phase function and the linear polarization for large stochastically deformed spheres have been calculated by Monte Carlo simulation using the geometrical optics approximation. The radius vector of a particle is assumed to obey a bivariate lognormal distribution with three free parameters: mean radius, its standard deviation and the coherence length of the autocorrelation function. All reflections/refractions which include sufficient energy have been included. Real and imaginary parts of the refractive index can be varied without any restrictions. Results and comparisons with some earlier less general theories are presented. Applications of this theory to the photometric properties of atmosphereless bodies and interplanetary dust are discussed.
The isolation limits of stochastic vibration
NASA Technical Reports Server (NTRS)
Knopse, C. R.; Allaire, P. E.
1993-01-01
The vibration isolation problem is formulated as a 1D kinematic problem. The geometry of the stochastic wall trajectories arising from the stroke constraint is defined in terms of their significant extrema. An optimal control solution for the minimum acceleration return path determines a lower bound on platform mean square acceleration. This bound is expressed in terms of the probability density function on the significant maxima and the conditional fourth moment of the first passage time inverse. The first of these is found analytically while the second is found using a Monte Carlo simulation. The rms acceleration lower bound as a function of available space is then determined through numerical quadrature.
Stochastic longshore current dynamics
NASA Astrophysics Data System (ADS)
Restrepo, Juan M.; Venkataramani, Shankar
2016-12-01
We develop a stochastic parametrization, based on a 'simple' deterministic model for the dynamics of steady longshore currents, that produces ensembles that are statistically consistent with field observations of these currents. Unlike deterministic models, a stochastic parametrization incorporates randomness and hence can only match the observations in a statistical sense. Unlike statistical emulators, in which the model is tuned to the statistical structure of the observations, stochastic parametrizations are not directly tuned to match the statistics of the observations. Rather, stochastic parametrization combines deterministic, i.e., physics-based, models with stochastic models for the "missing physics" to create hybrid models that are stochastic but can still be used for making predictions, especially in the context of data assimilation. We introduce a novel measure of the utility of stochastic models of complex processes, which we call consistency of sensitivity. A model with poor consistency of sensitivity requires a great deal of parameter tuning and has only a very narrow range of parameter values leading to outcomes consistent with a reasonable spectrum of physical outcomes. We apply this metric to our stochastic parametrization and show that the loss of certainty inherent in the model, due to its stochastic nature, is offset by the model's resulting consistency of sensitivity. In particular, the stochastic model still retains the forward sensitivity of the deterministic model and hence respects important structural/physical constraints, yet has a broader range of parameters capable of producing outcomes consistent with the field data used in evaluating the model. This leads to an expanded range of model applicability. We show, in the context of data assimilation, that the stochastic parametrization of longshore currents achieves good results in capturing the statistics of observations that were not used in tuning the model.
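A hybrid model of the kind described — a deterministic relaxation law plus a stochastic term for the missing physics — can be sketched with an Euler-Maruyama integration of a scalar SDE. The equation and all parameter values are illustrative, not the paper's longshore-current model:

```python
import math
import random

def euler_maruyama(v0, t_end, dt, rng, k=0.5, f=1.0, sigma=0.3):
    """Integrate dv = (f - k*v) dt + sigma dW by Euler-Maruyama: a toy
    deterministic relaxation law plus a stochastic 'missing physics' term.
    All parameter values are illustrative, not from the paper."""
    v, t = v0, 0.0
    while t < t_end:
        v += (f - k * v) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return v

rng = random.Random(3)
ensemble = [euler_maruyama(0.0, 50.0, 0.01, rng) for _ in range(200)]
mean = sum(ensemble) / len(ensemble)
print(round(mean, 2))  # ensemble mean settles near the deterministic f/k = 2.0
```

The ensemble, rather than any single trajectory, is the object compared against field statistics, which is the sense in which such a model "matches observations statistically."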
A Stochastic Employment Problem
ERIC Educational Resources Information Center
Wu, Teng
2013-01-01
The Stochastic Employment Problem (SEP) is a variation of the Stochastic Assignment Problem which analyzes the scenario in which one assigns balls into boxes. Balls arrive sequentially with each one having a binary vector X = (X[subscript 1], X[subscript 2],...,X[subscript n]) attached, with the interpretation being that if X[subscript i] = 1 the ball…
Research in Stochastic Processes
1988-10-10
Fragmentary front matter (a contributor list and page numbers, garbled in extraction). Recoverable entry: Donald Dawson and Luis G. Gorostiza, Generalized solutions of a class of nuclear space valued stochastic evolution equations, University of North Carolina Center for Stochastic Processes, Technical Report No. 219, Dec. 1987; Sequential Anal., 7, 1988, 111-126.
Evolution with Stochastic Fitness and Stochastic Migration
Rice, Sean H.; Papadopoulos, Anthony
2009-01-01
Background Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biased - overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory.
Evolution with stochastic fitness and stochastic migration.
Rice, Sean H; Papadopoulos, Anthony
2009-10-09
Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biased - overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory.
Stochastic volatility of the futures prices of emission allowances: A Bayesian approach
NASA Astrophysics Data System (ADS)
Kim, Jungmu; Park, Yuen Jung; Ryu, Doojin
2017-01-01
Understanding the stochastic nature of the spot volatility of emission allowances is crucial for risk management in emissions markets. In this study, by adopting a stochastic volatility model with or without jumps to represent the dynamics of European Union Allowances (EUA) futures prices, we estimate the daily volatilities and model parameters by using the Markov Chain Monte Carlo method for stochastic volatility (SV), stochastic volatility with return jumps (SVJ) and stochastic volatility with correlated jumps (SVCJ) models. Our empirical results reveal several important features of emissions markets. First, the data presented herein suggest that EUA futures prices exhibit significant stochastic volatility. Second, the leverage effect is noticeable regardless of whether or not jumps are included. Third, the inclusion of jumps has a significant impact on the estimation of the volatility dynamics. Finally, the market becomes very volatile and large jumps occur at the beginning of a new phase. These findings are important for policy makers and regulators.
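The qualitative effect of stochastic volatility — volatility clustering and fat-tailed returns — can be reproduced by forward-simulating a basic log-variance SV model. This is simulation only, not the MCMC estimation, and the parameter values are assumptions, not the EUA estimates:

```python
import math
import random

def simulate_sv(n, rng, mu=0.0, kappa=0.02, theta=math.log(0.04), xi=0.1):
    """Forward-simulate a basic discrete-time SV model: log-variance h follows
    a mean-reverting AR(1) process, and returns are r_t = mu + exp(h_t/2)*z_t.
    Parameter values are assumed for illustration, not the EUA estimates."""
    h = theta
    returns = []
    for _ in range(n):
        h += kappa * (theta - h) + xi * rng.gauss(0.0, 1.0)
        returns.append(mu + math.exp(h / 2.0) * rng.gauss(0.0, 1.0))
    return returns

rng = random.Random(11)
r = simulate_sv(5000, rng)
m2 = sum(x * x for x in r) / len(r)
m4 = sum(x ** 4 for x in r) / len(r)
print(round(m4 / m2 ** 2, 2))  # sample kurtosis; values above 3 signal fat tails
```

The SVJ and SVCJ variants in the paper add a compound-Poisson jump term to the return (and, for SVCJ, to the log-variance) recursion.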
Stochastic hard-sphere dynamics for hydrodynamics of nonideal fluids.
Donev, Aleksandar; Alder, Berni J; Garcia, Alejandro L
2008-08-15
A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.
Solution of the stochastic control problem in unbounded domains.
NASA Technical Reports Server (NTRS)
Robinson, P.; Moore, J.
1973-01-01
Bellman's dynamic programming equation for the optimal index and control law for stochastic control problems is a parabolic or elliptic partial differential equation frequently defined in an unbounded domain. Existing methods of solution require bounded domain approximations, the application of singular perturbation techniques or Monte Carlo simulation procedures. In this paper, using the fact that Poisson impulse noise tends to a Gaussian process under certain limiting conditions, a method which achieves an arbitrarily good approximate solution to the stochastic control problem is given. The method uses the two iterative techniques of successive approximation and quasi-linearization and is inherently more efficient than existing methods of solution.
A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition
Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.
2008-05-01
A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively selects exact discrete stochastic simulation for the appropriate reaction whenever that is necessary. The HMKMC method is seen to be accurate and highly efficient.
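The tau-leaping idea — advance time by a fixed leap and fire a Poisson number of reaction events per leap — can be sketched on a single decay reaction. This is a minimal fixed-step version, not the adaptive hybrid scheme of the paper:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def tau_leap_decay(n0, c, tau, t_end, rng):
    """Fixed-step tau-leaping for the decay reaction A -> 0 with propensity
    a = c*n: each leap fires Poisson(a*tau) events, capped at the population."""
    n, t = n0, 0.0
    while t < t_end and n > 0:
        k = min(poisson(c * n * tau, rng), n)
        n -= k
        t += tau
    return n

rng = random.Random(5)
runs = [tau_leap_decay(1000, 0.1, 0.05, 10.0, rng) for _ in range(500)]
mean = sum(runs) / len(runs)
print(round(mean))  # tracks the deterministic decay 1000*exp(-1), about 368
```

An adaptive scheme of the kind the abstract describes would shrink `tau` (or fall back to exact SSA) whenever a propensity changes appreciably within one leap.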
A Stochastic Cratering Model for Asteroid Surfaces
NASA Technical Reports Server (NTRS)
Richardson, J. E.; Melosh, H. J.; Greenberg, R. J.
2005-01-01
The observed cratering records on asteroid surfaces (four so far: Gaspra, Ida, Mathilde, and Eros [1-4]) provide us with important clues to their past bombardment histories. Previous efforts toward interpreting these records have led to two basic modeling styles for reproducing the statistics of the observed crater populations. The first, and most direct, method is to use Monte Carlo techniques [5] to stochastically populate a matrix-model test surface with craters as a function of time [6,7]. The second method is to use a more general, parameterized approach to duplicate the statistics of the observed crater population [8,9]. In both methods, several factors must be included beyond the simple superposing of circular features: (1) crater erosion by subsequent impacts, (2) infilling of craters by impact ejecta, and (3) crater degradation and erasure due to the seismic effects of subsequent impacts. Here we present an updated Monte Carlo (stochastic) modeling approach, designed specifically with small- to medium-sized asteroids in mind.
Structural Vibration Modeling & Validation: Modeling Uncertainty and Stochastic Control for Structural Control
Babuška, Vít; Carter, Delano; Lane, Steven
2005-12-30
AFRL-VS-PS-TR-2005-1174, Final Report, 30 December 2005. In the past decade or so, there has been increasing interest in probabilistic, or stochastic, robust control theory. Monte Carlo simulation methods have been...
SEU43 fuel bundle shielding analysis during spent fuel transport
Margeanu, C. A.; Ilie, P.; Olteanu, G.
2006-07-01
The basic task accomplished by the shielding calculations in a nuclear safety analysis consists of calculating radiation doses, in order to prevent any risk both for personnel protection and for impact on the environment during spent fuel manipulation, transport and storage. The paper investigates the effects induced by fuel bundle geometry modifications on the CANDU SEU spent fuel shielding analysis during transport. For this study, different CANDU-SEU43 fuel bundle projects, developed in INR Pitesti, have been considered. The spent fuel characteristics are obtained by means of the ORIGEN-S code. In order to estimate the corresponding radiation doses for different measuring points, the Monte Carlo MORSE-SGC code is used. Both codes are included in ORNL's SCALE 5 programs package. A comparison between the considered SEU43 fuel bundle projects is also provided, with the CANDU standard fuel bundle taken as reference. (authors)
Stochastic Pseudo-Boolean Optimization
2011-07-31
Fragmentary table of contents and summary (garbled in extraction). Recoverable topics include: analysis of two-stage stochastic minimum s-t cut problems; an exact solution algorithm for a class of stochastic bilevel knapsack problems (Section 5, Bilevel Knapsack Problems with Stochastic Right-Hand Sides); two-stage stochastic assignment problems (Section 6); and mathematical programming formulations with related computational complexity issues.
Spring, William Joseph
2009-04-13
We consider quantum analogues of n-parameter stochastic processes, associated integrals and martingale properties extending classical results obtained in [1, 2, 3], and quantum results in [4, 5, 6, 7, 8, 9, 10].
Stochastic self-assembly of incommensurate clusters.
D'Orsogna, M R; Lakatos, G; Chou, T
2012-02-28
Nucleation and molecular aggregation are important processes in numerous physical and biological systems. In many applications, these processes often take place in confined spaces, involving a finite number of particles. Analogous to treatments of stochastic chemical reactions, we examine the classic problem of homogeneous nucleation and self-assembly by deriving and analyzing a fully discrete stochastic master equation. We enumerate the highest probability steady states, and derive exact analytical formulae for quenched and equilibrium mean cluster size distributions. Upon comparison with results obtained from the associated mass-action Becker-Döring equations, we find striking differences between the two corresponding equilibrium mean cluster concentrations. These differences depend primarily on the divisibility of the total available mass by the maximum allowed cluster size, and the remainder. When such mass "incommensurability" arises, a single remainder particle can "emulsify" the system by significantly broadening the equilibrium mean cluster size distribution. This discreteness-induced broadening effect is periodic in the total mass of the system but arises even when the system size is asymptotically large, provided the ratio of the total mass to the maximum cluster size is finite. Ironically, classic mass-action equations are fairly accurate in the coarsening regime, before equilibrium is reached, despite the presence of large stochastic fluctuations found via kinetic Monte-Carlo simulations. Our findings define a new scaling regime in which results from classic mass-action theories are qualitatively inaccurate, even in the limit of large total system size.
Stochastic development in biologically structured population models.
De Valpine, Perry
2009-10-01
Variation in organismal development is ubiquitous in nature but omitted from most age- and stage-structured population models. I give a general approach for formulating and analyzing its role in density-independent population models using the framework of integral projection models. The approach allows flexible assumptions, including correlated development times among multiple life stages. I give a new Monte Carlo numerical integration approach to calculate long-term growth rate, its sensitivities, stable age-stage distributions and reproductive value. This method requires only simulations of individual life schedules, rather than iteration of full population dynamics, and has practical and theoretical appeal because it ties easily implemented simulations to numerical solution of demographic equations. I show that stochastic development is demographically important using two examples. For a desert cactus, many stochastic development models, with independent or correlated stage durations, can generate the same stable stage distribution (SSD) as the real data, but stable age-within-stage distributions and sensitivities of growth rate to demographic rates differ greatly among stochastic development scenarios. For Mediterranean fruit flies, empirical variation in maturation time has a large impact on population growth. The systematic model formulation and analysis approach given here should make consideration of variable development models widely accessible and readily extendible.
Stochastic dynamics and denaturation of thermalized DNA.
Deng, Mao Lin; Zhu, Wei Qiu
2008-02-01
In the first part of the paper, the stochastic dynamics of the Peyrard-Bishop-Dauxois (PBD) DNA model is studied. A one-dimensional averaged Itô stochastic differential equation governing the total energy of the system and the associated Fokker-Planck equation governing the transition probability density function of the total energy are derived from the Langevin equations for the base-pair (bp) separation of the PBD DNA model by using the stochastic averaging method for quasinonintegrable Hamiltonian systems. The stationary probability density function of the average energy and the mean square of the bp separation are obtained by solving the reduced Fokker-Planck equation. In the second part of the paper, the local denaturation of the thermalized PBD DNA model is studied as a first-passage-time problem in the energy. A backward Kolmogorov equation and a Pontryagin equation are derived from the averaged Itô equation and solved to yield the waiting-time distribution and the mean bp opening time. All the analytical results are confirmed with those from Monte Carlo simulation. It is pointed out that the proposed method may yield a reasonable mean bp opening time if the friction coefficient is fixed using experimental results.
Stochastic self-assembly of incommensurate clusters
NASA Astrophysics Data System (ADS)
D'Orsogna, M. R.; Lakatos, G.; Chou, T.
2012-02-01
Nucleation and molecular aggregation are important processes in numerous physical and biological systems. In many applications, these processes often take place in confined spaces, involving a finite number of particles. Analogous to treatments of stochastic chemical reactions, we examine the classic problem of homogeneous nucleation and self-assembly by deriving and analyzing a fully discrete stochastic master equation. We enumerate the highest probability steady states, and derive exact analytical formulae for quenched and equilibrium mean cluster size distributions. Upon comparison with results obtained from the associated mass-action Becker-Döring equations, we find striking differences between the two corresponding equilibrium mean cluster concentrations. These differences depend primarily on the divisibility of the total available mass by the maximum allowed cluster size, and the remainder. When such mass "incommensurability" arises, a single remainder particle can "emulsify" the system by significantly broadening the equilibrium mean cluster size distribution. This discreteness-induced broadening effect is periodic in the total mass of the system but arises even when the system size is asymptotically large, provided the ratio of the total mass to the maximum cluster size is finite. Ironically, classic mass-action equations are fairly accurate in the coarsening regime, before equilibrium is reached, despite the presence of large stochastic fluctuations found via kinetic Monte-Carlo simulations. Our findings define a new scaling regime in which results from classic mass-action theories are qualitatively inaccurate, even in the limit of large total system size.
Gorker, G.E.
1987-01-01
This report deals with concepts of the Tiber II tokamak reactor fueling systems. Contained in this report are the fuel injection requirement data, startup fueling requirements, intermediate range fueling requirements, power range fueling requirements and research and development considerations. (LSR)
Research in Stochastic Processes.
1985-09-01
Fragmentary report list (garbled in extraction). Recoverable entries include: G. Kallianpur, Finitely additive approach to nonlinear filtering, Proc. Bernoulli Soc. Conf. on Stochastic Processes, T. Hida, ed., Springer, to appear; T. Hsing, extreme value theory (in preparation); R.J. Carroll, C.H. Spiegelman, K.K.G. Lan, K.T. Bailey and R.D. Abbott, Errors-in-variables for binary regression models, Aug. 1982.
Application of tabu search to deterministic and stochastic optimization problems
NASA Astrophysics Data System (ADS)
Gurtuna, Ozgur
During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used to solve many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well established and significant progress has been made on the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is deterministic: finding the optimal servicing tours that minimize the energy and/or duration of missions to service satellites in Earth orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems concern decision-making under uncertainty. In the second problem, tabu search and real options are investigated together in the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is developed.
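The core tabu-search loop the thesis builds on can be sketched in a few lines. This toy version minimizes a bumpy one-dimensional integer function with a recency-based tabu list; it is not the thesis's satellite-servicing formulation:

```python
import random

def tabu_search(f, x0, neighbors, n_iter=200, tenure=5, seed=0):
    """Minimal tabu search: move to the best non-tabu neighbor each step,
    keeping a short-term memory of recent solutions to escape local minima."""
    rng = random.Random(seed)
    x, best = x0, x0
    tabu = [x0]
    for _ in range(n_iter):
        cands = [n for n in neighbors(x, rng) if n not in tabu]
        if not cands:
            break
        x = min(cands, key=f)        # best admissible move, even if it worsens f
        tabu.append(x)
        if len(tabu) > tenure:       # forget the oldest tabu entry
            tabu.pop(0)
        if f(x) < f(best):
            best = x
    return best

# Toy objective with local minima; the rng argument allows stochastic neighborhoods.
f = lambda x: (x - 7) ** 2 + 5 * (x % 3)
neighbors = lambda x, rng: [x - 1, x + 1]
best = tabu_search(f, x0=0, neighbors=neighbors)
```

The tabu list is what lets the search march past the local minimum at x = 4/5 and reach the global minimum at x = 6.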
On a full Monte Carlo approach to quantum mechanics
NASA Astrophysics Data System (ADS)
Sellier, J. M.; Dimov, I.
2016-12-01
The Monte Carlo approach to numerical problems has proven remarkably efficient at performing very large computational tasks, since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods are well known to maintain performance and accuracy as the dimensionality of a problem increases, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these peculiar and desirable computational features, in this work we present a full Monte Carlo approach to the problem of simulating single- and many-body quantum systems by means of signed particles. In particular, we introduce a stochastic technique, based on the strategy known as importance sampling, for the computation of the Wigner kernel, which has so far represented the main bottleneck of this method (it is equivalent to the calculation of a multi-dimensional integral, a problem whose complexity is known to grow exponentially with the dimensions of the problem). The benefit of this stochastic technique for the kernel is twofold: first, it reduces the complexity of a quantum many-body simulation from non-linear to linear; second, it makes this very demanding problem embarrassingly parallel. To conclude, we perform concise but indicative numerical experiments which clearly illustrate how a full Monte Carlo approach to many-body quantum systems is not only possible but also advantageous. This paves the way towards practical time-dependent, first-principles simulations of relatively large quantum systems by means of affordable computational resources.
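Importance sampling, the strategy used above for the Wigner kernel, replaces a plain average by a weighted average over samples drawn from a chosen proposal density. A generic sketch on a 6-dimensional Gaussian-type integral; the integrand is illustrative, not the Wigner kernel itself:

```python
import math, random

def importance_sampling(f, sampler, density, n=100_000, seed=0):
    """Estimate the integral of f over R^d as E_p[f(X)/p(X)] with X ~ p."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sampler(rng)
        total += f(x) / density(x)   # importance weight f/p
    return total / n

# Example: integral of exp(-|x|^2) over R^6; the exact value is pi**3 ~ 31.006.
d, s = 6, 1.0
f = lambda x: math.exp(-sum(xi * xi for xi in x))
sampler = lambda rng: [rng.gauss(0.0, s) for _ in range(d)]
density = lambda x: math.prod(
    math.exp(-xi * xi / (2 * s * s)) / (s * math.sqrt(2 * math.pi)) for xi in x
)
est = importance_sampling(f, sampler, density)
```

Because the Gaussian proposal roughly matches the integrand, the weights are bounded and the estimator's variance stays controlled even in 6 dimensions, which is the dimensional robustness the abstract appeals to.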
Learning Weight Uncertainty with Stochastic Gradient MCMC for Shape Classification
Li, Chunyuan; Stevens, Andrew J.; Chen, Changyou; Pu, Yunchen; Gan, Zhe; Carin, Lawrence
2016-08-10
Learning the representation of shape cues in 2D & 3D objects for recognition is a fundamental task in computer vision. Deep neural networks (DNNs) have shown promising performance on this task. Due to the large variability of shapes, accurate recognition relies on good estimates of model uncertainty, which are ignored in the traditional training of DNNs via stochastic optimization. This paper leverages recent advances in stochastic gradient Markov chain Monte Carlo (SG-MCMC) to learn weight uncertainty in DNNs. It yields principled Bayesian interpretations for the commonly used Dropout/DropConnect techniques and incorporates them into the SG-MCMC framework. Extensive experiments on 2D & 3D shape datasets and various DNN models demonstrate the superiority of the proposed approach over stochastic optimization. Our approach yields higher recognition accuracy when used in conjunction with Dropout and Batch Normalization.
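The simplest SG-MCMC sampler is stochastic gradient Langevin dynamics: a gradient step on the log posterior plus injected Gaussian noise. A one-parameter sketch, with a toy Gaussian posterior and synthetic gradient noise standing in for a DNN weight posterior and minibatch gradients:

```python
import math, random

def sgld(grad_logpost, theta0, eps=0.05, n_steps=20_000, seed=0):
    """Stochastic gradient Langevin dynamics (a basic SG-MCMC sampler):
    theta <- theta + (eps/2) * (noisy grad log posterior) + N(0, eps) noise."""
    rng = random.Random(seed)
    theta, samples = theta0, []
    for _ in range(n_steps):
        g = grad_logpost(theta, rng)                       # minibatch-style noisy gradient
        theta += 0.5 * eps * g + rng.gauss(0.0, math.sqrt(eps))
        samples.append(theta)
    return samples

# Toy weight posterior: N(2, 1), so grad log p(theta) = -(theta - 2);
# the added Gaussian term mimics minibatch gradient noise.
grad = lambda th, rng: -(th - 2.0) + rng.gauss(0.0, 0.1)
samples = sgld(grad, theta0=0.0)
mean = sum(samples[1000:]) / len(samples[1000:])          # discard burn-in
```

The retained samples approximate draws from the posterior, so their spread, not just their mean, is available; that spread is the weight uncertainty the abstract refers to.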
Classical Perturbation Theory for Monte Carlo Studies of System Reliability
Lewins, Jeffrey D.
2001-03-15
A variational principle for a Markov system allows the derivation of perturbation theory for models of system reliability, with prospects of extension to generalized Markov processes of a wide nature. It is envisaged that Monte Carlo or stochastic simulation will supply the trial functions for such a treatment, which obviates the standard difficulties of direct analog Monte Carlo perturbation studies. The development is given in the specific mode for first- and second-order theory, using an example with known analytical solutions. The adjoint equation is identified with the importance function and a discussion given as to how both the forward and backward (adjoint) fields can be obtained from a single Monte Carlo study, with similar interpretations for the additional functions required by second-order theory. Generalized Markov models with age-dependence are identified as coming into the scope of this perturbation theory.
Optimization of the Fixed Node Energy in Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Lin, Chang; Ceperley, David
1998-03-01
Good wave functions play an important role in Fixed-Node Quantum Monte Carlo simulations. Typical wave function optimization methods minimize the energy or variance within Variational Monte Carlo. We present a method to minimize the fixed-node energy directly in Diffusion Monte Carlo (DMC). The fixed-node energy, together with its derivatives with respect to the variational parameters in the wave function, is calculated. The derivative information is used to dynamically optimize variational parameters during a single DMC run using the Stochastic Gradient Approximation (SGA) method. We give results for the Be atom with a single variational parameter, and for the Li2 molecule with multiple parameters. (One of the authors, C.L., would like to thank Claudia Filippi for providing a good Li2 wave function and for many valuable discussions.)
An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis
William R. Martin; John C. Lee
2009-12-30
Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.
Stochastic calculus for uncoupled continuous-time random walks
NASA Astrophysics Data System (ADS)
Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.
2009-06-01
The continuous-time random walk (CTRW) is a pure-jump stochastic process with several applications not only in physics but also in insurance, finance, and economics. A definition is given for a class of stochastic integrals driven by a CTRW, which includes the Itō and Stratonovich cases. An uncoupled CTRW with zero-mean jumps is a martingale. It is proved that, as a consequence of the martingale transform theorem, if the CTRW is a martingale, the Itō integral is a martingale too. It is shown how the definition of the stochastic integrals can be used to easily compute them by Monte Carlo simulation. The relations between a CTRW, its quadratic variation, its Stratonovich integral, and its Itō integral are highlighted by numerical calculations when the jumps in space of the CTRW have a symmetric Lévy α-stable distribution and its waiting times have a one-parameter Mittag-Leffler distribution. Remarkably, these distributions have fat tails and an unbounded quadratic variation. In the diffusive limit of vanishing scale parameters, the probability density of this kind of CTRW satisfies the space-time fractional diffusion equation (FDE) or, more generally, the fractional Fokker-Planck equation, which generalizes the standard diffusion equation, solved by the probability density of the Wiener process, and thus provides a phenomenological model of anomalous diffusion. We also provide an analytic expression for the quadratic variation of the stochastic process described by the FDE and check it by Monte Carlo.
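A sketch of the Monte Carlo computation of such stochastic integrals, using exponential waiting times and Gaussian jumps (the simplest special cases of the Mittag-Leffler and Lévy α-stable choices above) and the Itō convention of evaluating the integrand before each jump:

```python
import random

def ctrw_ito(n_jumps=500, seed=0):
    """Simulate an uncoupled CTRW and the Ito integral of X dX driven by it.
    Waiting times are exponential and jumps Gaussian; these stand in for the
    Mittag-Leffler / alpha-stable distributions of the full model."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    ito, qvar = 0.0, 0.0
    for _ in range(n_jumps):
        t += rng.expovariate(1.0)   # waiting time between jumps
        dx = rng.gauss(0.0, 1.0)    # zero-mean jump: the walk is a martingale
        ito += x * dx               # Ito convention: integrand fixed before the jump
        qvar += dx * dx             # quadratic variation of the pure-jump path
        x += dx
    return ito, qvar, x

ito, qvar, x_final = ctrw_ito()
# Pathwise check: Ito integral of X dX plus half the quadratic variation
# equals X_T^2 / 2 exactly, the pure-jump analogue of the Ito-Stratonovich
# relation highlighted in the abstract.
```

Averaging the Itō integral over many independent paths would also exhibit its martingale (mean-zero) property.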
Criticality of spent reactor fuel
Harris, D.R.
1987-01-01
The storage capacity of spent reactor fuel pools can be greatly increased by consolidation. In this process, the fuel rods are removed from reactor fuel assemblies and stored in close-packed arrays in a canister or skeleton. An earlier study examined criticality considerations for the consolidation of Westinghouse fuel, assumed to be fresh, in canisters at the Millstone-2 spent-fuel pool and in the General Electric IF-300 shipping cask. The conclusions were that the fuel rods in the canister are so deficient in water that they are adequately subcritical, both in normal and in off-normal conditions. One potential accident, the water spill event, remained unresolved in the earlier study. A methodology is developed here for spent-fuel criticality and applied to the water spill event. The methodology uses LEOPARD to compute few-group cross sections for the diffusion code PDQ7, which is then used to compute reactivity. These codes give results for fresh fuel that are in good agreement with KENO IV-NITAWL Monte Carlo results, which are themselves in good agreement with continuous-energy Monte Carlo calculations. These methodologies are in reasonable agreement with critical measurements for undepleted fuel.
NASA Astrophysics Data System (ADS)
Dai, Liyi
2016-05-01
Stochastic optimization is a fundamental problem that finds applications in many areas, including the biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information about the sample objective function, which is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm, approximating gradients by finite differences generated through common random numbers. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, the rate can be increased to n^(-2/5) in general, and to n^(-1/2), the best possible rate of stochastic approximation, in Monte Carlo optimization for a broad class of problems, where n is the iteration number.
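A sketch of the Kiefer-Wolfowitz iteration with common random numbers: both finite-difference evaluations share one noise draw. For the quadratic toy objective below the common draw cancels from the difference entirely, which is the variance-reduction mechanism in its starkest form. The step-size schedules are illustrative, not the paper's:

```python
import random

def kiefer_wolfowitz(F, theta0=0.0, n_iter=2000, a0=1.0, c0=1.0, seed=0):
    """Kiefer-Wolfowitz stochastic approximation with common random numbers:
    both perturbed evaluations F(theta +/- c_n, xi) share the same noise
    draw xi, which cuts the variance of the finite-difference gradient."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, n_iter + 1):
        a_n = a0 / n             # step sizes: a_n -> 0, sum a_n diverges
        c_n = c0 / n ** 0.25     # difference widths shrink more slowly
        xi = rng.gauss(1.0, 1.0)                          # common random number
        g = (F(theta + c_n, xi) - F(theta - c_n, xi)) / (2.0 * c_n)
        theta -= a_n * g
    return theta

# Toy problem: minimize E[(theta - Z)^2] with Z ~ N(1, 1); the minimizer is 1.
# With common random numbers the draw cancels exactly: g = 2 * (theta - xi).
F = lambda th, z: (th - z) ** 2
theta = kiefer_wolfowitz(F)
```

With independent draws on the two sides, the same estimator would carry an extra O(1/c_n) noise term; sharing the draw is what allows the faster rates quoted above.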
MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA
Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D
2013-01-01
Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but they may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents a general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages over standard, single-level approximation. The numerical results highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.
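The multilevel idea in its Monte Carlo form rests on the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with coupled coarse/fine discretizations sharing the same randomness so the correction terms have small variance. A sketch on Euler paths of geometric Brownian motion (an illustrative SDE, not an SPDE):

```python
import math, random

def mlmc_gbm(L=4, n0=4000, s0=1.0, r=0.05, sig=0.2, T=1.0, seed=0):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion, with the coarse path on each level driven by the summed
    Brownian increments of the fine path (the coupling)."""
    rng = random.Random(seed)

    def level_estimate(l, n):
        total = 0.0
        mf = 2 ** l                # fine time steps on level l
        hf = T / mf
        for _ in range(n):
            sf = sc = s0
            inc_c = 0.0
            for step in range(mf):
                dw = rng.gauss(0.0, math.sqrt(hf))
                sf += r * sf * hf + sig * sf * dw    # fine Euler step
                inc_c += dw
                if step % 2 == 1:                    # coarse step uses 2*hf and summed dw
                    sc += r * sc * (2 * hf) + sig * sc * inc_c
                    inc_c = 0.0
            pc = sc if l > 0 else 0.0                # level 0 has no coarser partner
            total += sf - pc
        return total / n

    # Fewer samples are needed on finer (more expensive) levels.
    return sum(level_estimate(l, max(n0 // 2 ** l, 200)) for l in range(L + 1))

est = mlmc_gbm()   # exact answer: s0 * exp(r*T) = e^0.05 ~ 1.0513
```

Because Var(P_l - P_{l-1}) decays with level, most samples can be taken on the cheap coarse level, which is exactly the cost balancing the abstract describes for SC.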
Automated variance reduction for Monte Carlo shielding analyses with MCNP
NASA Astrophysics Data System (ADS)
Radulescu, Georgeta
Variance reduction techniques are employed in Monte Carlo analyses to increase the number of particles in the phase space of interest and thereby lower the variance of the statistical estimation. Variance reduction parameters are required to perform Monte Carlo calculations. It is well known that adjoint solutions, even approximate ones, are excellent biasing functions that can significantly increase the efficiency of a Monte Carlo calculation. In this study, an automated method of generating Monte Carlo variance reduction parameters, and of implementing source energy biasing and the weight window technique in MCNP shielding calculations, has been developed. The method is based on the approach used in the SAS4 module of the SCALE code system, which derives the biasing parameters from an adjoint one-dimensional Discrete Ordinates calculation. Unlike SAS4, which determines the radial and axial dose rates of a spent fuel cask in separate calculations, the present method provides energy and spatial biasing parameters for the entire system that optimize the simulation of particle transport towards all external surfaces of a spent fuel cask. The energy and spatial biasing parameters are synthesized from the adjoint fluxes of three one-dimensional Discrete Ordinates adjoint calculations. Additionally, the present method accommodates multiple source regions, such as the photon sources in light-water reactor spent nuclear fuel assemblies, in one calculation. With this automated method, detailed and accurate dose rate maps for photons, neutrons, and secondary photons outside spent fuel casks or other containers can be efficiently determined with minimal effort.
Losick, Richard; Desplan, Claude
2008-01-01
Fundamental to living cells is the capacity to differentiate into subtypes with specialized attributes. Understanding the way cells acquire their fates is a major challenge in developmental biology. How cells adopt a particular fate is usually thought of as being deterministic, and in the large majority of cases it is. That is, cells acquire their fate by virtue of their lineage or their proximity to an inductive signal from another cell. In some cases, however, and in organisms ranging from bacteria to humans, cells choose one or another pathway of differentiation stochastically, without apparent regard to environment or history. Stochasticity has important mechanistic requirements, as we discuss. We will also speculate on why stochasticity is advantageous, and even critical in some circumstances, to the individual, the colony, or the species. PMID:18388284
Stochastic cooling at Fermilab
Marriner, J.
1986-08-01
The topics discussed are the stochastic cooling systems in use at Fermilab and some of the techniques that have been employed to meet the particular requirements of the anti-proton source. Stochastic cooling at Fermilab became of paramount importance about 5 years ago when the anti-proton source group at Fermilab abandoned the electron cooling ring in favor of a high flux anti-proton source which relied solely on stochastic cooling to achieve the phase space densities necessary for colliding proton and anti-proton beams. The Fermilab systems have constituted a substantial advance in the techniques of cooling including: large pickup arrays operating at microwave frequencies, extensive use of cryogenic techniques to reduce thermal noise, superconducting notch filters, and the development of tools for controlling and for accurately phasing the system.
Monte Carlo simulation of air sampling methods for the measurement of radon decay products.
Sima, Octavian; Luca, Aurelian; Sahagia, Maria
2017-02-21
A stochastic model of the processes involved in the measurement of the activity of the ²²²Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the ²²²Rn decay-product concentrations in the air are realistically evaluated.
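The propagation step can be sketched generically: sample each input factor from its distribution, push the samples through the measurement model, and read off the output distribution. All distributions and numbers below are illustrative placeholders, not the paper's values:

```python
import random

def propagate(n=50_000, seed=0):
    """Monte Carlo propagation of measurement uncertainties through a
    simple activity-concentration model: A = counts / (eff * flow * t)."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        counts = rng.gauss(1000.0, 1000.0 ** 0.5)   # counting statistics (approx. Poisson)
        eff = rng.gauss(0.90, 0.03)                 # detection * collection efficiency
        flow = rng.gauss(1.00, 0.05)                # air sampling flow rate (L/s)
        t = 600.0                                   # sampling time (s), assumed exact
        results.append(counts / (eff * flow * t))   # concentration (arbitrary units)
    m = sum(results) / n
    var = sum((x - m) ** 2 for x in results) / (n - 1)
    return m, var ** 0.5

mean, std = propagate()
```

Unlike first-order (GUM-style) propagation, the Monte Carlo result captures the full output distribution, including any skew from the division by uncertain factors.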
Noncovalent Interactions by Quantum Monte Carlo.
Dubecký, Matúš; Mitas, Lubos; Jurečka, Petr
2016-05-11
Quantum Monte Carlo (QMC) is a family of stochastic methods for solving quantum many-body problems such as the stationary Schrödinger equation. The review introduces basic notions of electronic structure QMC based on random walks in real space as well as its advances and adaptations to systems with noncovalent interactions. Specific issues such as fixed-node error cancellation, construction of trial wave functions, and efficiency considerations that allow for benchmark quality QMC energy differences are described in detail. Comprehensive overview of articles covers QMC applications to systems with noncovalent interactions over the last three decades. The current status of QMC with regard to efficiency, applicability, and usability by nonexperts together with further considerations about QMC developments, limitations, and unsolved challenges are discussed as well.
Chemical application of diffusion quantum Monte Carlo
NASA Technical Reports Server (NTRS)
Reynolds, P. J.; Lester, W. A., Jr.
1984-01-01
The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speeds of the codes relative to one another as a function of C, and relative to the VAX, are discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures.
STOCHASTIC COOLING FOR BUNCHED BEAMS.
BLASKIEWICZ, M.
2005-05-16
Problems associated with bunched beam stochastic cooling are reviewed. A longitudinal stochastic cooling system for RHIC is under construction and has been partially commissioned. The state of the system and future plans are discussed.
A Stochastic Multi-Attribute Assessment of Energy Options for Fairbanks, Alaska
NASA Astrophysics Data System (ADS)
Read, L.; Madani, K.; Mokhtari, S.; Hanks, C. L.; Sheets, B.
2012-12-01
Many competing projects have been proposed to address Interior Alaska's high cost of energy—both for electricity production and for heating. Public and private stakeholders are considering the costs associated with these competing projects which vary in fuel source, subsidy requirements, proximity, and other factors. As a result, the current projects under consideration involve a complex cost structure of potential subsidies and reliance on present and future market prices, introducing a significant amount of uncertainty associated with each selection. Multi-criteria multi-decision making (MCMDM) problems of this nature can benefit from game theory and systems engineering methods, which account for behavior and preferences of stakeholders in the analysis to produce feasible and relevant solutions. This work uses a stochastic MCMDM framework to evaluate the trade-offs of each proposed project based on a complete cost analysis, environmental impact, and long-term sustainability. Uncertainty in the model is quantified via a Monte Carlo analysis, which helps characterize the sensitivity and risk associated with each project. Based on performance measures and criteria outlined by the stakeholders, a decision matrix will inform policy on selecting a project that is both efficient and preferred by the constituents.
Stochastic demographic forecasting.
Lee, R D
1992-11-01
"This paper describes a particular approach to stochastic population forecasting, which is implemented for the U.S.A. through 2065. Statistical time series methods are combined with demographic models to produce plausible long run forecasts of vital rates, with probability distributions. The resulting mortality forecasts imply gains in future life expectancy that are roughly twice as large as those forecast by the Office of the Social Security Actuary.... Resulting stochastic forecasts of the elderly population, elderly dependency ratios, and payroll tax rates for health, education and pensions are presented."
Stochastic modeling of rainfall
Guttorp, P.
1996-12-31
We review several approaches in the literature for stochastic modeling of rainfall, and discuss some of their advantages and disadvantages. While stochastic precipitation models have been around at least since the 1850s, the last two decades have seen an increased development of models based (more or less) on the physical processes involved in precipitation. There are interesting questions of scale and measurement that pertain to these modeling efforts. Recent modeling efforts aim at including meteorological variables, and may be useful for regional down-scaling of general circulation models.
Markov stochasticity coordinates
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-01-01
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method for the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
An Overview of the Monte Carlo Application ToolKit (MCATK)
Trahan, Travis John
2016-01-07
MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed both to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies, motivated by a desire to reduce costs. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous-energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, a time-dependent algorithm, and fission chain algorithms; MCATK geometry – mesh geometries and solid-body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross-section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analysis is discussed, and the results of a dynamic test problem are given.
Stochastic analysis of transport in tubes with rough walls
Tartakovsky, Daniel M. (E-mail: dmt@lanl.gov); Xiu, Dongbin (E-mail: dxiu@math.purdue.edu)
2006-09-01
Flow and transport in tubes with rough surfaces play an important role in a variety of applications. Often the topology of such surfaces cannot be accurately described in all of its relevant details due to either insufficient data or measurement errors or both. In such cases, this topological uncertainty can be efficiently handled by treating rough boundaries as random fields, so that an underlying physical phenomenon is described by deterministic or stochastic differential equations in random domains. To deal with this class of problems, we use a computational framework, which is based on stochastic mappings to transform the original deterministic/stochastic problem in a random domain into a stochastic problem in a deterministic domain. The latter problem has been studied more extensively and existing analytical/numerical techniques can be readily applied. In this paper, we employ both a generalized polynomial chaos and Monte Carlo simulations to solve the transformed stochastic problem. We use our approach to describe transport of a passive scalar in Stokes' flow and to quantify the corresponding predictive uncertainty.
Synchronizing stochastic circadian oscillators in single cells of Neurospora crassa
Deng, Zhaojie; Arsenault, Sam; Caranica, Cristian; Griffith, James; Zhu, Taotao; Al-Omari, Ahmad; Schüttler, Heinz-Bernd; Arnold, Jonathan; Mao, Leidong
2016-01-01
The synchronization of stochastic coupled oscillators is a central problem in physics and an emerging problem in biology, particularly in the context of circadian rhythms. Most measurements on the biological clock are made at the macroscopic level of millions of cells. Here measurements are made on the oscillators in single cells of the model fungal system, Neurospora crassa, with droplet microfluidics and the use of a fluorescent recorder hooked up to a promoter on a clock controlled gene-2 (ccg-2). The oscillators of individual cells are stochastic with a period near 21 hours (h), and using a stochastic clock network ensemble fitted by Markov Chain Monte Carlo implemented on general-purpose graphical processing units (or GPGPUs) we estimated that >94% of the variation in ccg-2 expression was stochastic (as opposed to experimental error). To overcome this stochasticity at the macroscopic level, cells must synchronize their oscillators. Using a classic measure of similarity in cell trajectories within droplets, the intraclass correlation (ICC), the synchronization surface ICC is measured on >25,000 cells as a function of the number of neighboring cells within a droplet and of time. The synchronization surface provides evidence that cells communicate, and synchronization varies with genotype. PMID:27786253
Stochastic simulation and analysis of biomolecular reaction networks
Frazier, John M; Chushak, Yaroslav; Foy, Brent
2009-01-01
In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. In order to investigate these issues, we employed a software package developed in our group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) the effect of the time-averaging interval on reaction rate analysis, (3) the effect of the number of simulations on the precision of model predictions, and (4) the implications of stochastic simulations for optimization procedures. The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
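The time-weighted averaging issue above is worth making concrete: between reaction events the state is constant, so each state must be weighted by how long the system stays in it. A minimal Gillespie (exact stochastic simulation) run for a birth-death gene expression toy model, with illustrative rates rather than the BNS network:

```python
import random

def gillespie(k=10.0, g=1.0, t_max=500.0, seed=0):
    """Gillespie's exact SSA for a birth-death toy model:
    0 -> mRNA at rate k, mRNA -> 0 at rate g*m.
    Returns the time-weighted average copy number over [0, t_max]."""
    rng = random.Random(seed)
    t, m = 0.0, 0
    t_weighted = 0.0
    while t < t_max:
        a1, a2 = k, g * m               # reaction propensities
        a0 = a1 + a2
        dt = min(rng.expovariate(a0), t_max - t)
        t_weighted += m * dt            # weight each state by its holding time
        t += dt
        if t >= t_max:
            break
        m += 1 if rng.random() * a0 < a1 else -1   # birth or death
    return t_weighted / t_max

avg = gillespie()   # stationary mean is k/g = 10
```

Averaging unweighted event-by-event snapshots instead would bias the result toward short-lived states, which is exactly the pitfall item (1) of the abstract examines.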
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Neftci, Emre O.; Pedroni, Bruno U.; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert
2016-01-01
Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware. PMID:27445650
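The random-mask mechanism can be sketched directly: each synapse is independently blanked per forward pass, and averaging many stochastic passes recovers the deterministic response scaled by the keep probability. This is a toy linear unit, not the S2M spiking model:

```python
import random

def stochastic_synapse_forward(w, x, p_keep=0.5, rng=None):
    """Forward pass with stochastic synapses: each connection is kept
    independently with probability p_keep (a DropConnect-style random mask),
    so repeated passes are Monte Carlo samples over network configurations."""
    rng = rng or random.Random(0)
    return sum(w_i * x_i for w_i, x_i in zip(w, x) if rng.random() < p_keep)

# Averaging many stochastic passes approximates the deterministic response
# scaled by the keep probability: E[output] = p_keep * (w . x).
w = [0.5, -1.0, 2.0, 0.25]
x = [1.0, 2.0, 0.5, 4.0]
rng = random.Random(42)
n = 20_000
mc = sum(stochastic_synapse_forward(w, x, 0.5, rng) for _ in range(n)) / n
expected = 0.5 * sum(wi * xi for wi, xi in zip(w, x))
```

The same mask randomness that drives the sampling also acts as the regularizer during learning, which is the dual role the abstract emphasizes.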
Synchronizing stochastic circadian oscillators in single cells of Neurospora crassa
NASA Astrophysics Data System (ADS)
Deng, Zhaojie; Arsenault, Sam; Caranica, Cristian; Griffith, James; Zhu, Taotao; Al-Omari, Ahmad; Schüttler, Heinz-Bernd; Arnold, Jonathan; Mao, Leidong
2016-10-01
The synchronization of stochastic coupled oscillators is a central problem in physics and an emerging problem in biology, particularly in the context of circadian rhythms. Most measurements on the biological clock are made at the macroscopic level of millions of cells. Here measurements are made on the oscillators in single cells of the model fungal system, Neurospora crassa, with droplet microfluidics and the use of a fluorescent recorder attached to the promoter of a clock-controlled gene-2 (ccg-2). The oscillators of individual cells are stochastic with a period near 21 hours (h), and using a stochastic clock network ensemble fitted by Markov Chain Monte Carlo implemented on general-purpose graphical processing units (or GPGPUs) we estimated that >94% of the variation in ccg-2 expression was stochastic (as opposed to experimental error). To overcome this stochasticity at the macroscopic level, cells must synchronize their oscillators. Using a classic measure of similarity in cell trajectories within droplets, the intraclass correlation (ICC), the synchronization surface ICC is measured on >25,000 cells as a function of the number of neighboring cells within a droplet and of time. The synchronization surface provides evidence that cells communicate, and synchronization varies with genotype.
Stochastic simulation and analysis of biomolecular reaction networks.
Frazier, John M; Chushak, Yaroslav; Foy, Brent
2009-06-17
In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. In order to investigate these issues, we employed a software package developed in our group, called the Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of the time interval on data presentation and time-weighted averaging of molecule numbers, (2) the effect of the time-averaging interval on reaction rate analysis, (3) the effect of the number of simulations on the precision of model predictions, and (4) the implications of stochastic simulations for optimization procedures. The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior.
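The Gillespie exact stochastic algorithm used in this study can be sketched for a minimal birth-death network (a hypothetical illustration, not the BNS code; the rate constants below are arbitrary):

```python
import math
import random

def gillespie_birth_death(k_birth, k_death, x0, t_max, rng):
    """Exact SSA trajectory for a birth-death process:
    0 -> X at rate k_birth, X -> 0 at rate k_death * x."""
    t, x = 0.0, x0
    times, counts = [0.0], [x0]
    while t < t_max:
        a1 = k_birth           # propensity of the birth channel
        a2 = k_death * x       # propensity of the death channel
        a0 = a1 + a2
        if a0 == 0.0:
            break
        # exponentially distributed waiting time to the next reaction
        t += -math.log(rng.random()) / a0
        if t > t_max:
            break
        # pick the firing channel with probability proportional to its propensity
        x += 1 if rng.random() * a0 < a1 else -1
        times.append(t)
        counts.append(x)
    return times, counts
```

At stationarity the copy number is Poisson-distributed with mean k_birth/k_death, which makes the sketch easy to sanity-check against the analytic result.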
Analysis of bilinear stochastic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Martin, D. N.; Marcus, S. I.
1975-01-01
This paper analyzes stochastic dynamical systems that involve multiplicative (bilinear) noise processes. After defining the systems of interest, consideration is given to the evolution of the moments of such systems, the question of stochastic stability, and estimation for bilinear stochastic systems. Both exact and approximate methods of analysis are introduced; in particular, the uses of Lie-theoretic concepts and harmonic analysis are discussed.
1998-03-01
Fossil fuels -- coal, oil, and natural gas -- built America's historic economic strength. Today, coal supplies more than 55% of the electricity, oil more than 97% of the transportation needs, and natural gas 24% of the primary energy used in the US. Even taking into account increased use of renewable fuels and vastly improved powerplant efficiencies, 90% of national energy needs will still be met by fossil fuels in 2020. If advanced technologies that boost efficiency and environmental performance can be successfully developed and deployed, the US can continue to depend upon its rich resources of fossil fuels.
Wang, Yuanfeng; Christley, Scott; Mjolsness, Eric; Xie, Xiaohui
2010-07-21
Stochastic effects can be important for the behavior of processes involving small population numbers, so the study of stochastic models has become an important topic in the burgeoning field of computational systems biology. However, analysis techniques for stochastic models have tended to lag behind their deterministic cousins due to the heavier computational demands of the statistical approaches for fitting the models to experimental data. There is a continuing need for more effective and efficient algorithms. In this article we focus on the parameter inference problem for stochastic kinetic models of biochemical reactions given discrete time-course observations of either some or all of the molecular species. We propose an algorithm for inference of kinetic rate parameters based upon maximum likelihood using stochastic gradient descent (SGD). We derive a general formula for the gradient of the likelihood function given discrete time-course observations. The formula applies to any explicit functional form of the kinetic rate laws such as mass-action, Michaelis-Menten, etc. Our algorithm estimates the gradient of the likelihood function by reversible jump Markov chain Monte Carlo sampling (RJMCMC), and gradient descent is then employed to obtain the maximum likelihood estimate of the parameter values. Furthermore, we utilize flux balance analysis and show how to automatically construct reversible jump samplers for arbitrary biochemical reaction models. We provide RJMCMC sampling algorithms for both fully observed and partially observed time-course observation data. Our methods are illustrated with two examples: a birth-death model and an auto-regulatory gene network. We find good agreement of the inferred parameters with the actual parameters in both models. The SGD method proposed in the paper presents a general framework for inferring parameters of stochastic kinetic models. The method is computationally efficient and is effective for both partially and fully observed data.
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations
A non-stochastic iterative computational method to model light propagation in turbid media
NASA Astrophysics Data System (ADS)
McIntyre, Thomas J.; Zemp, Roger J.
2015-03-01
Monte Carlo models are widely used to model light transport in turbid media, however their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
Sensitivity of footbridge vibrations to stochastic walking parameters
NASA Astrophysics Data System (ADS)
Pedersen, Lars; Frier, Christian
2010-06-01
Some footbridges are so slender that pedestrian traffic can cause excessive vibrations and serviceability problems. Design guidelines outline procedures for vibration serviceability checks, but it is noticeable that they rely on the assumption that the action is deterministic, although in fact it is stochastic, as different pedestrians generate different dynamic forces. For serviceability checks of footbridge designs it would seem reasonable to consider modelling the stochastic nature of the main parameters describing the excitation, such as, for instance, the load amplitude and the step frequency of the pedestrian. A stochastic modelling approach is adopted for this paper and it facilitates quantifying the probability of exceeding various vibration levels, which is useful in a discussion of serviceability of a footbridge design. However, estimates of statistical distributions of footbridge vibration levels to walking loads might be influenced by the models assumed for the parameters of the load model (the walking parameters). The paper explores how sensitive estimates of the statistical distribution of vertical footbridge response are to various stochastic assumptions for the walking parameters. The basis for the study is a literature review identifying different suggestions as to how the stochastic nature of these parameters may be modelled, and a parameter study examines how the different models influence estimates of the statistical distribution of footbridge vibrations. By neglecting scatter in some of the walking parameters, the significance of modelling the various walking parameters stochastically rather than deterministically is also investigated, providing insight into which modelling efforts need to be made for arriving at reliable estimates of statistical distributions of footbridge vibrations. The studies for the paper are based on numerical simulations of footbridge responses and on the use of Monte Carlo simulations for modelling the stochastic nature of the walking parameters.
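The Monte Carlo approach described here can be illustrated with a deliberately simplified sketch: sample the walking parameters from assumed distributions, compute a single-mode steady-state response, and estimate an exceedance probability. All distributions, modal parameters, and thresholds below are hypothetical stand-ins, not the paper's values:

```python
import math
import random

def response_amplitude(f_step, load_amp, f_n=2.0, zeta=0.005, m=40000.0):
    """Steady-state amplitude [m] of a single footbridge mode under a
    harmonic pedestrian load at the step frequency (illustrative SDOF model)."""
    r = f_step / f_n                       # frequency ratio
    k = m * (2.0 * math.pi * f_n) ** 2     # modal stiffness
    return (load_amp / k) / math.sqrt((1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2)

def exceedance_probability(threshold, n_sim, rng):
    """Monte Carlo estimate of P(response amplitude > threshold) when the
    walking parameters are treated stochastically (assumed distributions)."""
    count = 0
    for _ in range(n_sim):
        f_step = rng.gauss(1.99, 0.17)      # step frequency [Hz], assumed normal
        load_amp = rng.gauss(280.0, 70.0)   # dynamic load amplitude [N], assumed
        if response_amplitude(f_step, load_amp) > threshold:
            count += 1
    return count / n_sim
```

Comparing exceedance probabilities across alternative parameter distributions is exactly the kind of sensitivity study the abstract describes.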
Research in Stochastic Processes.
1982-10-31
… locally convex spaces is studied. We obtain a general form of convergent p-cylindrical martingales in barrelled spaces. Using the locally convex space … topology of certain Orlicz and Lorentz spaces. Reference: 1. Z. Suchanecki and A. Weron, Decomposability of p-cylindrical martingales, Center for Stochastic
Stochastic Local Distinguishability
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Somshubhro; Roy, Anirban; Walgate, Jonathan
2007-03-01
We pose the question, ``when is globally available information also locally available?'', formally as the problem of local state discrimination, and show that the deep qualitative link between local distinguishability and entanglement lies at the level of stochastic rather than deterministic local protocols. We restrict our attention to sets of mutually orthogonal pure quantum states. We define a set of states |ψi> as being stochastically locally distinguishable if and only if there is a LOCC protocol whereby the parties can conclusively identify a member of the set with some nonzero probability. If a set is stochastically locally distinguishable, then the complete global information is potentially locally available. If not, the physical information encoded by the system can never be completely locally exposed. Our results hold for all orthogonal quantum states regardless of their dimensionality or multipartiality. First, we prove that entanglement is a necessary property of any system whose total global information can never be locally accessed. Second, entangled states that form part of an orthogonal basis can never be locally singled out; completely entangled bases are always stochastically locally indistinguishable. Third, we prove that any set of three orthogonal states is stochastically locally distinguishable.
ERIC Educational Resources Information Center
Wolff, Hans
This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
Research in Stochastic Processes
1988-08-31
The work of Professors Dawson and Gorostiza is concerned with obtaining a Langevin equation for the fluctuation limit of a … its uniqueness established. Reference: 1. D.A. Dawson and L.G. Gorostiza, Generalized solutions of a class of nuclear space valued stochastic
Stochastic decentralized systems
NASA Astrophysics Data System (ADS)
Barfoot, Timothy David
Fundamental aspects of decentralized systems are considered from a control perspective. The stochastic framework afforded by Markov systems is presented as a formal setting in which to study decentralized systems. A stochastic algebra is introduced which allows Markov systems to be considered in matrix format but also strikes an important connection to the classic linear system originally studied by Kalman [1960]. The process of decentralization is shown to impose constraints on observability and controllability of a system. However, it is argued that communicating decentralized controllers can implement any control law possible with a centralized controller. Communication is shown to serve a dual role, both enabling sensor data to be shared and actions to be coordinated. The viabilities of these two types of communication are tested on a real network of mobile robots where they are found to be successful at a variety of tasks. Action coordination is reframed as a decentralized decision making process whereupon stochastic cellular automata (SCA) are introduced as a model. Through studies of SCA it is found that coordination in a group of arbitrarily and sparsely connected agents is possible using simple rules. The resulting stochastic mechanism may be immediately used as a practical decentralized decision making tool (it is tested on a group of mobile robots) but, it furthermore provides insight into the general features of self-organizing systems.
Tollestrup, A.V.; Dugan, G
1983-12-01
Major headings in this review include: proton sources; antiproton production; antiproton sources and Liouville, the role of the Debuncher; transverse stochastic cooling, time domain; the accumulator; frequency domain; pickups and kickers; Fokker-Planck equation; calculation of constants in the Fokker-Planck equation; and beam feedback. (GHT)
Variational and Diffusion Monte Carlo Approaches to the Nuclear Few- and Many-Body Problem
NASA Astrophysics Data System (ADS)
Pederiva, Francesco; Roggero, Alessandro; Schmidt, Kevin E.
We review Quantum Monte Carlo methods, a class of stochastic methods for solving the many-body Schrödinger equation for an arbitrary Hamiltonian. The basic elements of the stochastic integration theory are first presented, followed by their application to the variational solution of the quantum many-body problem. Projection algorithms are then introduced, beginning with a formulation in coordinate space for central potentials, in order to illustrate the fundamental ideas. The extension to Hamiltonians with an explicit dependence on the spin-isospin degrees of freedom is then presented by making use of auxiliary fields (Auxiliary Field Diffusion Monte Carlo, AFDMC). Finally, we present the Configuration Interaction Monte Carlo (CIMC) algorithm, a method to compute the ground state of general, local or non-local, Hamiltonians based on configuration-space sampling.
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved superior in sampling efficiency to its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The tests demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
Edgeworth expansions of stochastic trading time
NASA Astrophysics Data System (ADS)
Decamps, Marc; De Schepper, Ann
2010-08-01
Under most local and stochastic volatility models the underlying forward is assumed to be a positive function of a time-changed Brownian motion. It relates nicely the implied volatility smile to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.
Detecting synchronization in coupled stochastic ecosystem networks
NASA Astrophysics Data System (ADS)
Kouvaris, N.; Provata, A.; Kugiumtzis, D.
2010-01-01
Instantaneous phase difference, synchronization index and mutual information are considered in order to detect phase transitions, collective behaviours and synchronization phenomena that emerge for different levels of diffusive and reactive activity in stochastic networks. The network under investigation is a spatial 2D lattice which serves as a substrate for Lotka-Volterra dynamics with 3rd order nonlinearities. Kinetic Monte Carlo simulations demonstrate that the system spontaneously organizes into a number of asynchronous local oscillators, when only nearest neighbour interactions are considered. In contrast, the oscillators can be correlated, phase synchronized and completely synchronized when introducing different interactivity rules (diffusive or reactive) for nearby and distant species. The quantitative measures of synchronization show that long distance diffusion coupling induces phase synchronization after a well defined transition point, while long distance reaction coupling induces smeared phase synchronization.
Stochastic Inversion of 2D Magnetotelluric Data
Chen, Jinsong
2010-07-01
The algorithm inverts 2D magnetotelluric (MT) data based on a sharp-boundary parametrization within a Bayesian framework. We treat the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward-simulate frequency-domain MT responses of the 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov chain Monte Carlo (MCMC) sampling. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, supercomputer, multi-platform, workstation. Software requirements: C and Fortran. Operating systems: Linux/Unix or Windows.
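The MCMC exploration of the posterior can be illustrated with a generic random-walk Metropolis sketch on a one-dimensional toy "resistivity" posterior. The actual algorithm samples interface geometry and resistivity jointly through a forward solver; the Gaussian target below is purely illustrative:

```python
import math
import random

def metropolis(log_post, x0, step, n_iter, rng):
    """Random-walk Metropolis sampler: propose x' = x + step * N(0, 1),
    accept with probability min(1, post(x') / post(x))."""
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_iter):
        xp = x + step * rng.gauss(0.0, 1.0)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:   # Metropolis acceptance test
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy "posterior" over a resistivity value (ohm-m): Gaussian log-density,
# hypothetical mean 100 and standard deviation 15
log_post = lambda r: -0.5 * ((r - 100.0) / 15.0) ** 2
```

The chain's histogram after burn-in approximates the posterior, and its spread is exactly the "uncertainty information on each unknown parameter" the abstract refers to.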
Stochastic computing with biomolecular automata.
Adar, Rivka; Benenson, Yaakov; Linshiz, Gregory; Rosner, Amit; Tishby, Naftali; Shapiro, Ehud
2004-07-06
Stochastic computing has a broad range of applications, yet electronic computers realize its basic step, stochastic choice between alternative computation paths, in a cumbersome way. Biomolecular computers use a different computational paradigm and hence afford novel designs. We constructed a stochastic molecular automaton in which stochastic choice is realized by means of competition between alternative biochemical pathways, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. Programmable and autonomous stochastic molecular automata have been shown to perform direct analysis of disease-related molecular indicators in vitro and may have the potential to provide in situ medical diagnosis and cure.
Phylogenetic Stochastic Mapping Without Matrix Exponentiation
Irvahn, Jan; Minin, Vladimir N.
2014-01-01
Phylogenetic stochastic mapping is a method for reconstructing the history of trait changes on a phylogenetic tree relating the species/organisms carrying the trait. State-of-the-art methods assume that the trait evolves according to a continuous-time Markov chain (CTMC) and work well for small state spaces. The computations slow down considerably for larger state spaces (e.g., the space of codons), because current methodology relies on exponentiating CTMC infinitesimal rate matrices, an operation whose computational complexity grows as the cube of the CTMC state-space size. In this work, we introduce a new approach, based on a CTMC technique called uniformization, which does not use matrix exponentiation for phylogenetic stochastic mapping. Our method is based on a new Markov chain Monte Carlo (MCMC) algorithm that targets the distribution of trait histories conditional on the trait data observed at the tips of the tree. The computational complexity of our MCMC method grows as the square of the CTMC state-space size. Moreover, in contrast to competing matrix exponentiation methods, if the rate matrix is sparse, we can leverage this sparsity and increase the computational efficiency of our algorithm further. Using simulated data, we illustrate advantages of our MCMC algorithm and investigate how large the state space needs to be for our method to outperform matrix exponentiation approaches. We show that even on the moderately large state space of codons our MCMC method can be significantly faster than currently used matrix exponentiation methods. PMID:24918812
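The uniformization identity underlying the method, P(t) = exp(Qt) = Σ_n e^(−Ωt) (Ωt)^n / n! · R^n with R = I + Q/Ω and Ω ≥ max_i |q_ii|, can be checked with a small sketch on a toy two-state CTMC (this is not the phylogenetic sampler itself, which samples trait histories conditional on tip data):

```python
import math

def uniformized_transition_probs(Q, t, n_terms=60):
    """Compute P(t) = exp(Qt) via uniformization: a Poisson-weighted sum of
    powers of the stochastic matrix R = I + Q/Omega, with no matrix exponential."""
    m = len(Q)
    omega = max(-Q[i][i] for i in range(m)) or 1.0
    R = [[(1.0 if i == j else 0.0) + Q[i][j] / omega for j in range(m)]
         for i in range(m)]
    term = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]  # R^0
    w = math.exp(-omega * t)                   # Poisson weight for n = 0
    P = [[w * term[i][j] for j in range(m)] for i in range(m)]
    for n in range(1, n_terms):
        term = [[sum(term[i][k] * R[k][j] for k in range(m)) for j in range(m)]
                for i in range(m)]             # term = R^n
        w *= omega * t / n                     # Poisson weight for n
        for i in range(m):
            for j in range(m):
                P[i][j] += w * term[i][j]
    return P
```

For a two-state chain with rates 0→1 of a and 1→0 of b, the exact answer P00(t) = b/(a+b) + a/(a+b)·e^(−(a+b)t) provides a direct check on the truncated series.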
Simulating stochastic dynamics using large time steps.
Corradini, O; Faccioli, P; Orland, H
2009-12-01
We present an approach to investigate the long-time stochastic dynamics of multidimensional classical systems, in contact with a heat bath. When the potential energy landscape is rugged, the kinetics displays a decoupling of short- and long-time scales and both molecular dynamics and Monte Carlo (MC) simulations are generally inefficient. Using a field theoretic approach, we perform analytically the average over the short-time stochastic fluctuations. This way, we obtain an effective theory, which generates the same long-time dynamics of the original theory, but has a lower time-resolution power. Such an approach is used to develop an improved version of the MC algorithm, which is particularly suitable to investigate the dynamics of rare conformational transitions. In the specific case of molecular systems at room temperature, we show that elementary integration time steps used to simulate the effective theory can be chosen a factor approximately 100 larger than those used in the original theory. Our results are illustrated and tested on a simple system, characterized by a rugged energy landscape.
Wollaber, Allan Benton
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
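The "simple example: estimating π" from the outline is the classic hit-or-miss construction: sample points uniformly in the unit square and count the fraction landing inside the quarter circle, which converges to π/4 by the Law of Large Numbers. A minimal sketch:

```python
import random

def estimate_pi(n_samples, rng):
    """Hit-or-miss Monte Carlo estimate of pi: the fraction of uniform
    points in the unit square with x^2 + y^2 <= 1 approaches pi/4."""
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n_samples
```

By the Central Limit Theorem the error shrinks like 1/sqrt(n_samples), which is why quadrupling the sample count only halves the statistical uncertainty.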
Linear-scaling and parallelisable algorithms for stochastic quantum chemistry
NASA Astrophysics Data System (ADS)
Booth, George H.; Smart, Simon D.; Alavi, Ali
2014-07-01
For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.
Stochastic lag time in nucleated linear self-assembly
NASA Astrophysics Data System (ADS)
Tiwari, Nitin S.; van der Schoot, Paul
2016-06-01
Protein aggregation is of great importance in biology, e.g., in amyloid fibrillation. The aggregation processes that occur at the cellular scale must be highly stochastic in nature because of the statistical number fluctuations that arise on account of the small system size at the cellular scale. We study the nucleated reversible self-assembly of monomeric building blocks into polymer-like aggregates using the method of kinetic Monte Carlo. Kinetic Monte Carlo, being inherently stochastic, allows us to study the impact of fluctuations on the polymerization reactions. One of the most important characteristic features in this kind of problem is the existence of a lag phase before self-assembly takes off, which is what we focus attention on. We study the associated lag time as a function of system size and kinetic pathway. We find that the leading order stochastic contribution to the lag time before polymerization commences is inversely proportional to the system volume for large-enough system size for all nine reaction pathways tested. Finite-size corrections to this do depend on the kinetic pathway.
Monte Carlo eikonal scattering
NASA Astrophysics Data System (ADS)
Gibbs, W. R.; Dedonder, J. P.
2012-08-01
Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei, including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range, using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation and with realistic densities, for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles, which vary with the nuclear pair and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm⁻¹.
Stochastic multiscale modeling of polycrystalline materials
NASA Astrophysics Data System (ADS)
Wen, Bin
Mechanical properties of engineering materials are sensitive to the underlying random microstructure. Quantification of mechanical property variability induced by microstructure variation is essential for the prediction of extreme properties and microstructure-sensitive design of materials. Recent advances in high throughput characterization of polycrystalline microstructures have resulted in huge data sets of microstructural descriptors and image snapshots. To utilize these large scale experimental data for computing the resulting variability of macroscopic properties, appropriate mathematical representation of microstructures is needed. By exploring the space containing all admissible microstructures that are statistically similar to the available data, one can estimate the distribution/envelope of possible properties by employing efficient stochastic simulation methodologies along with robust physics-based deterministic simulators. The focus of this thesis is on the construction of low-dimensional representations of random microstructures and the development of efficient physics-based simulators for polycrystalline materials. By adopting appropriate stochastic methods, such as Monte Carlo and Adaptive Sparse Grid Collocation methods, the variability of microstructure-sensitive properties of polycrystalline materials is investigated. The primary outcomes of this thesis include: (1) Development of data-driven reduced-order representations of microstructure variations to construct the admissible space of random polycrystalline microstructures. (2) Development of accurate and efficient physics-based simulators for the estimation of material properties based on mesoscale microstructures. (3) Investigating property variability of polycrystalline materials using efficient stochastic simulation methods in combination with the above two developments. The uncertainty quantification framework developed in this work integrates information science and materials science, and
A User’s Manual for MASH 1.0 - A Monte Carlo Adjoint Shielding Code System
1992-03-01
INTRODUCTION TO MORSE: The Multigroup Oak Ridge Stochastic Experiment code (MORSE) is a multipurpose neutron and gamma-ray transport Monte Carlo code ... in the energy transfer process. Thus, these multigroup cross sections have the same format for both neutrons and gamma rays. In addition, the ... multigroup cross sections in a Monte Carlo code means that the effort required to produce cross-section libraries is reduced. Coupled neutron gamma-ray cross ...
Stochastic Modelling of Shallow Water Flows
NASA Astrophysics Data System (ADS)
Horritt, M. S.
2002-05-01
The application of computational fluid dynamics approaches to modelling shallow water flows in the environment is hindered by the uncertainty inherent to natural landforms, vegetation and processes. A stochastic approach to modelling is therefore required, but this has previously only been attempted through computationally intensive Monte Carlo methods. An efficient second order perturbation method is outlined in this presentation, whereby the governing equations are first discretised to form a non-linear system mapping model parameters to predictions. This system is then approximated using Taylor expansions to derive tractable expressions for the model prediction statistics. The approach is tested on a simple 1-D model of shallow water flow over uncertain topography, verified against ensembles of Monte Carlo simulations and approximate solutions derived by Fourier methods. Criteria for the applicability of increasing orders of Taylor expansions are derived as a function of flow depth and topographic variability. The results show that non-linear effects are important for even small topographic perturbations, and the second order perturbation method is required to derive model prediction statistics. This approximation holds well even as the flow depth tends towards the topographic roughness. The model-predicted statistics are also well described by a Gaussian approximation, so only first and second moments need be calculated, even if these are significantly different from the values predicted by a linear approximation. The implications for more sophisticated (2-D, advective etc.) models are discussed.
Stochastic ice stream dynamics
Bertagni, Matteo Bernard; Ridolfi, Luca
2016-01-01
Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution. PMID:27457960
Blaskiewicz, M.; Brennan, J. M.; Cameron, P.; Wei, J.
2003-05-12
Emittance growth due to Intra-Beam Scattering significantly reduces the heavy ion luminosity lifetime in RHIC. Stochastic cooling of the stored beam could improve things considerably by counteracting IBS and preventing particles from escaping the rf bucket [1]. High frequency bunched-beam stochastic cooling is especially challenging but observations of Schottky signals in the 4-8 GHz band indicate that conditions are favorable in RHIC [2]. We report here on measurements of the longitudinal beam transfer function carried out with a pickup kicker pair on loan from FNAL TEVATRON. Results imply that for ions a coasting beam description is applicable and we outline some general features of a viable momentum cooling system for RHIC.
Stochastic ice stream dynamics.
Mantelli, Elisa; Bertagni, Matteo Bernard; Ridolfi, Luca
2016-08-09
Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Brown, Forrest B.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations
Holmes-Cerfon, Miranda
2016-11-01
We study a model of rolling particles subject to stochastic fluctuations, which may be relevant in systems of nano- or microscale particles where rolling is an approximation for strong static friction. We consider the simplest possible nontrivial system: a linear polymer of three disks constrained to remain in contact and immersed in an equilibrium heat bath so the internal angle of the polymer changes due to stochastic fluctuations. We compare two cases: one where the disks can slide relative to each other and the other where they are constrained to roll, like gears. Starting from the Langevin equations with arbitrary linear velocity constraints, we use formal homogenization theory to derive the overdamped equations that describe the process in configuration space only. The resulting dynamics have the formal structure of a Brownian motion on a Riemannian or sub-Riemannian manifold, depending on if the velocity constraints are holonomic or nonholonomic. We use this to compute the trimer's equilibrium distribution with and without the rolling constraints. Surprisingly, the two distributions are different. We suggest two possible interpretations of this result: either (i) dry friction (or other dissipative, nonequilibrium forces) changes basic thermodynamic quantities like the free energy of a system, a statement that could be tested experimentally, or (ii) as a lesson in modeling rolling or friction more generally as a velocity constraint when stochastic fluctuations are present. In the latter case, we speculate there could be a "roughness" entropy whose inclusion as an effective force could compensate the constraint and preserve classical Boltzmann statistics. Regardless of the interpretation, our calculation shows the word "rolling" must be used with care when stochastic fluctuations are present.
Stochastic Thermodynamics of Learning
NASA Astrophysics Data System (ADS)
Goldt, Sebastian; Seifert, Udo
2017-01-01
Virtually every organism gathers information about its noisy environment and builds models from those data, mostly using neural networks. Here, we use stochastic thermodynamics to analyze the learning of a classification rule by a neural network. We show that the information acquired by the network is bounded by the thermodynamic cost of learning and introduce a learning efficiency η ≤ 1. We discuss the conditions for optimal learning and analyze Hebbian learning in the thermodynamic limit.
Dorogovtsev, Andrei A
2010-06-29
For sets in a Hilbert space the concept of quadratic entropy is introduced. It is shown that this entropy is finite for the range of a stochastic flow of Brownian particles on R. This implies, in particular, the fact that the total time of the free travel in the Arratia flow of all particles that started from a bounded interval is finite. Bibliography: 10 titles.
Methodology for Stochastic Modeling.
1985-01-01
AD-A155 851: Methodology for Stochastic Modeling (U). Army Materiel Systems Analysis Activity, Aberdeen Proving Ground, MD; H. E. Cohen, January 1985. Report AMSAA-TR-41. Keywords: autoregression models, moving average models, ARMA, adaptive modeling, covariance methods, singular value decomposition, order determination, rational ...
Stochastic Quantization of Instantons
NASA Astrophysics Data System (ADS)
Grandati, Y.; Bérard, A.; Grangé, P.
1996-03-01
The method of Parisi and Wu to quantize classical fields is applied to instanton solutions ϕ_I of Euclidean non-linear theory in one dimension. The solution ϕ_ε of the corresponding Langevin equation is built through a singular perturbative expansion in ε = ℏ^(1/2) in the frame of the center of mass of the instanton, where the difference ϕ_ε − ϕ_I carries only fluctuations of the instanton form. The relevance of the method is shown for the stochastic KdV equation with uniform noise in space: the exact solution usually obtained by the inverse scattering method is retrieved easily by the singular expansion. A general diagrammatic representation of the solution is then established which makes thorough use of regrouping properties of stochastic diagrams derived in scalar field theory. Averaging over the noise and in the limit of infinite stochastic time, we obtain explicit expressions for the first two orders in ε of the perturbed instanton and of its Green function. Specializing to the sine-Gordon and ϕ⁴ models, the first anharmonic correction is obtained analytically. The calculation is carried to second order for the ϕ⁴ model, showing good convergence.
Stochastic image reconstruction for a dual-particle imaging system
NASA Astrophysics Data System (ADS)
Hamel, M. C.; Polack, J. K.; Poitrasson-Rivière, A.; Flaska, M.; Clarke, S. D.; Pozzi, S. A.; Tomanin, A.; Peerani, P.
2016-02-01
Stochastic image reconstruction has been applied to a dual-particle imaging system being designed for nuclear safeguards applications. The dual-particle imager (DPI) is a combined Compton-scatter and neutron-scatter camera capable of producing separate neutron and photon images. The stochastic origin ensembles (SOE) method was investigated as an imaging method for the DPI because only a minimal estimation of system response is required to produce images with quality that is comparable to common maximum-likelihood methods. This work contains neutron and photon SOE image reconstructions for a 252Cf point source, two mixed-oxide (MOX) fuel canisters representing point sources, and the MOX fuel canisters representing a distributed source. Simulation of the DPI using MCNPX-PoliMi is validated by comparison of simulated and measured results. Because image quality is dependent on the number of counts and iterations used, the relationship between these quantities is investigated.
Stochastic techno-economic evaluation of cellulosic biofuel pathways.
Zhao, Xin; Brown, Tristan R; Tyner, Wallace E
2015-12-01
This study evaluates the economic feasibility and stochastic dominance rank of eight cellulosic biofuel production pathways (including gasification, pyrolysis, liquefaction, and fermentation) under technological and economic uncertainty. A financial analysis based on techno-economic assessment is employed to derive net present values and breakeven prices for each pathway. Uncertainty is investigated and incorporated into fuel prices and techno-economic variables (capital cost, conversion technology yield, hydrogen cost, natural gas price, and feedstock cost) using @RISK, a Palisade Corporation software package. The results indicate that none of the eight pathways would be profitable at expected values under projected energy prices. Fast pyrolysis and hydroprocessing (FPH) has the lowest breakeven fuel price at 3.11 $/gallon of gasoline equivalent (0.82 $/liter of gasoline equivalent). At the projected energy prices, FPH investors could expect a 59% probability of loss. The stochastic dominance ranking is based on return on investment. Most risk-averse decision makers would prefer FPH to the other pathways.
Markov chain Monte Carlo method without detailed balance.
Suwa, Hidemaro; Todo, Synge
2010-09-17
We present a specific algorithm that generally satisfies the balance condition without imposing detailed balance in Markov chain Monte Carlo. In our algorithm, the average rejection rate is minimized, and even reduced to zero in many relevant cases. The absence of detailed balance also introduces a net stochastic flow in configuration space, which further boosts the convergence. We demonstrate that the autocorrelation time of the Potts model becomes more than 6 times shorter than that of the conventional Metropolis algorithm. Based on the same concept, a bounce-free worm algorithm for generic quantum spin models is formulated as well.
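For contrast with the rejection-minimized update described above, the detailed-balance baseline the paper compares against can be sketched as a conventional Metropolis sampler for the q-state Potts model; lattice size, temperature, and sweep count below are arbitrary illustrative choices, and the Suwa-Todo update itself is not reimplemented here.

```python
import random, math

def metropolis_potts(L=8, q=4, beta=1.0, sweeps=200, seed=0):
    """Conventional Metropolis sampler for the q-state Potts model on an
    L x L periodic lattice -- the detailed-balance baseline against which
    rejection-minimized updates are compared."""
    rng = random.Random(seed)
    s = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]

    def site_energy(i, j, val):
        # -1 per satisfied bond with the four nearest neighbours
        nbrs = [s[(i+1) % L][j], s[(i-1) % L][j],
                s[i][(j+1) % L], s[i][(j-1) % L]]
        return -sum(1 for n in nbrs if n == val)

    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            new = rng.randrange(q)
            dE = site_energy(i, j, new) - site_energy(i, j, s[i][j])
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[i][j] = new   # accept; otherwise reject (the step minimized by the paper)
    return s
```

Every rejected proposal here wastes a step; the abstract's point is that the balance condition alone, without detailed balance, permits updates that largely eliminate such rejections.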
Electronic structure quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Bajdich, Michal; Mitas, Lubos
2009-04-01
Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of manybody quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on the electronic structure QMC, i.e., methods relevant for systems described by the electron-ion Hamiltonians. Some of the key QMC achievements include direct treatment of electron correlation, accuracy in predicting energy differences and favorable scaling in the system size. Calculations of atoms, molecules, clusters and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the highly-dimensional nodal hypersurfaces into the 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. Recently proposed trial wave functions based on Pfaffians with
Stochastic lattice model of synaptic membrane protein domains
NASA Astrophysics Data System (ADS)
Li, Yiwei; Kahraman, Osman; Haselwandter, Christoph A.
2017-05-01
Neurotransmitter receptor molecules, concentrated in synaptic membrane domains along with scaffolds and other kinds of proteins, are crucial for signal transmission across chemical synapses. In common with other membrane protein domains, synaptic domains are characterized by low protein copy numbers and protein crowding, with rapid stochastic turnover of individual molecules. We study here in detail a stochastic lattice model of the receptor-scaffold reaction-diffusion dynamics at synaptic domains that was found previously to capture, at the mean-field level, the self-assembly, stability, and characteristic size of synaptic domains observed in experiments. We show that our stochastic lattice model yields quantitative agreement with mean-field models of nonlinear diffusion in crowded membranes. Through a combination of analytic and numerical solutions of the master equation governing the reaction dynamics at synaptic domains, together with kinetic Monte Carlo simulations, we find substantial discrepancies between mean-field and stochastic models for the reaction dynamics at synaptic domains. Based on the reaction and diffusion properties of synaptic receptors and scaffolds suggested by previous experiments and mean-field calculations, we show that the stochastic reaction-diffusion dynamics of synaptic receptors and scaffolds provide a simple physical mechanism for collective fluctuations in synaptic domains, the molecular turnover observed at synaptic domains, key features of the observed single-molecule trajectories, and spatial heterogeneity in the effective rates at which receptors and scaffolds are recycled at the cell membrane. Our work sheds light on the physical mechanisms and principles linking the collective properties of membrane protein domains to the stochastic dynamics that rule their molecular components.
Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs
Infanger, G.
1993-11-01
The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.
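The importance-sampling idea underlying the cost and cut estimates can be illustrated on a toy expectation; the shifted-Gaussian proposal below is a generic textbook example of the variance-reduction principle, not the paper's sampling scheme for multi-stage programs.

```python
import random, math

def importance_estimate(n=20000, shift=3.0, seed=0):
    """Toy importance-sampling estimate of P(X > 3) for X ~ N(0,1),
    sampling from a proposal N(shift, 1) concentrated in the rare region.
    Naive Monte Carlo would need millions of draws for a comparable error."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(shift, 1.0)                # draw from the proposal
        if y > 3.0:
            # likelihood ratio phi(y) / phi_shift(y) of the two Gaussian densities
            w = math.exp(-0.5 * y * y + 0.5 * (y - shift) ** 2)
            total += w
    return total / n
```

The exact value is about 1.35e-3; with the shifted proposal, 20,000 samples already pin it down to a few percent, which is the effect the method exploits when estimating expected future costs.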
Monte Carlo approach to tissue-cell populations
NASA Astrophysics Data System (ADS)
Drasdo, D.; Kree, R.; McCaskill, J. S.
1995-12-01
We describe a stochastic dynamics of tissue cells with special emphasis on epithelial cells and on fibroblasts and fibrocytes of the connective tissue. Pattern formation and growth characteristics of such cell populations in culture are investigated numerically by Monte Carlo simulations for quasi-two-dimensional systems of cells. A number of quantitative predictions are obtained which may be confronted with experimental results. Furthermore, we introduce several biologically motivated variants of our basic model and briefly discuss the simulation of two-dimensional analogs of two complex processes in tissues: the growth of a sarcoma across an epithelial boundary and the wound healing of a skin cut. Compared to other approaches, we find the Monte Carlo approach to tissue growth and structure to be particularly simple and flexible. It allows for a hierarchy of models reaching from a global description of birth-death processes to very specific features of intracellular dynamics. (c) 1995 The American Physical Society
A retrodictive stochastic simulation algorithm
Vaughan, T. G.; Drummond, P. D.; Drummond, A. J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
Fast and Efficient Stochastic Optimization for Analytic Continuation
Bao, Feng; Zhang, Guannan; Webster, Clayton G; Tang, Yanfei; Scarola, Vito; Summers, Michael Stuart; Maier, Thomas A
2016-09-28
The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra against those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is high, we find that FESOM is able to resolve fine structure in more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. We therefore believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
A rigorous framework for multiscale simulation of stochastic cellular networks
Chevalier, Michael W.; El-Samad, Hana
2009-01-01
Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-cell variability even in clonal populations. Stochastic biochemical networks are modeled as continuous time discrete state Markov processes whose probability density functions evolve according to a chemical master equation (CME). The CME is not solvable but for the simplest cases, and one has to resort to kinetic Monte Carlo techniques to simulate the stochastic trajectories of the biochemical network under study. A commonly used such algorithm is the stochastic simulation algorithm (SSA). Because it tracks every biochemical reaction that occurs in a given system, the SSA presents computational difficulties especially when there is a vast disparity in the timescales of the reactions or in the number of molecules involved in these reactions. This is common in cellular networks, and many approximation algorithms have evolved to alleviate the computational burdens of the SSA. Here, we present a rigorously derived modified CME framework based on the partition of a biochemically reacting system into restricted and unrestricted reactions. Although this modified CME decomposition is as analytically difficult as the original CME, it can be naturally used to generate a hierarchy of approximations at different levels of accuracy. Most importantly, some previously derived algorithms are demonstrated to be limiting cases of our formulation. We apply our methods to biologically relevant test systems to demonstrate their accuracy and efficiency. PMID:19673546
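The stochastic simulation algorithm (SSA) that the framework above builds on can be sketched in a few lines; the birth-death example and its rate constants are illustrative assumptions, not one of the paper's test systems.

```python
import random

def ssa(rates, stoich, x0, t_max, seed=0):
    """Minimal Gillespie stochastic simulation algorithm (SSA).

    rates(x) -> list of reaction propensities for state x
    stoich   -> list of state-change vectors, one per reaction
    Returns the trajectory as a list of (time, state) pairs."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    traj = [(t, tuple(x))]
    while t < t_max:
        a = rates(x)
        a0 = sum(a)
        if a0 == 0:
            break                         # no reaction can fire
        t += rng.expovariate(a0)          # time to the next reaction
        r, acc, j = rng.random() * a0, 0.0, 0
        for j, aj in enumerate(a):        # pick reaction j with prob a[j]/a0
            acc += aj
            if r < acc:
                break
        x = [xi + d for xi, d in zip(x, stoich[j])]
        traj.append((t, tuple(x)))
    return traj

# Example: birth-death process  0 -> A (rate 1.0),  A -> 0 (rate 0.1 per molecule)
traj = ssa(lambda x: [1.0, 0.1 * x[0]], [[+1], [-1]], [0], t_max=50.0)
```

Because the algorithm tracks every reaction event, its cost grows with the total propensity, which is exactly the burden the partitioned CME framework in the abstract is designed to relieve.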
Stochasticity in Colonial Growth Dynamics of Individual Bacterial Cells
Lianou, Alexandra
2013-01-01
Conventional bacterial growth studies rely on large bacterial populations without considering the individual cells. Individual cells, however, can exhibit marked behavioral heterogeneity. Here, we present experimental observations on the colonial growth of 220 individual cells of Salmonella enterica serotype Typhimurium using time-lapse microscopy videos. We found a highly heterogeneous behavior. Some cells did not grow, showing filamentation or lysis before division. Cells that were able to grow and form microcolonies showed highly diverse growth dynamics. The quality of the videos allowed for counting the cells over time and estimating the kinetic parameters lag time (λ) and maximum specific growth rate (μmax) for each microcolony originating from a single cell. To interpret the observations, the variability of the kinetic parameters was characterized using appropriate probability distributions and introduced to a stochastic model that allows for taking into account heterogeneity using Monte Carlo simulation. The model provides stochastic growth curves demonstrating that growth of single cells or small microbial populations is a pool of events each one of which has its own probability to occur. Simulations of the model illustrated how the apparent variability in population growth gradually decreases with increasing initial population size (N0). For bacterial populations with N0 of >100 cells, the variability is almost eliminated and the system seems to behave deterministically, even though the underlying law is stochastic. We also used the model to demonstrate the effect of the presence and extent of a nongrowing population fraction on the stochastic growth of bacterial populations. PMID:23354712
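The single-cell-to-population argument above can be illustrated with a small Monte Carlo sketch: each founder cell draws its own lag time (λ) and growth rate (μmax) from assumed normal distributions, and the variability of the total population shrinks as the initial cell number N0 grows. The distributions, parameter values, and function names here are hypothetical placeholders, not the fitted values of the study.

```python
import math, random, statistics

def population_size(n0, hours, seed=0):
    """Total population after `hours`, summing exponential growth of n0
    founder cells, each with its own randomly drawn lag and growth rate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n0):
        lag = rng.gauss(2.0, 0.5)            # lag time lambda (h), assumed distribution
        mu = max(0.0, rng.gauss(1.0, 0.2))   # max specific growth rate (1/h)
        total += math.exp(mu * max(0.0, hours - lag))
    return total

def cv(n0, runs=200):
    """Coefficient of variation of the final population across MC runs."""
    sizes = [population_size(n0, 6.0, seed=s) for s in range(runs)]
    return statistics.stdev(sizes) / statistics.mean(sizes)

# cv(1) is large; cv(100) is roughly an order of magnitude smaller,
# mirroring the abstract's observation that growth appears deterministic
# for large initial populations.
```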
Stochastic optimization of multireservoir systems via reinforcement learning
NASA Astrophysics Data System (ADS)
Lee, Jin-Hee; Labadie, John W.
2007-11-01
Although several variants of stochastic dynamic programming have been applied to optimal operation of multireservoir systems, they have been plagued by a high-dimensional state space and the inability to accurately incorporate the stochastic environment as characterized by temporally and spatially correlated hydrologic inflows. Reinforcement learning has emerged as an effective approach to solving sequential decision problems by combining concepts from artificial intelligence, cognitive science, and operations research. A reinforcement learning system has a mathematical foundation similar to dynamic programming and Markov decision processes, with the goal of maximizing the long-term reward or returns as conditioned on the state of the system environment and the immediate reward obtained from operational decisions. Reinforcement learning can include Monte Carlo simulation where transition probabilities and rewards are not explicitly known a priori. The Q-Learning method in reinforcement learning is demonstrated on the two-reservoir Geum River system, South Korea, and is shown to outperform implicit stochastic dynamic programming and sampling stochastic dynamic programming methods.
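The Q-learning update at the heart of such a system is compact. Below is a minimal tabular sketch on a hypothetical two-state, two-action problem (a stand-in for reservoir storage states and release decisions, not the Geum River model; all rewards and transitions are invented for illustration).

```python
import random

def q_learning(steps=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy MDP: in state 0, action 1 earns reward 1
    and stays in state 0; every other choice earns 0 and leads to state 1."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]
    s = 0
    for _ in range(steps):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        r = 1.0 if (s == 0 and a == 1) else 0.0
        s_next = 0 if (s == 0 and a == 1) else 1
        # Q-learning update bootstraps on the greedy value of the next state
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next
        if s == 1 and rng.random() < 0.5:
            s = 0  # random restart so the rewarding state keeps being visited
    return Q
```

No transition probabilities are supplied a priori; the sampled transitions themselves drive the estimate, which is the Monte Carlo aspect the abstract highlights.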
Stochastic transitions in a bistable reaction system on the membrane
Kochańczyk, Marek; Jaruszewicz, Joanna; Lipniacki, Tomasz
2013-01-01
Transitions between steady states of a multi-stable stochastic system in the perfectly mixed chemical reactor are possible only because of stochastic switching. In realistic cellular conditions, where diffusion is limited, transitions between steady states can also follow from the propagation of travelling waves. Here, we study the interplay between the two modes of transition for a prototype bistable system of kinase–phosphatase interactions on the plasma membrane. Within microscopic kinetic Monte Carlo simulations on the hexagonal lattice, we observed that for finite diffusion the behaviour of the spatially extended system differs qualitatively from the behaviour of the same system in the well-mixed regime. Even when a small isolated subcompartment remains mostly inactive, the chemical travelling wave may propagate, leading to the activation of a larger compartment. The activating wave can be induced after a small subdomain is activated as a result of a stochastic fluctuation. Such a spontaneous onset of activity is radically more probable in subdomains characterized by slower diffusion. Our results show that a local immobilization of substrates can lead to the global activation of membrane proteins by the mechanism that involves stochastic fluctuations followed by the propagation of a semi-deterministic travelling wave. PMID:23635492
Key parameter optimization and analysis of stochastic seismic inversion
NASA Astrophysics Data System (ADS)
Huang, Zhe-Yuan; Gan, Li-Deng; Dai, Xiao-Feng; Li, Ling-Gao; Wang, Jun
2012-03-01
Stochastic seismic inversion is the combination of geostatistics and seismic inversion technology which integrates information from seismic records, well logs, and geostatistics into a posterior probability density function (PDF) of subsurface models. The Markov chain Monte Carlo (MCMC) method is used to sample the posterior PDF and the subsurface model characteristics can be inferred by analyzing a set of the posterior PDF samples. In this paper, we first introduce the stochastic seismic inversion theory, discuss and analyze the four key parameters: seismic data signal-to-noise ratio (S/N), variogram, the posterior PDF sample number, and well density, and propose the optimum selection of these parameters. The analysis results show that seismic data S/N adjusts the compromise between the influence of the seismic data and geostatistics on the inversion results, the variogram controls the smoothness of the inversion results, the posterior PDF sample number determines the reliability of the statistical characteristics derived from the samples, and well density influences the inversion uncertainty. Finally, the comparison between the stochastic seismic inversion and the deterministic model-based seismic inversion indicates that the stochastic seismic inversion can provide more reliable information of the subsurface character.
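A bare-bones version of the MCMC sampling step reads as follows. The one-dimensional Gaussian log-posterior below is purely illustrative, a stand-in for the real seismic posterior PDF, and the function names are assumptions of this sketch.

```python
import math, random

def metropolis(log_pdf, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis sampler: propose a Gaussian step and accept
    it with probability min(1, p(y)/p(x)), evaluated in log space."""
    rng = random.Random(seed)
    x, lp = x0, log_pdf(x0)
    chain = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, scale)
        lpy = log_pdf(y)
        if math.log(rng.random()) < lpy - lp:
            x, lp = y, lpy
        chain.append(x)
    return chain

# Sample a standard-normal "posterior": log p(x) = -x^2/2 up to a constant.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

Statistics such as the sample mean and variance of `chain` play the role of the "set of posterior PDF samples" analyzed in the abstract; more samples give more reliable statistics, which is exactly the sample-number trade-off discussed there.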
Fast and Efficient Stochastic Optimization for Analytic Continuation
Bao, Feng; Zhang, Guannan; Webster, Clayton G; Tang, Yanfei; Scarola, Vito; Summers, Michael Stuart; Maier, Thomas A
2016-09-28
The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is strong, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
Fast and efficient stochastic optimization for analytic continuation
NASA Astrophysics Data System (ADS)
Bao, F.; Tang, Y.; Summers, M.; Zhang, G.; Webster, C.; Scarola, V.; Maier, T. A.
2016-09-01
The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000), 10.1103/PhysRevB.62.6317], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. We generally find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is strong, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. We therefore believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
Multi-scenario modelling of uncertainty in stochastic chemical systems
Evans, R. David; Ricardez-Sandoval, Luis A.
2014-09-15
Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems as they are stochastic in nature, and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state comprised of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution, and the system under investigation. -- Highlights: •A method to model uncertainty on stochastic systems was developed. •The method is based on the Chemical Master Equation. •Uncertainty in an isomerization reaction and a gene regulation network was modelled. •Effects were significant and dependent on the uncertain input and reaction system. •The model was computationally more efficient than Kinetic Monte Carlo.
Monte Carlo fluorescence microtomography
NASA Astrophysics Data System (ADS)
Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge
2011-07-01
Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light would significantly degrade the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probe accurately and reliably.
LMC: Logarithmantic Monte Carlo
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2017-06-01
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
Christiansen, David W.; Karnesky, Richard A.; Leggett, Robert D.; Baker, Ronald B.
1989-10-03
A fuel pin for a liquid metal nuclear reactor is provided. The fuel pin includes a generally cylindrical cladding member with metallic fuel material disposed therein. At least a portion of the fuel material extends radially outwardly to the inner diameter of the cladding member to promote efficient transfer of heat to the reactor coolant system. The fuel material defines at least one void space therein to facilitate swelling of the fuel material during fission.
Christiansen, David W.; Karnesky, Richard A.; Leggett, Robert D.; Baker, Ronald B.
1989-01-01
A fuel pin for a liquid metal nuclear reactor is provided. The fuel pin includes a generally cylindrical cladding member with metallic fuel material disposed therein. At least a portion of the fuel material extends radially outwardly to the inner diameter of the cladding member to promote efficient transfer of heat to the reactor coolant system. The fuel material defines at least one void space therein to facilitate swelling of the fuel material during fission.
Christiansen, D.W.; Karnesky, R.A.; Leggett, R.D.; Baker, R.B.
1987-11-24
A fuel pin for a liquid metal nuclear reactor is provided. The fuel pin includes a generally cylindrical cladding member with metallic fuel material disposed therein. At least a portion of the fuel material extends radially outwardly to the inner diameter of the cladding member to promote efficient transfer of heat to the reactor coolant system. The fuel material defines at least one void space therein to facilitate swelling of the fuel material during fission.
Stochastic Optimal Scheduling of Residential Appliances with Renewable Energy Sources
Wu, Hongyu; Pratt, Annabelle; Chakraborty, Sudipta
2015-07-03
This paper proposes a stochastic, multi-objective optimization model within a Model Predictive Control (MPC) framework, to determine the optimal operational schedules of residential appliances operating in the presence of renewable energy source (RES). The objective function minimizes the weighted sum of discomfort, energy cost, total and peak electricity consumption, and carbon footprint. A heuristic method is developed for combining different objective components. The proposed stochastic model utilizes Monte Carlo simulation (MCS) for representing uncertainties in electricity price, outdoor temperature, RES generation, water usage, and non-controllable loads. The proposed model is solved using a mixed integer linear programming (MILP) solver and numerical results show the validity of the model. Case studies show the benefit of using the proposed optimization model.
Semiparametric Stochastic Modeling of the Rate Function in Longitudinal Studies
Zhu, Bin; Taylor, Jeremy M.G.; Song, Peter X.-K.
2011-01-01
In longitudinal biomedical studies, there is often interest in the rate functions, which describe the functional rates of change of biomarker profiles. This paper proposes a semiparametric approach to model these functions as the realizations of stochastic processes defined by stochastic differential equations. These processes are dependent on the covariates of interest and vary around a specified parametric function. An efficient Markov chain Monte Carlo algorithm is developed for inference. The proposed method is compared with several existing methods in terms of goodness-of-fit and more importantly the ability to forecast future functional data in a simulation study. The proposed methodology is applied to prostate-specific antigen profiles for illustration. Supplementary materials for this paper are available online. PMID:22423170
System Design Support by Optimization Method Using Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
We propose a new optimization method based on a stochastic process. Its characteristic feature is that it obtains an approximation to the optimal solution as an expected value. Because the formulation is stochastic, a kind of Monte Carlo method is used in the numerical calculation. The method also yields the probability distribution of the design variables, since candidate design variables are generated with probability proportional to the evaluation function value. This probability distribution shows the influence of the design variables on the evaluation function value, and is therefore very useful information for system design. In this paper, we show that the proposed method is useful not only for optimization but also for system design. A flight trajectory optimization problem for a hang-glider is presented as an example of the numerical calculation.
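One way to realize "design variables generated in proportion to the evaluation function value" is a Boltzmann weighting, with the weighted mean of the samples reported as the approximate optimum. The exponential weight, temperature parameter `beta`, and function names below are assumptions of this sketch, not the paper's exact construction.

```python
import math, random

def sample_optimum(f, lo, hi, beta=50.0, n=20000, seed=0):
    """Approximate the minimizer of f on [lo, hi] as the expected value of
    uniform samples weighted by exp(-beta * f(x)): low evaluation values
    receive high weight, so the weighted mean concentrates near the optimum."""
    rng = random.Random(seed)
    w_sum, wx_sum = 0.0, 0.0
    for _ in range(n):
        x = rng.uniform(lo, hi)
        w = math.exp(-beta * f(x))
        w_sum += w
        wx_sum += w * x
    return wx_sum / w_sum

# Toy objective (x - 2)^2 on [0, 4]: the weighted mean lands near x = 2.
x_star = sample_optimum(lambda x: (x - 2.0) ** 2, 0.0, 4.0)
```

The same weighted samples also form a histogram over the design variable, which is the distributional information the abstract argues is valuable for system design beyond the optimum itself.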
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
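One of the simplest variance-reduction devices of the kind surveyed is antithetic variates. The sketch below applies it to a generic one-dimensional Monte Carlo integral; it is an illustration of the principle only, not the corrector-problem estimators of the paper.

```python
import random

def mc_plain(f, n, seed=0):
    """Crude Monte Carlo estimate of the integral of f over [0, 1]."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_antithetic(f, n, seed=0):
    """Antithetic variates: evaluate each draw u together with 1 - u, so
    the two negatively correlated errors partially cancel."""
    rng = random.Random(seed)
    pairs = n // 2
    total = 0.0
    for _ in range(pairs):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / pairs

# For f(u) = exp(u), whose exact integral is e - 1, the antithetic estimator
# has a much smaller variance across independent runs than the crude one.
```

Both estimators are unbiased; the gain is purely in the variance, which is the cost driver the abstract identifies for stochastic homogenization.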
NASA Astrophysics Data System (ADS)
Sabelfeld, K. K.; Kireeva, A. E.
2017-01-01
This paper describes the stochastic models of electron-hole recombination in inhomogeneous semiconductors in two-dimensional and three-dimensional cases, which were developed on the basis of discrete (cellular automaton) and continuous (Monte Carlo method) approaches. The mathematical model of electron-hole recombination, constructed on the basis of a system of spatially inhomogeneous nonlinear integro-differential Smoluchowski equations, is illustrated. The continuous algorithm of the Monte Carlo method and the discrete cellular automaton algorithm used for the simulation of particle recombination in semiconductors are shown.
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems.
Daigle, Bernie J; Roh, Min K; Petzold, Linda R; Niemi, Jarad
2012-05-01
A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM^2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM^2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods. This work provides a novel, accelerated version
Stochastic analysis of complex reaction networks using binomial moment equations.
Barzel, Baruch; Biham, Ofer
2012-09-01
The stochastic analysis of complex reaction networks is a difficult problem because the number of microscopic states in such systems increases exponentially with the number of reactive species. Direct integration of the master equation is thus infeasible and is most often replaced by Monte Carlo simulations. While Monte Carlo simulations are a highly effective tool, equation-based formulations are more amenable to analytical treatment and may provide deeper insight into the dynamics of the network. Here, we present a highly efficient equation-based method for the analysis of stochastic reaction networks. The method is based on the recently introduced binomial moment equations [Barzel and Biham, Phys. Rev. Lett. 106, 150602 (2011)]. The binomial moments are linear combinations of the ordinary moments of the probability distribution function of the population sizes of the interacting species. They capture the essential combinatorics of the reaction processes reflecting their stoichiometric structure. This leads to a simple and transparent form of the equations, and allows a highly efficient and surprisingly simple truncation scheme. Unlike ordinary moment equations, in which the inclusion of high order moments is prohibitively complicated, the binomial moment equations can be easily constructed up to any desired order. The result is a set of equations that enables the stochastic analysis of complex reaction networks under a broad range of conditions. The number of equations is dramatically reduced from the exponential proliferation of the master equation to a polynomial (and often quadratic) dependence on the number of reactive species in the binomial moment equations. The aim of this paper is twofold: to present a complete derivation of the binomial moment equations; to demonstrate the applicability of the moment equations for a representative set of example networks, in which stochastic effects play an important role.
Portfolio Optimization with Stochastic Dividends and Stochastic Volatility
ERIC Educational Resources Information Center
Varga, Katherine Yvonne
2015-01-01
We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…
Green function simulation of Hamiltonian lattice models with stochastic reconfiguration
NASA Astrophysics Data System (ADS)
Beccaria, M.
2000-03-01
We apply a recently proposed Green function Monte Carlo procedure to the study of Hamiltonian lattice gauge theories. This class of algorithms computes quantum vacuum expectation values by averaging over a set of suitable weighted random walkers. By means of a procedure called stochastic reconfiguration, the long-standing problem of keeping the walker population fixed without a priori knowledge of the ground state is completely solved. In the U(1)_2 model, which we choose as our theoretical laboratory, we evaluate the mean plaquette and the vacuum energy per plaquette. We find good agreement with previous works using model-dependent guiding functions for the random walkers.
A stochastic analysis of a solar heated and cooled house
NASA Astrophysics Data System (ADS)
Tanthapanichakoon, W.; Himmelblau, D. M.
1981-05-01
Monte Carlo simulation techniques have been used to characterize the stochastic responses of the components of a solar heated and cooled house. Random variables with specified ensemble means, standard deviations, and probability distributions were introduced as inputs and parameters into the model equations for the house, and the equations solved repeatedly to provide samples of the component outputs. The character of the frequency distributions of the outputs, their means and standard deviations, and time statistics are discussed as well as implications with respect to the design of similar systems.
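The sampling procedure can be condensed to a few lines: draw the random inputs from their specified ensemble distributions, run the model, and collect output statistics. The steady-state heat balance and all parameter values below are hypothetical placeholders for the actual house equations of the study.

```python
import random, statistics

def propagate(n=5000, seed=0):
    """Monte Carlo propagation of input uncertainty through a toy
    steady-state heat balance T = T_out + Q / (U * A). Returns the
    mean and standard deviation of the simulated indoor temperature."""
    rng = random.Random(seed)
    temps = []
    for _ in range(n):
        t_out = rng.gauss(10.0, 2.0)    # outdoor temperature (degC)
        q = rng.gauss(3000.0, 300.0)    # solar heat gain (W)
        ua = rng.gauss(250.0, 20.0)     # overall loss coefficient (W/K)
        temps.append(t_out + q / ua)
    return statistics.mean(temps), statistics.stdev(temps)
```

Repeating the solve for each random sample, exactly as the abstract describes, yields empirical output distributions whose means and standard deviations can inform design margins.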
Stochastic dynamics of strongly-bound magnetic vortex pairs
NASA Astrophysics Data System (ADS)
Bondarenko, A. V.; Holmgren, E.; Koop, B. C.; Descamps, T.; Ivanov, B. A.; Korenivski, V.
2017-05-01
We demonstrate that strongly-bound spin-vortex pairs exhibit pronounced stochastic behaviour. Such dynamics is due to collective magnetization states originating from purely dipolar interactions between the vortices. The resulting thermal noise exhibits telegraph-like behaviour, with random switching between different oscillation regimes observable at room temperature. The noise in the system is further studied by varying the external field and observing the related changes in the frequency of switching and the probability for different magnetic states and regimes. Monte Carlo simulations are used to replicate and explain the experimental observations.
Renormalization group and perfect operators for stochastic differential equations.
Hou, Q; Goldenfeld, N; McKane, A
2001-03-01
We develop renormalization group (RG) methods for solving partial and stochastic differential equations on coarse meshes. RG transformations are used to calculate the precise effect of small-scale dynamics on the dynamics at the mesh size. The fixed point of these transformations yields a perfect operator: an exact representation of physical observables on the mesh scale with minimal lattice artifacts. We apply the formalism to simple nonlinear models of critical dynamics, and show how the method leads to an improvement in the computational performance of Monte Carlo methods.
Efficient stochastic sensitivity analysis of discrete event systems
Plyasunov, Sergey . E-mail: teleserg@uclink.berkeley.edu; Arkin, Adam P. . E-mail: aparkin@lbl.gov
2007-02-10
Sensitivity analysis quantifies the dependence of a system's behavior on the parameters that could possibly affect the dynamics. Calculation of sensitivities of stochastic chemical systems using Kinetic Monte Carlo and finite-difference-based methods is computationally intensive, and direct calculation of sensitivities from finite-difference parameter perturbations converges very poorly. In this paper we develop an approach to this issue using a method based on the Girsanov measure transformation for jump processes to smooth the estimate of the sensitivity coefficients and make this estimation more accurate. We demonstrate the method with simple examples and discuss its appropriate use.
Parallelized Stochastic Cutoff Method for Long-Range Interacting Systems
NASA Astrophysics Data System (ADS)
Endo, Eishin; Toga, Yuta; Sasaki, Munetaka
2015-07-01
We present a method of parallelizing the stochastic cutoff (SCO) method, which is a Monte-Carlo method for long-range interacting systems. After interactions are eliminated by the SCO method, we subdivide a lattice into noninteracting interpenetrating sublattices. This subdivision enables us to parallelize the Monte-Carlo calculation in the SCO method. Such subdivision is found by numerically solving the vertex coloring of a graph created by the SCO method. We use an algorithm proposed by Kuhn and Wattenhofer to solve the vertex coloring by parallel computation. This method was applied to a two-dimensional magnetic dipolar system on an L × L square lattice to examine its parallelization efficiency. The result showed that, in the case of L = 2304, the speed of computation increased by a factor of about 10^2 under parallel computation with 288 processors.
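The role of the vertex coloring can be seen in a tiny sequential sketch: vertices assigned the same color share no interaction edge, so all spins of one color class can be updated simultaneously. The greedy scheme below is a simple illustration; the paper itself uses the distributed algorithm of Kuhn and Wattenhofer, and the graph here is an invented toy example.

```python
def greedy_coloring(adj):
    """Greedy vertex coloring of an adjacency-list graph: assign each vertex
    the smallest color not used by its already-colored neighbors. Vertices of
    one color form an independent set and may be updated in parallel."""
    colors = {}
    for v in sorted(adj):
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# 4-cycle interaction graph: two colors suffice, so the lattice splits into
# two noninteracting sublattices that can be swept concurrently.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```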
Stochastic FDTD accuracy improvement through correlation coefficient estimation
NASA Astrophysics Data System (ADS)
Masumnia Bisheh, Khadijeh; Zakeri Gatabi, Bijan; Andargoli, Seyed Mehdi Hosseini
2015-04-01
This paper introduces a new scheme to improve the accuracy of the stochastic finite difference time domain (S-FDTD) method. S-FDTD, reported recently by Smith and Furse, calculates the variations in the electromagnetic fields caused by variability or uncertainty in the electrical properties of the materials in the model. The accuracy of the S-FDTD method is controlled by the approximations for correlation coefficients between the electrical properties of the materials in the model and the fields propagating in them. In this paper, new approximations for these correlation coefficients are obtained using the Monte Carlo method with a small number of runs; these are termed Monte Carlo correlation coefficients (MC-CC). Numerical results for two bioelectromagnetic simulation examples demonstrate that MC-CC can improve the accuracy of the S-FDTD method and yield more accurate results than previous approximations.
Stochastic ontogenetic growth model
NASA Astrophysics Data System (ADS)
West, B. J.; West, D.
2012-02-01
An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second law of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass over the steady-state probability density for the total body mass (TBM). This is the first derivation of the interspecies metabolic allometric relation from a dynamical model, and the asymptotic steady-state distribution of the TBM is fit to data and shown to be an inverse power law.
Stochastic processes in cosmology
NASA Astrophysics Data System (ADS)
Cáceres, Manuel O.; Diaz, Mario C.; Pullin, Jorge A.
1987-08-01
The behavior of a radiation filled de Sitter universe in which the equation of state is perturbed by a stochastic term is studied. The corresponding two-dimensional Fokker-Planck equation is solved. The finiteness of the cosmological constant appears to be a necessary condition for the stability of the model, which undergoes an exponentially expanding state. Present address: Facultad de Matemática Astronomía y Física, Universidad Nacional de Córdoba, Laprida 854, 5000 Córdoba, Argentina.
Stochastic Coupled Cluster Theory
NASA Astrophysics Data System (ADS)
Thom, Alex J. W.
2010-12-01
We describe a stochastic coupled cluster theory which represents excitation amplitudes as discrete excitors in the space of excitation amplitudes. Reexpressing the coupled cluster (CC) equations as the dynamics of excitors in this space, we show that a simple set of rules suffices to evolve a distribution of excitors to sample the CC solution and correctly evaluate the CC energy. These rules are not truncation specific, and this method can calculate CC solutions to an arbitrary level of truncation. We present results of calculations on the neon atom and on the nitrogen and water molecules, showing the ability to recover both truncated and full CC results.
Stochastic thermodynamics of resetting
NASA Astrophysics Data System (ADS)
Fuchs, Jaco; Goldt, Sebastian; Seifert, Udo
2016-03-01
Stochastic dynamics with random resetting leads to a non-equilibrium steady state. Here, we consider the thermodynamics of resetting by deriving the first and second law for resetting processes far from equilibrium. We identify the contributions to the entropy production of the system which arise due to resetting and show that they correspond to the rate with which information is either erased or created. Using Landauer's principle, we derive a bound on the amount of work that is required to maintain a resetting process. We discuss different regimes of resetting, including a Maxwell demon scenario where heat is extracted from a bath at constant temperature.
NASA Astrophysics Data System (ADS)
Hairer, Martin
2006-03-01
We consider a class of parabolic stochastic PDEs driven by white noise in time, and we are interested in showing ergodicity for some cases where the noise is degenerate, i.e., acts only on part of the equation. In some cases where the standard Strong Feller / Irreducibility argument fails, one can nevertheless implement a coupling construction that ensures uniqueness of the invariant measure. We focus on the example of the complex Ginzburg-Landau equation driven by real space-time white noise.
USDA-ARS?s Scientific Manuscript database
We developed a sequential Monte Carlo filter to estimate the states and the parameters in a stochastic model of Japanese Encephalitis (JE) spread in the Philippines. This method is particularly important for its adaptability to the availability of new incidence data. This method can also capture the...
Schilstra, Maria J; Martin, Stephen R
2009-01-01
Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved, because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific to the particular application, and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field, applying such techniques in their own work might seem at first sight a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend not to be particularly well covered in the specialist literature, and shows that interesting information may be obtained using even very simple operations in a conventional spreadsheet.
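The simple principle the article teaches can be illustrated with a minimal Gillespie direct-method simulation of a single irreversible reaction A → B with rate constant k (an illustrative sketch only; the rate value and system are invented, not the article's worked example):

```python
import random

def gillespie_decay(n_a, k, t_end, seed=1):
    """Direct-method SSA for A -> B: draw an exponential waiting time
    from the total propensity, then fire the (only) reaction.
    Returns the trajectory as a list of (time, n_A) pairs."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, n_a)]
    while n_a > 0:
        a0 = k * n_a                 # total propensity
        t += rng.expovariate(a0)     # waiting time to the next event
        if t > t_end:
            break
        n_a -= 1
        traj.append((t, n_a))
    return traj

traj = gillespie_decay(n_a=100, k=0.5, t_end=20.0)
```

With several reactions, one additionally picks which channel fires, with probability proportional to its propensity.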
Stochastic power flow modeling
Not Available
1980-06-01
The stochastic nature of customer demand and equipment failure on large interconnected electric power networks has produced a keen interest in the accurate modeling and analysis of the effects of probabilistic behavior on steady-state power system operation. The principal avenue of approach has been to obtain a solution to the steady-state network flow equations that adheres both to Kirchhoff's laws and to probabilistic laws, using either combinatorial or functional approximation techniques. Clearly, the present need is to develop sound techniques for producing meaningful data to serve as input. This research has addressed that end and serves to bridge the gap between electric demand modeling, equipment failure analysis, etc., and the area of algorithm development. The scope of this work therefore lies squarely on developing an efficient means of producing sensible input information, in the form of probability distributions, for the many types of solution algorithms that have been developed. Two major areas of development are described in detail: a decomposition of stochastic processes which gives hope of stationarity, ergodicity, and perhaps even normality; and a powerful surrogate probability approach using proportions of time which allows the calculation of joint events from one-dimensional probability spaces.
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
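The reformulation in terms of independent standardized Poisson processes can be sketched for a birth-death model, X(t) = X(0) + Y1(λt) − Y2(μ∫X ds), with each reaction channel driven by its own unit-rate Poisson stream. The sketch below uses Anderson's modified next-reaction scheme to realize the random time change; the rate values are invented for illustration:

```python
import random

def birth_death_kurtz(x0, birth, death, t_end, seed=0):
    """Simulate a birth-death chain via the random-time-change
    representation, driving each channel (birth, death) with its own
    independent unit-rate Poisson process. Assumes birth > 0."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    T = [0.0, 0.0]                                  # internal times per channel
    P = [rng.expovariate(1.0) for _ in range(2)]    # next firing thresholds
    while t < t_end:
        rates = [birth, death * x]
        # physical time until each channel's internal clock hits its threshold
        dts = [(P[i] - T[i]) / rates[i] if rates[i] > 0 else float("inf")
               for i in range(2)]
        k = min(range(2), key=lambda i: dts[i])
        if t + dts[k] > t_end:
            break
        t += dts[k]
        T = [T[i] + dts[k] * rates[i] for i in range(2)]
        P[k] += rng.expovariate(1.0)                # channel k's next threshold
        x += 1 if k == 0 else -1
    return x

x_final = birth_death_kurtz(10, 2.0, 0.1, 50.0, seed=1)
```

Because each channel has its own independent noise source, realizations can be decomposed channel by channel, which is what makes the Sobol-Hoeffding variance decomposition applicable.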
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
Stochastic simulation of transport phenomena
Wedgewood, L.E.; Geurts, K.R.
1995-10-01
In this paper, four examples are given to demonstrate how stochastic simulations can be used as a method to obtain numerical solutions to transport problems. The problems considered are two-dimensional heat conduction, mass diffusion with reaction, the start-up of Poiseuille flow, and Couette flow of a suspension of Hookean dumbbells. The first three examples are standard problems with well-known analytic solutions which can be used to verify the results of the stochastic simulation. The fourth example combines a Brownian dynamics simulation for Hookean dumbbells, a crude model of a dilute polymer suspension, with a stochastic simulation for the suspending Newtonian fluid. These examples illustrate appropriate methods for handling source/sink terms and initial and boundary conditions. The stochastic simulation results compare well with the analytic solutions and other numerical solutions. The goal of this paper is to demonstrate the wide applicability of stochastic simulation as a numerical method for transport problems.
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Kalos, M. H.; Pederiva, F.
1998-12-01
We review the fundamental challenge of fermion Monte Carlo for continuous systems, the "sign problem". We seek that eigenfunction of the many-body Schrödinger equation that is antisymmetric under interchange of the coordinates of pairs of particles. We describe methods that depend upon the use of correlated dynamics for pairs of correlated walkers that carry opposite signs. There is an algorithmic symmetry between such walkers that must be broken to create a method that is both exact and as effective as for symmetric functions. In our new method, it is broken by using different "guiding" functions for walkers of opposite signs, and a geometric correlation between steps of their walks. With a specific process of cancellation of the walkers, overlaps with antisymmetric test functions are preserved. Finally, we describe the progress in treating free-fermion systems and a fermion fluid with 14 ³He atoms.
A stochastic model of nanoparticle self-assembly on Cayley trees
NASA Astrophysics Data System (ADS)
Mazilu, I.; Schwen, E. M.; Banks, W. E.; Pope, B. K.; Mazilu, D. A.
2015-01-01
Nanomedicine is an emerging area of medical research that uses innovative nanotechnologies to improve the delivery of therapeutic and diagnostic agents with maximum clinical benefit. We present a versatile stochastic model that can be used to capture the basic features of drug encapsulation of nanoparticles on tree-like synthetic polymers called dendrimers. The geometry of a dendrimer is described mathematically as a Cayley tree. We use our stochastic model to study the dynamics of deposition and release of monomers (simulating the drug molecules) on Cayley trees (simulating dendrimers). We present analytical and Monte Carlo simulation results for the particle density on Cayley trees of coordination number three and four.
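A minimal sketch of such a model: build a Cayley tree of coordination number three and run random sequential adsorption/desorption dynamics for the monomers. The rates below are invented and the moves neighbor-independent, so this illustrates only the geometry and the Monte Carlo setup, not the paper's specific deposition rules:

```python
import random

def cayley_tree(coord, shells):
    """Adjacency list of a Cayley tree with the given coordination number."""
    adj = {0: []}
    frontier = [0]
    for _ in range(shells):
        new = []
        for v in frontier:
            kids = coord if v == 0 else coord - 1   # root has full coordination
            for _ in range(kids):
                u = len(adj)
                adj[u] = [v]
                adj[v].append(u)
                new.append(u)
        frontier = new
    return adj

def monomer_density(adj, p_ads, p_des, steps, seed=0):
    """Random sequential adsorption/desorption of monomers on the tree.
    Neighbor-independent rates (a simplification); returns final density."""
    rng = random.Random(seed)
    occ = {v: False for v in adj}
    sites = list(adj)
    for _ in range(steps):
        v = rng.choice(sites)
        if occ[v]:
            if rng.random() < p_des:    # release a deposited monomer
                occ[v] = False
        elif rng.random() < p_ads:      # deposit on an empty site
            occ[v] = True
    return sum(occ.values()) / len(occ)

tree = cayley_tree(coord=3, shells=4)
density = monomer_density(tree, p_ads=0.2, p_des=0.1, steps=20000)
```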
Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids
Donev, A; Alder, B J; Garcia, A L
2008-02-26
A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to those of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.
Myrskylä, Mikko; Goldstein, Joshua R
2013-02-01
In this article, we show how stochastic diffusion models can be used to forecast demographic cohort processes using the Hernes, Gompertz, and logistic models. Such models have been used deterministically in the past, but both behavioral theory and forecast utility are improved by introducing randomness and uncertainty into the standard differential equations governing population processes. Our approach is to add time-series stochasticity to linearized versions of each process. We derive both Monte Carlo and analytic methods for estimating forecast uncertainty. We apply our methods to several examples of marriage and fertility, extending them to simultaneous forecasting of multiple cohorts and to processes restricted by factors such as declining fecundity.
Stochastic and delayed stochastic models of gene expression and regulation.
Ribeiro, Andre S
2010-01-01
Gene expression and gene regulatory network dynamics are stochastic. The noise in the temporal amounts of proteins and RNA molecules in cells arises from the stochasticity of transcription initiation and elongation (e.g., due to RNA polymerase pausing), translation, and post-transcriptional regulation mechanisms, such as reversible phosphorylation and splicing. This is further enhanced by the fact that most RNA molecules and proteins exist in cells in very small amounts. Recently, the times needed for transcription and translation to be completed once initiated were shown to affect the stochasticity in gene networks. This observation stressed the need either to introduce explicit delays into models of transcription and translation or to model processes such as elongation at the single-nucleotide level. Here we review stochastic and delayed stochastic models of gene expression and gene regulatory networks. We first present stochastic non-delayed and delayed models of transcription, followed by models at the single-nucleotide level. Next, we present models of gene regulatory networks, describe the dynamics of specific stochastic gene networks, and survey available simulators to implement these models. Copyright 2009 Elsevier Inc. All rights reserved.
Stochastic Evaluation of Riparian Vegetation Dynamics in River Channels
NASA Astrophysics Data System (ADS)
Miyamoto, H.; Kimura, R.; Toshimori, N.
2013-12-01
Vegetation overgrowth in sand bars and floodplains has been a serious problem for river management in Japan. From the viewpoints of flood control and ecological conservation, it would be necessary to accurately predict the vegetation dynamics for a long period of time. In this study, we have developed a stochastic model for predicting the dynamics of trees in floodplains with emphasis on the interaction with flood impacts. The model consists of the following four processes in coupling ecohydrology with biogeomorphology: (i) stochastic behavior of flow discharge, (ii) hydrodynamics in a channel with vegetation, (iii) variation of riverbed topography, and (iv) vegetation dynamics on the floodplain. In the model, the flood discharge is stochastically simulated using a Poisson process, one of the conventional approaches in hydrological time-series generation. The model for vegetation dynamics includes the effects of tree growth, mortality by flood impacts, and infant tree invasion. To determine the model parameters, vegetation conditions have been observed mainly before and after flood impacts since 2008 at a field site located 23.2-24.0 km from the river mouth of Kako River, Japan. This site is one of the vegetation overgrowth locations in Kako River floodplains, where the predominant tree species are willows and bamboos. In this presentation, the sensitivity of the vegetation overgrowth tendency is investigated in Kako River channels. Through Monte Carlo simulation for several cross sections in Kako River, responses of the vegetated channels are stochastically evaluated in terms of the changes of discharge magnitude and channel geomorphology. The expectation and standard deviation of the vegetation areal ratio are compared across the different channel cross sections for different river discharges and relative floodplain heights. The result shows that the vegetation status changes sensitively in the channels with larger discharge and insensitively in the lower floodplain
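The Poisson-process discharge generation described above can be sketched as a marked Poisson process: exponential interarrival times between flood events, each carrying a peak discharge drawn from an assumed magnitude distribution (the exponential marks and all parameter values below are illustrative assumptions, not the paper's):

```python
import random

def flood_series(rate_per_year, years, mean_peak, seed=0):
    """Generate flood events as a marked Poisson process.
    Returns a list of (arrival_time_years, peak_discharge) pairs."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_per_year)      # exponential interarrival
        if t > years:
            break
        # assumed exponential magnitude distribution for peak discharge
        events.append((t, rng.expovariate(1.0 / mean_peak)))
    return events

events = flood_series(rate_per_year=2.0, years=50.0, mean_peak=800.0)
```

Such a synthetic series can drive the hydrodynamic and vegetation submodels repeatedly in a Monte Carlo loop.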
Adaptive hybrid simulations for multiscale stochastic reaction networks
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of an SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
Investigation of stochastic radiation transport methods in random heterogeneous mixtures
NASA Astrophysics Data System (ADS)
Reinert, Dustin Ray
Among the most formidable challenges facing our world is the need for safe, clean, affordable energy sources. Growing concerns over global warming induced climate change and the rising costs of fossil fuels threaten conventional means of electricity production and are driving the current nuclear renaissance. One concept at the forefront of international development efforts is the High Temperature Gas-Cooled Reactor (HTGR). With numerous passive safety features and a meltdown-proof design capable of attaining high thermodynamic efficiencies for electricity generation as well as high temperatures useful for the burgeoning hydrogen economy, the HTGR is an extremely promising technology. Unfortunately, the fundamental understanding of neutron behavior within HTGR fuels lags far behind that of more conventional water-cooled reactors. HTGRs utilize a unique heterogeneous fuel element design consisting of thousands of tiny fissile fuel kernels randomly mixed with a non-fissile graphite matrix. Monte Carlo neutron transport simulations of the HTGR fuel element geometry in its full complexity are infeasible and this has motivated the development of more approximate computational techniques. A series of MATLAB codes was written to perform Monte Carlo simulations within HTGR fuel pebbles to establish a comprehensive understanding of the parameters under which the accuracy of the approximate techniques diminishes. This research identified the accuracy of the chord length sampling method to be a function of the matrix scattering optical thickness, the kernel optical thickness, and the kernel packing density. Two new Monte Carlo methods designed to focus the computational effort upon the parameter conditions shown to contribute most strongly to the overall computational error were implemented and evaluated. An extended memory chord length sampling routine that recalls a neutron's prior material traversals was demonstrated to be effective in fixed source calculations containing
Calculating Pi Using the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Williamson, Timothy
2013-11-01
During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the center for sustainable energy at Notre Dame University (RET @ cSEND), working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10^21 antineutrinos per second, with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia, where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations or purely mathematically. It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
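The rice-sprinkling activity has a direct numerical counterpart: sample points uniformly in the unit square and count the fraction landing inside the quarter circle of radius 1, whose area is π/4. A minimal sketch:

```python
import random

def estimate_pi(n, seed=0):
    """Estimate pi by Monte Carlo: the fraction of uniform points in the
    unit square that fall inside the quarter circle approaches pi/4."""
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(n))
    return 4.0 * inside / n

pi_hat = estimate_pi(100000)
```

The standard error shrinks as 1/sqrt(n), so each extra digit of pi costs roughly a hundredfold more samples.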
Stochastic dynamics for reinfection by transmitted diseases
NASA Astrophysics Data System (ADS)
Barros, Alessandro S.; Pinho, Suani T. R.
2017-06-01
The use of stochastic models to study the dynamics of infectious diseases is an important tool to understand the epidemiological process. For several directly transmitted diseases, reinfection is a relevant process, which can be expressed by endogenous reactivation of the pathogen or by exogenous reinfection due to direct contact with an infected individual (with a smaller reinfection rate σβ than the infection rate β). In this paper, we examine the stochastic susceptible, infected, recovered, infected (SIRI) model, simulating endogenous reactivation by a spontaneous reaction and exogenous reinfection by a catalytic reaction. Analyzing the mean-field approximations of a site and of pairs of sites, and Monte Carlo (MC) simulations for the particular case of exogenous reinfection, we obtained continuous phase transitions involving endemic, epidemic, and no-transmission phases for the simple approach; the pair approximation better describes the phase transition from the endemic phase (susceptible, infected, susceptible (SIS)-like model) to the epidemic phase (susceptible, infected, and removed or recovered (SIR)-like model), judging by comparison with the MC results; reinfection increases the peaks of outbreaks until the system reaches the endemic phase. For the particular case of endogenous reactivation, the pair approximation leads to a continuous phase transition from the endemic phase (SIS-like model) to the no-transmission phase. Finally, there is no phase transition when both effects are taken into account. We hope the results of this study can be generalized to the susceptible, exposed, infected, and removed or recovered (SEIRIE) model, in which the exposed state (infected but not infectious) describes more realistically transmitted diseases such as tuberculosis. In future work, we also intend to investigate the effect of network topology on phase transitions when the SIRI model describes both transmitted diseases (σ < 1) and social contagions (σ > 1).
Stochastization in gravitating systems
NASA Astrophysics Data System (ADS)
Ovod, D. V.; Ossipkov, L. P.
2013-10-01
We discuss the effective stochastization time τ_e for gravitating systems in terms of the Krylov and Gurzadyan-Savvidi paradigm. The truncated Holtsmark distribution for a random force proposed by Rastorguev and Sementsov implies τ_e/τ_c ∝ N^{0.20}, where τ_c is the crossing time. In the case of the Petrovskaya distribution for a random force we find τ_e/τ_c ∝ N^k, where k = 0.27-0.31, depending on the oblateness and rotation of the system, and τ_e/τ_c ∝ N^{1/3}/(ln N)^{1/2} when N ≫ 1. The latter result agrees with those of Genkin (1969) and Gurzadyan & Kocharyan (2009) (k = 1/3). Dedicated to Igor L'vovich Genkin (1931-2011)
Bunched beam stochastic cooling
Wei, Jie.
1992-01-01
The scaling laws for bunched-beam stochastic cooling have been derived in terms of the optimum cooling rate and the mixing condition. In the case that particles occupy the entire sinusoidal rf bucket, the optimum cooling rate of the bunched beam is shown to be similar to that predicted from the coasting-beam theory using a beam of the same average density and mixing factor. However, in the case that particles occupy only the center of the bucket, the optimum rate decreases in proportion to the ratio of the bunch area to the bucket area. The cooling efficiency can be significantly improved if the synchrotron side-band spectrum is effectively broadened, e.g., by the transverse tune spread or by using a double rf system.
Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods
Carter, L L; Lan, J S; Schwarz, R A
1991-01-01
This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice-within-lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a "brute force" description of the hexagonal assemblies are also given.
Novel Quantum Monte Carlo Approaches for Quantum Liquids
NASA Astrophysics Data System (ADS)
Rubenstein, Brenda M.
Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
Noise-induced instability in self-consistent Monte Carlo calculations
Lemons, D.S.; Lackman, J.; Jones, M.E.; Winske, D.
1995-12-01
We identify, analyze, and propose remedies for a numerical instability responsible for the growth or decay of sums that should be conserved in Monte Carlo simulations of stochastically interacting particles. "Noisy" sums with fluctuations proportional to 1/√n, where n is the number of particles in the simulation, provide feedback that drives the instability. Numerical illustrations of an energy loss or "cooling" instability in an Ornstein-Uhlenbeck process support our analysis. (c) 1995 The American Physical Society
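The 1/√n scaling of the noisy sums can be checked directly (an illustrative sketch unrelated to the original simulation code; sample counts and names are arbitrary choices):

```python
import math
import random

def mean_fluctuation(n, trials=2000, seed=0):
    """Standard deviation of the mean of n unit-variance samples;
    it should scale like 1/sqrt(n)."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(0, 1) for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

f100 = mean_fluctuation(100)
f400 = mean_fluctuation(400)
ratio = f100 / f400      # expect roughly sqrt(400 / 100) = 2
```

Quadrupling the particle count should halve the fluctuation of the conserved sum, which is why simulations with few particles are the ones most prone to this feedback-driven instability.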
A continuation multilevel Monte Carlo algorithm
Collier, Nathan; Haji-Ali, Abdul-Lateef; Nobile, Fabio; von Schwerin, Erik; Tempone, Raúl
2014-09-05
Here, we propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. Moreover, the actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Our numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients.
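The telescoping-sum idea underlying MLMC (and thus CMLMC) can be sketched with a toy quantity whose level-l "discretization" carries a known bias (illustrative only; the function name and parameters are assumptions, and the coupled corrections here happen to have zero variance, the ideal case that level coupling strives toward):

```python
import random

def mlmc_estimate(levels=4, m0=400, seed=0):
    """Toy multilevel Monte Carlo: estimate E[X_L] for the level-l
    'discretization' X_l = X + 2^-l (a known bias standing in for a
    discretization error), via the telescoping sum
        E[X_L] = E[X_0] + sum_{l=1}^{L} E[X_l - X_{l-1}],
    with geometrically fewer samples on the finer, costlier levels."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels):
        m = max(m0 // 2 ** l, 10)            # fewer samples at deeper levels
        total = 0.0
        for _ in range(m):
            x = rng.gauss(1.0, 1.0)          # underlying random quantity, mean 1
            if l == 0:
                total += x + 1.0             # X_0 = X + 2^0
            else:
                # coupled correction: the SAME x enters both levels, so the
                # difference is cheap to estimate (here its variance is zero)
                total += (x + 2.0 ** (-l)) - (x + 2.0 ** (-(l - 1)))
        est += total / m
    return est

est = mlmc_estimate()   # should be close to E[X] + 2^-3 = 1.125
```

CMLMC wraps such an estimator in a continuation over decreasing tolerances, calibrating the per-level cost, variance, and weak-error models along the way.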
Stochastic study of solute transport in a nonstationary medium.
Hu, Bill X
2006-01-01
A Lagrangian stochastic approach is applied to develop a method of moments for solute transport in a physically and chemically nonstationary medium. Stochastic governing equations for the mean solute flux and solute covariance are obtained analytically to first-order accuracy in the log conductivity and/or chemical sorption variances and solved numerically using the finite-difference method. The developed method, the numerical method of moments (NMM), is used to predict radionuclide solute transport processes in the saturated zone below the Yucca Mountain project area. The mean, variance, and upper bound of the radionuclide mass flux through a control plane 5 km downstream of the footprint of the repository are calculated. According to their chemical sorption capacities, the various radionuclides are grouped as nonreactive, weakly sorbing, and strongly sorbing chemicals. The NMM is used to study their transport processes and influencing factors. To verify the method of moments, a Monte Carlo simulation is conducted for nonreactive chemical transport. The results from the two methods are consistent, but the NMM is computationally more efficient than the Monte Carlo method. This study adds to the ongoing debate in the literature on the effect of heterogeneity on solute transport prediction, especially on prediction uncertainty, by showing that the standard deviation of the solute flux is larger than the mean solute flux even when the heterogeneity of the hydraulic conductivity within each geological layer is mild. This study provides a method that may become an efficient calculation tool for many environmental projects.
Stochastic reinforcement benefits skill acquisition.
Dayan, Eran; Averbeck, Bruno B; Richmond, Barry J; Cohen, Leonardo G
2014-02-14
Learning complex skills is driven by reinforcement, which facilitates both online within-session gains and retention of the acquired skills. Yet, in ecologically relevant situations, skills are often acquired when mapping between actions and rewarding outcomes is unknown to the learning agent, resulting in reinforcement schedules of a stochastic nature. Here we trained subjects on a visuomotor learning task, comparing reinforcement schedules with higher, lower, or no stochasticity. Training under higher levels of stochastic reinforcement benefited skill acquisition, enhancing both online gains and long-term retention. These findings indicate that the enhancing effects of reinforcement on skill acquisition depend on reinforcement schedules.
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo; Moroni, Saverio
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
Quantum Monte Carlo for Molecules.
1984-11-01
Report AD-A148 159: Quantum Monte Carlo for Molecules, William A. Lester, Jr. and Peter J. Reynolds, Lawrence Berkeley Laboratory, University of California, Berkeley. Distribution unlimited. (Only the OCR-damaged report documentation page survives; keywords include quantum Monte Carlo and importance sampling.)
Stochastic Physicochemical Dynamics
NASA Astrophysics Data System (ADS)
Tsekov, R.
2001-02-01
Thermodynamic Relaxation in Quantum Systems: A new approach to quantum Markov processes is developed and the corresponding Fokker-Planck equation is derived. The latter is shown to reproduce known results from classical and quantum physics. It is also applied to the phase-space description of a mechanical system, leading to a treatment of this problem different from the Wigner presentation. The equilibrium probability density obtained in the mixed coordinate-momentum space is a reasonable extension of the Gibbs canonical distribution. The validity of the Einstein fluctuation-dissipation relation is discussed with respect to the type of relaxation in an isothermal system. The first model, presuming isothermal fluctuations, leads to the Einstein formula. The second model supposes adiabatic fluctuations and yields another relation between the diffusion coefficient and mobility of a Brownian particle. A new approach to relaxation in quantum systems is also proposed, demonstrating that only the adiabatic model is applicable to the description of quantum Brownian dynamics. Stochastic Dynamics of Gas Molecules: A stochastic Langevin equation is derived, describing the thermal motion of a molecule immersed in a fluid of identical molecules at rest. The fluctuation-dissipation theorem is proved and a number of correlation characteristics of the molecular Brownian motion are obtained. A short review of the classical theory of Brownian motion is presented. A new method is proposed for deriving the Fokker-Planck equations, which describe the probability density evolution, from stochastic differential equations. It is also proven via the central limit theorem that the white noise is only Gaussian. The applicability of stochastic differential equations to thermodynamics is considered and a new form, different from the classical Ito and Stratonovich forms, is introduced. It is shown that the new presentation is more appropriate for the description of thermodynamic
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function.
NASA Astrophysics Data System (ADS)
Fasnacht, Marc
We develop adaptive Monte Carlo methods for the calculation of the free energy as a function of a parameter of interest. The methods presented are particularly well-suited for systems with complex energy landscapes, where standard sampling techniques have difficulties. The Adaptive Histogram Method uses a biasing potential derived from histograms recorded during the simulation to achieve uniform sampling in the parameter of interest. The Adaptive Integration method directly calculates an estimate of the free energy from the average derivative of the Hamiltonian with respect to the parameter of interest and uses it as a biasing potential. We compare both methods to a state of the art method, and demonstrate that they compare favorably for the calculation of potentials of mean force of dense Lennard-Jones fluids. We use the Adaptive Integration Method to calculate accurate potentials of mean force for different types of simple particles in a Lennard-Jones fluid. Our approach allows us to separate the contributions of the solvent to the potential of mean force from the effect of the direct interaction between the particles. With contributions of the solvent determined, we can find the potential of mean force directly for any other direct interaction without additional simulations. We also test the accuracy of the Adaptive Integration Method on a thermodynamic cycle, which allows us to perform a consistency check between potentials of mean force and chemical potentials calculated using the Adaptive Integration Method. The results demonstrate a high degree of consistency of the method.
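The adaptive-histogram idea, a biasing potential built up from the visit histogram until sampling in the parameter of interest becomes roughly uniform, can be sketched in a toy 1D version (illustrative only, not the thesis implementation; the energy landscape, bin count, and modification factor f are assumptions):

```python
import math
import random

def adaptive_histogram_bias(n_bins=10, sweeps=20000, f=0.01, seed=0):
    """Random walker over discrete 'parameter' bins with energy E(b).
    Each visit adds f to the bias V(b), penalizing revisits so the
    walker eventually samples all bins roughly uniformly."""
    rng = random.Random(seed)
    energy = [0.5 * (b - 4.5) ** 2 for b in range(n_bins)]   # toy landscape
    bias = [0.0] * n_bins
    hist = [0] * n_bins
    b = 0
    for _ in range(sweeps):
        b_new = max(0, min(n_bins - 1, b + rng.choice((-1, 1))))
        # Metropolis test on the biased energy E(b) + V(b)
        d = (energy[b_new] + bias[b_new]) - (energy[b] + bias[b])
        if d <= 0 or rng.random() < math.exp(-d):
            b = b_new
        bias[b] += f          # accumulate the biasing potential
        hist[b] += 1
    return hist

hist = adaptive_histogram_bias()
```

The accumulated bias approaches the negative of the free-energy profile over the bins, which is the quantity the thesis's adaptive methods estimate.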
Essays on the Bayesian estimation of stochastic cost frontier
NASA Astrophysics Data System (ADS)
Zhao, Xia
This dissertation consists of three essays that focus on a Bayesian estimation of stochastic cost frontiers for electric generation plants. This research gives insight into the changing development of the electric generation market and could serve to inform both private investment and public policy decisions. The main contributions to the growing literature on stochastic cost frontier analysis are to (1) empirically estimate the possible efficiency gain of power plants due to deregulation; (2) estimate the cost of electric power generating plants using coal as a fuel, taking into account both regularity restrictions and sulfur dioxide emissions; (3) compare the costs of plants using coal to those of plants using natural gas; (4) apply the Bayesian stochastic frontier model to estimate a single cost frontier while allowing firm type to vary across regulated and deregulated plants, estimating the average group efficiency for the two types of plants; and (5) use fixed effects and random effects models on an unbalanced panel to estimate group efficiency for regulated and deregulated plants. The first essay focuses on the possible efficiency gain of 136 U.S. coal-fired electric power generation plants in 1996. Results favor the constrained model over the unconstrained model. SO2 is also included in the model to provide more accurate estimates of plant efficiency and returns to scale. The second essay compares the predicted costs and returns to scale of coal generation with those of natural gas generation at plants where the cost of both fuels could be obtained. It is found that, for power plants switching fuel from natural gas to coal in 1996, on average, the expected fuel cost would fall and returns to scale would increase. The third essay uses pooled unbalanced panel data to analyze the differences in plant efficiency across plant types, regulated and deregulated. The application of a Bayesian stochastic frontier model enables us to apply different mean plant inefficiency terms by
Technical notes and correspondence: Stochastic robustness of linear time-invariant control systems
NASA Technical Reports Server (NTRS)
Stengel, Robert F.; Ray, Laura R.
1991-01-01
A simple numerical procedure for estimating the stochastic robustness of a linear time-invariant system is described. Monte Carlo evaluations of the system's eigenvalues allow the probability of instability and the related stochastic root locus to be estimated. This analysis approach treats not only Gaussian parameter uncertainties but also non-Gaussian cases, including uncertain-but-bounded variation. Confidence intervals for the scalar probability of instability address computational issues inherent in Monte Carlo simulation. Trivial extensions of the procedure admit consideration of alternate discriminants; thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions can also be estimated. Results are particularly amenable to graphical presentation.
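A minimal version of the procedure can be sketched for an uncertain second-order system (illustrative assumptions throughout: the parameter distributions, the 2x2 Hurwitz test via trace and determinant, and the normal-approximation confidence interval):

```python
import random

def probability_of_instability(trials=5000, seed=0):
    """Monte Carlo stochastic-robustness estimate for the uncertain
    second-order system x'' + c x' + k x = 0, i.e. A = [[0, 1], [-k, -c]].
    A 2x2 continuous-time system is stable (Hurwitz) iff trace(A) < 0
    and det(A) > 0; here trace(A) = -c and det(A) = k."""
    rng = random.Random(seed)
    unstable = 0
    for _ in range(trials):
        k = rng.uniform(0.5, 1.5)      # uncertain stiffness (always positive)
        c = rng.gauss(0.2, 0.2)        # uncertain damping, may go negative
        if not (-c < 0 and k > 0):
            unstable += 1
    p = unstable / trials
    # normal-approximation 95% confidence half-width for the estimate
    half_width = 1.96 * (p * (1 - p) / trials) ** 0.5
    return p, half_width

p, hw = probability_of_instability()
```

Here instability occurs exactly when the damping sample is non-positive, so the true probability is Φ(-1) ≈ 0.159; the confidence half-width quantifies the Monte Carlo sampling error, as in the abstract.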
Zhang, Zhongqiang; Yang, Xiu; Lin, Guang; Karniadakis, George Em
2013-03-01
We consider a piston with a velocity perturbed by Brownian motion moving into a straight tube filled with a perfect gas at rest. The shock generated ahead of the piston can be located by solving the one-dimensional Euler equations driven by white noise using the Stratonovich or Ito formulations. We approximate the Brownian motion with its spectral truncation and subsequently apply stochastic collocation using either sparse grid or the quasi-Monte Carlo (QMC) method. In particular, we first transform the Euler equations with an unsteady stochastic boundary into stochastic Euler equations over a fixed domain with a time-dependent stochastic source term. We then solve the transformed equations by splitting them up into two parts, i.e., a ‘deterministic part’ and a ‘stochastic part’. Numerical results verify the Stratonovich–Euler and Ito–Euler models against stochastic perturbation results, and demonstrate the efficiency of sparse grid and QMC for small and large random piston motions, respectively. The variance of shock location of the piston grows cubically in the case of white noise in contrast to colored noise reported in [1], where the variance of shock location grows quadratically with time for short times and linearly for longer times.
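The spectral truncation of Brownian motion used above can be sketched with the standard Karhunen-Loeve sine expansion on [0, T] (illustrative; the truncation order and evaluation grid are arbitrary choices, not the authors' settings):

```python
import math
import random

def kl_brownian(n_terms=50, n_pts=101, t_end=1.0, seed=0):
    """Karhunen-Loeve (spectral) truncation of Brownian motion on [0, T]:
        W(t) ~ sum_k xi_k * sqrt(2T) * sin((k - 1/2) pi t / T) / ((k - 1/2) pi)
    with i.i.d. standard normal coefficients xi_k.  Truncating the sum gives
    a finite-dimensional noise suitable for stochastic collocation."""
    rng = random.Random(seed)
    xi = [rng.gauss(0, 1) for _ in range(n_terms)]
    ts = [t_end * i / (n_pts - 1) for i in range(n_pts)]
    path = []
    for t in ts:
        w = sum(x * math.sqrt(2 * t_end)
                * math.sin((k + 0.5) * math.pi * t / t_end)
                / ((k + 0.5) * math.pi)
                for k, x in enumerate(xi))
        path.append(w)
    return ts, path

ts, path = kl_brownian()
```

Sparse-grid or quasi-Monte Carlo collocation then treats the truncated coefficients xi_k as the random inputs of the (transformed) Euler equations.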
Stochastic dynamic causal modelling of FMRI data with multiple-model Kalman filters.
Osório, P; Rosa, P; Silvestre, C; Figueiredo, P
2015-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Neural Signals and Images". Dynamic Causal Modelling (DCM) is a generic formalism to study effective brain connectivity based on neuroimaging data, particularly functional Magnetic Resonance Imaging (fMRI). Recently, there have been attempts at modifying this model to allow for stochastic disturbances in the states of the model. This paper proposes the Multiple-Model Kalman Filtering (MMKF) technique as a stochastic identification model discriminating among different hypothetical connectivity structures in the DCM framework; moreover, the performance compared to a similar deterministic identification model is assessed. The integration of the stochastic DCM equations is first presented, and a MMKF algorithm is then developed to perform model selection based on these equations. Monte Carlo simulations are performed in order to investigate the ability of MMKF to distinguish between different connectivity structures and to estimate hidden states under both deterministic and stochastic DCM. The simulations show that the proposed MMKF algorithm was able to successfully select the correct connectivity model structure from a set of pre-specified plausible alternatives. Moreover, the stochastic approach by MMKF was more effective compared to its deterministic counterpart, both in the selection of the correct connectivity structure and in the estimation of the hidden states. These results demonstrate the applicability of a MMKF approach to the study of effective brain connectivity using DCM, particularly when a stochastic formulation is desirable.
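The MMKF idea, a bank of Kalman filters whose posterior model probabilities are updated from innovation likelihoods, can be sketched for scalar candidate dynamics (a toy stand-in for the DCM equations; all models, noise levels, and names are assumptions):

```python
import math
import random

def mmkf_select(a_true=0.95, candidates=(0.95, 0.2), q=0.01, r=0.01,
                steps=200, seed=0):
    """One scalar Kalman filter per candidate model
        x_k = a x_{k-1} + w,  w ~ N(0, q);   y_k = x_k + v,  v ~ N(0, r).
    Posterior model probabilities are updated multiplicatively from the
    Gaussian likelihood of each filter's innovation, then renormalized."""
    rng = random.Random(seed)
    x, ys = 1.0, []
    for _ in range(steps):                       # simulate the true system
        x = a_true * x + rng.gauss(0, math.sqrt(q))
        ys.append(x + rng.gauss(0, math.sqrt(r)))
    n = len(candidates)
    ests, covs, probs = [1.0] * n, [1.0] * n, [1.0 / n] * n
    for y in ys:
        liks = []
        for j, a in enumerate(candidates):
            xp, pp = a * ests[j], a * a * covs[j] + q     # predict
            s = pp + r                                    # innovation variance
            innov = y - xp
            gain = pp / s
            ests[j] = xp + gain * innov                   # update
            covs[j] = (1.0 - gain) * pp
            liks.append(math.exp(-0.5 * innov ** 2 / s)
                        / math.sqrt(2 * math.pi * s))
        probs = [p * l for p, l in zip(probs, liks)]
        total = sum(probs)
        probs = [p / total for p in probs]
    return probs

probs = mmkf_select()   # probs[0] corresponds to the correct model
```

Model selection then amounts to reading off which filter's posterior probability dominates, mirroring how MMKF discriminates among hypothetical connectivity structures.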
Nonlinear Stochastic PDEs: Analysis and Approximations
2016-05-23
3.4.1 Nonlinear Stochastic PDEs: Analysis and Approximations. We compare Wiener chaos and stochastic collocation methods for linear advection-reaction equations. Subject terms: nonlinear stochastic PDEs (SPDEs), nonlocal SPDEs, Navier-Stokes equations. (Sponsoring agency: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211.)
Idaho National Laboratory - Steve Herring, Jim O'Brien, Carl Stoots
2008-03-26
Two global energy priorities today are finding environmentally friendly alternatives to fossil fuels and reducing greenhouse gas emissions.
NASA Astrophysics Data System (ADS)
1984-12-01
The US Department of Energy (DOE), Office of Fossil Energy, has supported and managed a fuel cell research and development (R and D) program since 1976. Responsibility for implementing DOE's fuel cell program, which includes activities related to both fuel cells and fuel cell systems, has been assigned to the Morgantown Energy Technology Center (METC) in Morgantown, West Virginia. The total United States effort of the private and public sectors in developing fuel cell technology is referred to as the National Fuel Cell Program (NFCP). The goal of the NFCP is to develop fuel cell power plants for base-load and dispersed electric utility systems, industrial cogeneration, and on-site applications. To achieve this goal, the fuel cell developers, electric and gas utilities, research institutes, and Government agencies are working together. Four organized groups are coordinating the diversified activities of the NFCP. The status of the overall program is reviewed in detail.
Statistical validation of stochastic models
Hunter, N.F.; Barney, P.; Paez, T.L.; Ferregut, C.; Perez, L.
1996-12-31
It is common practice in structural dynamics to develop mathematical models for system behavior, and the authors are now capable of developing stochastic models, i.e., models whose parameters are random variables. Such models have random characteristics that are meant to simulate the randomness in characteristics of experimentally observed systems. This paper suggests a formal statistical procedure for the validation of mathematical models of stochastic systems when data taken during operation of the stochastic system are available. The statistical characteristics of the experimental system are obtained using the bootstrap, a technique for the statistical analysis of non-Gaussian data. The authors propose a procedure to determine whether or not a mathematical model is an acceptable model of a stochastic system with regard to user-specified measures of system behavior. A numerical example is presented to demonstrate the application of the technique.
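The bootstrap step can be sketched as follows (illustrative; the statistic, synthetic data, and confidence level are assumptions, and a real validation would compare the stochastic model's predicted statistic against the interval estimated from operational data):

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap: resample the data with replacement,
    recompute the statistic, and take percentile bounds.  Works for
    non-Gaussian data, as needed in the validation procedure."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        reps.append(stat(resample))
    reps.sort()
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def sample_mean(xs):
    return sum(xs) / len(xs)

rng = random.Random(1)
observed = [rng.gauss(5.0, 1.0) for _ in range(200)]   # stand-in for test data
lo, hi = bootstrap_ci(observed, sample_mean)
```

A model would then be judged acceptable, with respect to this measure of behavior, if its predicted mean falls inside (lo, hi).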
Isotropic Monte Carlo Grain Growth
Mason, J.
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
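The isotropic case can be sketched as a T = 0 Potts-model simulation, here on a square (rather than hexagonal) lattice for brevity (illustrative only, not the IMCGG code; lattice size, grain count, and sweep count are assumptions):

```python
import random

def potts_grain_growth(size=32, n_grains=10, sweeps=30, seed=0):
    """Isotropic Monte Carlo (Potts-model) grain growth on a periodic
    square lattice: a site adopts a neighbor's grain label whenever that
    lowers, or leaves unchanged, its count of unlike-neighbor bonds."""
    rng = random.Random(seed)
    grid = [[rng.randrange(n_grains) for _ in range(size)] for _ in range(size)]
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def unlike(i, j, s):
        # number of nearest neighbors of site (i, j) with a label other than s
        return sum(s != grid[(i + di) % size][(j + dj) % size] for di, dj in nbrs)

    for _ in range(sweeps * size * size):
        i, j = rng.randrange(size), rng.randrange(size)
        di, dj = rng.choice(nbrs)
        s_new = grid[(i + di) % size][(j + dj) % size]
        if unlike(i, j, s_new) <= unlike(i, j, grid[i][j]):
            grid[i][j] = s_new
    return grid

grid = potts_grain_growth()
size = len(grid)
n_labels = len({s for row in grid for s in row})
boundary = (sum(grid[i][j] != grid[(i + 1) % size][j]
                for i in range(size) for j in range(size))
            + sum(grid[i][j] != grid[i][(j + 1) % size]
                  for i in range(size) for j in range(size)))
```

Since each accepted move cannot increase the total boundary length, the microstructure coarsens: the boundary count drops well below its random-initialization value (about 0.9 of all bonds).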
Adaptive and Optimal Control of Stochastic Dynamical Systems
2015-09-14
Explicit results have been obtained for problems of stochastic control and stochastic differential games. Stochastic linear-quadratic, continuous-time stochastic control problems are solved for systems with noise, as are control problems for systems with arbitrarily correlated noise. Subject terms: adaptive control, optimal control, stochastic differential games.
2005-10-04
Briefing on fuel management during combat operations, focused on tactical ground mobility and increasing operational reach (not aircraft, ships, or troops). Topics identified, reviewed, and assessed include technologies for reducing fuel consumption, such as hybrid electric vehicles, and energy fundamentals: energy density, tactical mobility, petroleum use, and tactical wheeled vehicle (TWV) fuel usage and operational tempo.
2009-06-11
Back-up briefing slides on JP-8 and alternative jet fuels: first- and second-generation biofuels (cellulose; triglycerides, i.e., fats and oils, including agricultural crop oils); declining crude oil discovery and production; the DARPA alternative jet fuels effort; approved fuels based on commercial Jet A/A-1 (ASTM specification) for JP-8/JP-5; and use of equipment when supplying jet fuel is not practicable or cost effective.
The Stochastic Gradient Approximation: An application to lithium nanoclusters
NASA Astrophysics Data System (ADS)
Nissenbaum, Daniel
The Stochastic Gradient Approximation (SGA) is the natural extension of Quantum Monte Carlo (QMC) methods to the variational optimization of quantum wave function parameters. While many deterministic applications impose stochasticity, the SGA fruitfully takes advantage of the natural stochasticity already present in QMC in order to utilize a small number of QMC samples and approach the minimum more quickly by averaging out the random noise in the samples. The increasing efficiency of the method for systems with larger numbers of particles, and its nearly ideal scaling when running on parallelized processors, is evidence that the SGA is well suited for the study of nanoclusters. In this thesis, I discuss the SGA algorithm in detail. I also describe its application to both quantum dots, and to the Resonating Valence Bond wave function (RVB). The RVB is a sophisticated model of electronic systems that captures electronic correlation effects directly and that improves the nodal structure of quantum wave functions. The RVB is receiving renewed attention in the study of nanoclusters due to the fact that calculations of RVB wave functions have become feasible with recent advances in computer hardware and software.
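The core SGA loop, descent along noisy gradient estimates with a decaying step size so that the sampling noise averages out, can be sketched on a toy one-parameter "energy" (illustrative; the gradient model and step schedule are assumptions, not the thesis implementation):

```python
import random

def sga_minimize(grad_sample, x0, steps=400, a=0.5, seed=0):
    """Stochastic Gradient Approximation sketch: iterate
        x_{k+1} = x_k - a_k * g_k,  a_k = a / (k + 1),
    where g_k is a NOISY estimate of dE/dx (in QMC, from a small
    number of samples).  The decaying schedule averages out the noise."""
    rng = random.Random(seed)
    x = x0
    for k in range(steps):
        g = grad_sample(x, rng)
        x -= (a / (k + 1)) * g
    return x

# Toy 'energy' E(x) = (x - 2)^2 with noisy gradient 2(x - 2) + noise,
# standing in for a gradient estimated from a handful of QMC samples.
def noisy_grad(x, rng):
    return 2.0 * (x - 2.0) + rng.gauss(0, 0.5)

x_min = sga_minimize(noisy_grad, x0=0.0)
```

Despite per-step gradient noise comparable to the signal, the iterate converges close to the minimum at x = 2, which is why only a small number of QMC samples per step suffices.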
Fluorescence Correlation Spectroscopy and Nonlinear Stochastic Reaction-Diffusion
Del Razo, Mauricio; Pan, Wenxiao; Qian, Hong; Lin, Guang
2014-05-30
The existing theory of fluorescence correlation spectroscopy (FCS) is based on the linear fluctuation theory originally developed by Einstein, Onsager, Lax, and others as a phenomenological approach to equilibrium fluctuations in bulk solutions. For mesoscopic reaction-diffusion systems with nonlinear chemical reactions among a small number of molecules, a situation often encountered in single-cell biochemistry, it is expected that the FCS time correlation functions of a reaction-diffusion system can deviate from the classic results of Elson and Magde [Biopolymers (1974) 13:1-27]. We first discuss this nonlinear effect for reaction systems without diffusion. For nonlinear stochastic reaction-diffusion systems there are no closed-form solutions; therefore, stochastic Monte Carlo simulations are carried out. We show that the deviation is small for a simple bimolecular reaction; the most significant deviations occur when the numbers of molecules are small and of the same order. Extending the Delbrück-Gillespie theory for stochastic nonlinear reactions with rapid stirring to reaction-diffusion systems provides a mesoscopic model for chemical and biochemical reactions at nanometric and mesoscopic levels, such as in a single biological cell.
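The stochastic simulations for a simple bimolecular reaction can be sketched with a Gillespie-type algorithm (illustrative; the rate constant and copy numbers are assumed, and diffusion is omitted as in the well-stirred case discussed first):

```python
import random

def gillespie_bimolecular(a0=20, b0=20, k=0.05, t_end=50.0, seed=0):
    """Gillespie stochastic simulation of the irreversible bimolecular
    reaction A + B -> C with a small number of molecules, the regime
    where deviations from the classic (linear) FCS theory are expected.
    Returns the (time, C-count) trajectory."""
    rng = random.Random(seed)
    a, b, c, t = a0, b0, 0, 0.0
    trajectory = [(t, c)]
    while a > 0 and b > 0:
        dt = rng.expovariate(k * a * b)   # exponential waiting time
        if t + dt > t_end:
            break
        t += dt
        a, b, c = a - 1, b - 1, c + 1
        trajectory.append((t, c))
    return trajectory

traj = gillespie_bimolecular()
```

Averaging correlation functions over many such trajectories is how the nonlinear deviations from the Elson-Magde results can be quantified numerically.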
A stochastic model for solute transport in macroporous soils
Bruggeman, A.C.; Mostaghimi, S.; Brannan, K.M.
1999-12-01
A stochastic, physically based, finite element model for simulating flow and solute transport in soils with macropores (MICMAC) was developed. The MICMAC model simulates preferential movement of water and solutes using a cylindrical macropore located in the center of a soil column. MICMAC uses Monte Carlo simulation to represent the stochastic processes inherent to the soil-water system. The model simulates a field as a collection of non-interacting soil columns. The random soil properties are assumed to be stationary in the horizontal direction, and ergodic over the field. A routine for the generation of correlated, non-normal random variates was developed for MICMAC's stochastic component. The model was applied to fields located in Nomini Creek Watershed, Virginia. Extensive field data were collected in fields that use either conventional or no-tillage for the evaluation of the MICMAC model. The field application suggested that the model underestimated the fast leaching of water and solutes from the root zone. However, the computed results were substantially better than the results obtained when no preferential flow component was included in the model.
A stochastic transcriptional switch model for single cell imaging data.
Hey, Kirsty L; Momiji, Hiroshi; Featherstone, Karen; Davis, Julian R E; White, Michael R H; Rand, David A; Finkenstädt, Bärbel
2015-10-01
Gene expression is made up of inherently stochastic processes within single cells and can be modeled through stochastic reaction networks (SRNs). In particular, SRNs capture the features of intrinsic variability arising from intracellular biochemical processes. We extend current models for gene expression to allow the transcriptional process within an SRN to follow a random step or switch function which may be estimated using reversible jump Markov chain Monte Carlo (MCMC). This stochastic switch model provides a generic framework to capture many different dynamic features observed in single cell gene expression. Inference for such SRNs is challenging due to the intractability of the transition densities. We derive a model-specific birth-death approximation and study its use for inference in comparison with the linear noise approximation where both approximations are considered within the unifying framework of state-space models. The methodology is applied to synthetic as well as experimental single cell imaging data measuring expression of the human prolactin gene in pituitary cells.
Zhang, Fan; Gao, Yan; Luo, Yazhi; Chen, Zhangyuan; Xu, Anshi
2010-04-26
We propose a stochastic bit error ratio estimation approach based on a statistical analysis of the retrieved signal phase for coherent optical QPSK systems with digital carrier phase recovery. A family of generalized exponential functions is applied to fit the probability density function of the signal samples. The method provides reasonable performance estimation in the presence of both linear and nonlinear transmission impairments while greatly reducing the computational burden compared to Monte Carlo simulation.
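The computational advantage of fitting a parametric density and integrating its tail, versus brute-force Monte Carlo error counting, can be shown with a toy decision variable. Here a plain Gaussian stands in for the paper's generalized exponential family, and the signal parameters are invented for illustration:

```python
import math
import random
import statistics

rng = random.Random(0)

# Toy received decision variable: mean 1.0, Gaussian-ish noise, threshold at 0.
samples = [1.0 + rng.gauss(0.0, 0.25) for _ in range(5000)]

# Parametric route: fit a density to the samples, then integrate its tail
# below the decision threshold (here a Gaussian fit and the erfc tail).
mu = statistics.fmean(samples)
sigma = statistics.stdev(samples)
threshold = 0.0
ber_fit = 0.5 * math.erfc((mu - threshold) / (sigma * math.sqrt(2.0)))

# Brute-force Monte Carlo route: count threshold crossings directly.
ber_mc = sum(x < threshold for x in samples) / len(samples)

print(ber_fit, ber_mc)
```

With a true error rate around 3e-5, the 5000-sample Monte Carlo count typically sees zero errors, while the fitted tail gives a usable estimate from the same data; this is the intensity reduction the abstract refers to.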
Network Analysis with Stochastic Grammars
2015-09-17
a variety of ways on a lower level. For a grammar, each phase is essentially a Task and a network attack is, at the highest level, a five Task... [Dissertation by Alan C. Lin, Maj, USAF; AFIT-ENG-DS-15-S-014, Department of the Air Force, Air Force Institute of Technology.]
Stochastic roots of growth phenomena
NASA Astrophysics Data System (ADS)
De Lauro, E.; De Martino, S.; De Siena, S.; Giorno, V.
2014-05-01
We show that the Gompertz equation describes the evolution in time of the median of a geometric stochastic process. From this we infer that the process itself generates the growth. This result further allows us to exploit a stochastic variational principle to account for self-regulation of growth through feedback of relative density variations. The conceptually well-defined framework so introduced shows its usefulness by suggesting a form of control of growth through external actions.
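The median claim is easy to check numerically: if log X(t) equals a Gompertz-shaped drift plus a zero-median Gaussian term, the median of X(t) is exactly the Gompertz curve. A minimal sketch with illustrative parameter values:

```python
import math
import random

def gompertz(t, x0=1.0, a=1.0, b=0.5):
    """Gompertz curve G(t) = x0 * exp((a/b) * (1 - exp(-b t)))."""
    return x0 * math.exp((a / b) * (1.0 - math.exp(-b * t)))

def endpoint(t, rng, x0=1.0, a=1.0, b=0.5, sigma=0.3):
    """Geometric process: log X(t) = log x0 + (a/b)(1 - e^{-bt}) + sigma W(t).
    The Gaussian term has median zero, so median(X(t)) is the Gompertz curve."""
    drift = (a / b) * (1.0 - math.exp(-b * t))
    return x0 * math.exp(drift + sigma * rng.gauss(0.0, math.sqrt(t)))

rng = random.Random(42)
t = 4.0
xs = sorted(endpoint(t, rng) for _ in range(4001))
empirical_median = xs[len(xs) // 2]
print(empirical_median, gompertz(t))
```

The empirical median of the simulated geometric process matches the deterministic Gompertz value to within sampling error, while the mean is inflated by the lognormal noise.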
Some Topics in Stochastic Control
2010-10-14
Topics include: stochastic flows of diffeomorphisms, (viii) Feller and stability properties of the nonlinear filter, (ix) particle filter methods for atmospheric and oceanic data..., large deviations in a setting with state-dependent rates, and (C.III) large deviations for stochastic flows of diffeomorphisms [11]. Cited: A. Budhiraja, P. Dupuis and V. Maroulas, Variational Representations for Continuous Time..., Bernoulli, 16 (2010), no. 1, 91-113.
Stochastic Models of Polymer Systems
2016-01-01
Distribution Unlimited. Final Report: Stochastic Models of Polymer Systems. The views, opinions and/or findings contained in this report are those of the... Cited publication: Mean-field limit of a dynamical model for polymer systems, Science China Mathematics (11 2012).
Stochastic superparameterization in quasigeostrophic turbulence
Grooms, Ian; Majda, Andrew J.
2014-08-15
In this article we expand and develop the authors' recent proposed methodology for efficient stochastic superparameterization algorithms for geophysical turbulence. Geophysical turbulence is characterized by significant intermittent cascades of energy from the unresolved to the resolved scales resulting in complex patterns of waves, jets, and vortices. Conventional superparameterization simulates large scale dynamics on a coarse grid in a physical domain, and couples these dynamics to high-resolution simulations on periodic domains embedded in the coarse grid. Stochastic superparameterization replaces the nonlinear, deterministic eddy equations on periodic embedded domains by quasilinear stochastic approximations on formally infinite embedded domains. The result is a seamless algorithm which never uses a small scale grid and is far cheaper than conventional SP, but with significant success in difficult test problems. Various design choices in the algorithm are investigated in detail here, including decoupling the timescale of evolution on the embedded domains from the length of the time step used on the coarse grid, and sensitivity to certain assumed properties of the eddies (e.g. the shape of the assumed eddy energy spectrum). We present four closures based on stochastic superparameterization which elucidate the properties of the underlying framework: a ‘null hypothesis’ stochastic closure that uncouples the eddies from the mean, a stochastic closure with nonlinearly coupled eddies and mean, a nonlinear deterministic closure, and a stochastic closure based on energy conservation. The different algorithms are compared and contrasted on a stringent test suite for quasigeostrophic turbulence involving two-layer dynamics on a β-plane forced by an imposed background shear. The success of the algorithms developed here suggests that they may be fruitfully applied to more realistic situations. They are expected to be particularly useful in providing accurate and
Shipping Cask Studies with MOX Fuel
Pavlovichev, A.M.
2001-05-17
Tasks of nuclear safety assurance for the storage and transport of fresh mixed uranium-plutonium fuel for the VVER-1000 reactor are considered in view of the introduction of 3 MOX lead test assemblies (LTAs) into the core. The calculations use the precision code MCU, which implements the Monte Carlo method.
ERIC Educational Resources Information Center
Crank, Ron
This instructional unit is one of 10 developed by students on various energy-related areas that deals specifically with fossil fuels. Some topics covered are historic facts, development of fuels, history of oil production, current and future trends of the oil industry, refining fossil fuels, and environmental problems. Material in each unit may…
Phenomenology of stochastic exponential growth
NASA Astrophysics Data System (ADS)
Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya
2017-06-01
Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet the literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM; instead, it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
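The distinction the abstract draws can be reproduced in a few lines: for GBM (noise exponent 1) the variance of the mean-rescaled endpoint keeps growing, while for a fractional power (here 1/2, simulated by Euler-Maruyama) it saturates. Parameter values are illustrative, not the paper's:

```python
import math
import random

rng = random.Random(7)
mu, sigma, x0, n_paths = 1.0, 0.4, 1.0, 3000

def gbm_endpoint(t):
    # exact GBM sample: X = x0 * exp((mu - sigma^2/2) t + sigma sqrt(t) Z)
    z = rng.gauss(0.0, 1.0)
    return x0 * math.exp((mu - 0.5 * sigma * sigma) * t + sigma * math.sqrt(t) * z)

def sqrt_noise_endpoint(t, dt=0.01):
    # Euler-Maruyama for dX = mu X dt + sigma sqrt(X) dW  (power alpha = 1/2)
    x = x0
    for _ in range(int(round(t / dt))):
        x += mu * x * dt + sigma * math.sqrt(max(x, 0.0) * dt) * rng.gauss(0.0, 1.0)
        x = max(x, 0.0)
    return x

def rescaled_var(samples):
    """Variance of X / mean(X), i.e. of the mean-rescaled distribution."""
    m = sum(samples) / len(samples)
    return sum((x / m - 1.0) ** 2 for x in samples) / len(samples)

v_gbm = {t: rescaled_var([gbm_endpoint(t) for _ in range(n_paths)]) for t in (2.0, 4.0)}
v_sqr = {t: rescaled_var([sqrt_noise_endpoint(t) for _ in range(n_paths)]) for t in (2.0, 4.0)}
print(v_gbm, v_sqr)
```

Analytically, the GBM rescaled variance is exp(sigma^2 t) - 1 (unbounded growth), while for alpha = 1/2 it is sigma^2 (1 - e^{-mu t}) / (mu x0), which tends to a constant; the simulation reflects both.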
Filippone, W.L.; Baker, R.S.
1990-12-31
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_N region. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S_N calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
Stochastic uncertainty analysis for unconfined flow systems
Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming
2006-01-01
A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (ln KS) is first expanded into a series in terms of orthogonal Gaussian standard random variables, with the coefficients obtained from the eigenvalues and eigenfunctions of the covariance function of ln KS. Next, head h is decomposed as a perturbation expansion series h = Σ_m h^(m), where h^(m) represents the mth-order head term with respect to the standard deviation of ln KS. Then h^(m) is further expanded into a polynomial series of products of m orthogonal Gaussian standard random variables, whose coefficients h^(m)_{i1,i2,...,im} are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on these coefficients. A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort than the traditional Monte Carlo simulation technique. Copyright 2006 by the American Geophysical Union.
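The first step of KLME, the Karhunen-Loeve expansion of a correlated random field, can be sketched on a small 1-D grid: eigendecompose the covariance matrix, then synthesize field samples from a few leading modes. This stdlib-only toy (with an assumed exponential covariance and a hand-rolled Jacobi eigensolver) illustrates only the expansion, not the moment equations:

```python
import math
import random

def jacobi_eigh(A, sweeps=30):
    """Eigendecomposition of a small symmetric matrix by cyclic Jacobi
    rotations (adequate for the 16x16 covariance used here)."""
    n = len(A)
    A = [row[:] for row in A]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = sum(A[p][q] ** 2 for p in range(n) for q in range(p + 1, n))
        if off < 1e-20:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-15:
                    continue
                theta = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):  # A <- G^T A
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k], A[q][k] = c * apk - s * aqk, s * apk + c * aqk
                for k in range(n):  # A <- A G
                    akp, akq = A[k][p], A[k][q]
                    A[k][p], A[k][q] = c * akp - s * akq, s * akp + c * akq
                for k in range(n):  # accumulate eigenvectors
                    vkp, vkq = V[k][p], V[k][q]
                    V[k][p], V[k][q] = c * vkp - s * vkq, s * vkp + c * vkq
    pairs = sorted(((A[i][i], [V[k][i] for k in range(n)]) for i in range(n)),
                   key=lambda e: -e[0])
    return [e[0] for e in pairs], [e[1] for e in pairs]

# Exponential covariance C(x, x') = var * exp(-|x - x'| / corr_len) on a grid.
n, var, corr_len = 16, 1.0, 1.0
xs = [i / (n - 1) for i in range(n)]
C = [[var * math.exp(-abs(xi - xj) / corr_len) for xj in xs] for xi in xs]
lam, phi = jacobi_eigh(C)

def kl_sample(n_modes, rng):
    """Truncated KL sample: Y(x_i) = sum_m sqrt(lam_m) phi_m(x_i) xi_m."""
    xi = [rng.gauss(0.0, 1.0) for _ in range(n_modes)]
    return [sum(math.sqrt(max(lam[m], 0.0)) * phi[m][i] * xi[m]
                for m in range(n_modes)) for i in range(n)]

captured = sum(lam[:4]) / sum(lam)     # variance fraction in 4 leading modes
rng = random.Random(3)
fields = [kl_sample(4, rng) for _ in range(2000)]
mid_var = sum(f[n // 2] ** 2 for f in fields) / len(fields)
print(captured, mid_var)
```

With a correlation length comparable to the domain, a handful of modes captures most of the variance, which is exactly why KL-based methods can beat plain Monte Carlo on smooth random fields.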
Parallel stochastic systems biology in the cloud.
Aldinucci, Marco; Torquati, Massimo; Spampinato, Concetto; Drocco, Maurizio; Misale, Claudia; Calcagno, Cristina; Coppo, Mario
2014-09-01
The stochastic modelling of biological systems, coupled with Monte Carlo simulation of models, is an increasingly popular technique in bioinformatics. The simulation-analysis workflow can be computationally expensive, reducing the interactivity required for model tuning. In this work, we advocate high-level software design as a vehicle for building efficient and portable parallel simulators for the cloud. In particular, the Calculus of Wrapped Components (CWC) simulator for systems biology, which is designed according to the FastFlow pattern-based approach, is presented and discussed. Thanks to the FastFlow framework, the CWC simulator is designed as a high-level workflow that can simulate CWC models, merge simulation results and statistically analyse them in a single parallel workflow in the cloud. To improve interactivity, successive phases are pipelined in such a way that the workflow begins to output a stream of analysis results immediately after the simulation is started. The performance and effectiveness of the CWC simulator are validated on the Amazon Elastic Compute Cloud.
Stochastic models for cell motion and taxis.
Ionides, Edward L; Fang, Kathy S; Isseroff, R Rivkah; Oster, George F
2004-01-01
Certain biological experiments investigating cell motion result in time lapse video microscopy data which may be modeled using stochastic differential equations. These models suggest statistics for quantifying experimental results and testing relevant hypotheses, and carry implications for the qualitative behavior of cells and for underlying biophysical mechanisms. Directional cell motion in response to a stimulus, termed taxis, has previously been modeled at a phenomenological level using the Keller-Segel diffusion equation. The Keller-Segel model cannot distinguish certain modes of taxis, and this motivates the introduction of a richer class of models which is nevertheless still amenable to statistical analysis. A state space model formulation is used to link models proposed for cell velocity to observed data. Sequential Monte Carlo methods enable parameter estimation via maximum likelihood for a range of applicable models. One particular experimental situation, involving the effect of an electric field on cell behavior, is considered in detail. In this case, an Ornstein-Uhlenbeck model for cell velocity is found to compare favorably with a nonlinear diffusion model.
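The sequential Monte Carlo step described above can be sketched with a bootstrap particle filter: an Ornstein-Uhlenbeck velocity state, noisy velocity observations, and a filter that returns the log-likelihood of a candidate drift. All parameter values here are illustrative assumptions, not the paper's estimates:

```python
import math
import random

rng = random.Random(11)
dt, theta, sigma, obs_sd, T = 0.1, 1.0, 0.5, 0.2, 200

def simulate(mu):
    """Ornstein-Uhlenbeck velocity with noisy observations y_t = v_t + noise."""
    v, ys = 0.0, []
    for _ in range(T):
        v += theta * (mu - v) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        ys.append(v + obs_sd * rng.gauss(0, 1))
    return ys

def log_likelihood(ys, mu, n_particles=500, seed=5):
    """Bootstrap particle filter estimate of the log-likelihood of mu."""
    r = random.Random(seed)
    parts = [0.0] * n_particles
    ll = 0.0
    for y in ys:
        # propagate particles through the OU dynamics
        parts = [v + theta * (mu - v) * dt + sigma * math.sqrt(dt) * r.gauss(0, 1)
                 for v in parts]
        # weight by the observation density, accumulate the likelihood
        ws = [math.exp(-0.5 * ((y - v) / obs_sd) ** 2) for v in parts]
        ll += math.log(sum(ws) / n_particles / (obs_sd * math.sqrt(2 * math.pi)))
        parts = r.choices(parts, weights=ws, k=n_particles)  # resample
    return ll

ys = simulate(mu=1.0)                      # data generated with drift toward 1
ll_drift = log_likelihood(ys, mu=1.0)
ll_nodrift = log_likelihood(ys, mu=0.0)
print(ll_drift, ll_nodrift)
```

Maximizing such a filter-based likelihood over the model parameters is the essence of the estimation strategy; here the correct drift is clearly preferred over a zero-drift alternative.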
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
Brennan J. M.; Blaskiewicz, M.; Mernick, K.
2012-05-20
The full 6-dimensional [x,x'; y,y'; z,z'] stochastic cooling system for RHIC was completed and operational for the FY12 Uranium-Uranium collider run. Cooling enhances the integrated luminosity of the Uranium collisions by a factor of 5, primarily by reducing the transverse emittances but also by cooling in the longitudinal plane to preserve the bunch length. The components have been deployed incrementally over the past several runs, beginning with longitudinal cooling, then cooling in the vertical planes but multiplexed between the Yellow and Blue rings, next cooling both rings simultaneously in vertical (the horizontal plane was cooled by betatron coupling), and now simultaneous horizontal cooling has been commissioned. The system operates between 5 and 9 GHz and, with 3 x 10^8 Uranium ions per bunch, produces a cooling half-time of approximately 20 minutes. The ultimate emittance is determined by the balance between cooling and emittance growth from Intra-Beam Scattering. Specific details of the apparatus and mathematical techniques for calculating its performance have been published elsewhere. Here we report on the method of operation, results with beam, and comparison of results to simulations.
NASA Astrophysics Data System (ADS)
McDonnell, Mark D.; Amblard, Pierre-Olivier; Stocks, Nigel G.
2009-01-01
We introduce and define the concept of a stochastic pooling network (SPN), as a model for sensor systems where redundancy and two forms of 'noise'—lossy compression and randomness—interact in surprising ways. Our approach to analysing SPNs is information theoretic. We define an SPN as a network with multiple nodes that each produce noisy and compressed measurements of the same information. An SPN must combine all these measurements into a single further compressed network output, in a way dictated solely by naturally occurring physical properties—i.e. pooling—and yet cause no (or negligible) reduction in mutual information. This means that SPNs exhibit redundancy reduction as an emergent property of pooling. The SPN concept is applicable to examples in biological neural coding, nanoelectronics, distributed sensor networks, digital beamforming arrays, image processing, multiaccess communication networks and social networks. In most cases the randomness is assumed to be unavoidably present rather than deliberately introduced. We illustrate the central properties of SPNs for several case studies, where pooling occurs by summation, including nodes that are noisy scalar quantizers, and nodes with conditionally Poisson statistics. Other emergent properties of SPNs and some unsolved problems are also briefly discussed.
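The central claim, that pooling noisy compressed measurements by plain summation can preserve information, can be checked numerically with the simplest SPN: noisy binary threshold nodes summed into one output, with mutual information estimated by Monte Carlo. The node count, threshold, and noise level below are illustrative assumptions:

```python
import math
import random

def mutual_info(n_nodes, noise_sd=0.8, trials=20000, seed=2):
    """Estimate I(X;Y) in bits for a stochastic pooling network: X is an
    equiprobable bit, node i outputs 1{X + noise_i > 0.5} with independent
    Gaussian noise, and the network output Y is the sum of the node outputs."""
    rng = random.Random(seed)
    p_cond = []
    for x in (0, 1):
        counts = [0] * (n_nodes + 1)
        for _ in range(trials):
            y = sum(1 for _ in range(n_nodes)
                    if x + rng.gauss(0.0, noise_sd) > 0.5)
            counts[y] += 1
        p_cond.append([c / trials for c in counts])
    info = 0.0
    for y in range(n_nodes + 1):
        py = 0.5 * (p_cond[0][y] + p_cond[1][y])
        for x in (0, 1):
            if p_cond[x][y] > 0:
                info += 0.5 * p_cond[x][y] * math.log2(p_cond[x][y] / py)
    return info

i1, i15 = mutual_info(1), mutual_info(15)
print(i1, i15)
```

A single noisy node transmits only a fraction of the one bit in X, while fifteen pooled nodes recover nearly all of it despite the output being compressed to just sixteen levels, illustrating the redundancy-noise interplay the abstract describes.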
Stochastic processes in gravitropism.
Meroz, Yasmine; Bastien, Renaud
2014-01-01
In this short review we focus on the role of noise in gravitropism of plants - the reorientation of plants according to the direction of gravity. We briefly introduce the conventional picture of static gravisensing in cells specialized in sensing. This model hinges on the sedimentation of statoliths (high in density and mass relative to other organelles) to the lowest part of the sensing cell. We then present experimental observations that cannot currently be understood within this framework. Lastly we introduce some current alternative models and directions that attempt to incorporate and interpret these experimental observations, including: (i) dynamic sensing, where gravisensing is suggested to be enhanced by stochastic events due to thermal and mechanical noise. These events both effectively lower the threshold of response, and lead to small-distance sedimentation, allowing amplification, and integration of the signal. (ii) The role of the cytoskeleton in signal-to-noise modulation and (iii) in signal transduction. In closing, we discuss directions that seem to either not have been explored, or that are still poorly understood.
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
NASA Astrophysics Data System (ADS)
Chushak, Yaroslav; Foy, Brent; Frazier, John
2008-03-01
At the functional level, all biological processes in cells can be represented as a series of biochemical reactions that are stochastic in nature. We have developed a software package called Biomolecular Network Simulator (BNS) that uses a stochastic approach to model and simulate complex biomolecular reaction networks. Two simulation algorithms - the exact Gillespie stochastic simulation algorithm and the approximate adaptive tau-leaping algorithm - are implemented for generating Monte Carlo trajectories that describe the evolution of a system of biochemical reactions. The software uses a combination of MATLAB and C-coded functions and is parallelized with the Message Passing Interface (MPI) library to run on multiprocessor architectures. We will present a brief description of the Biomolecular Network Simulator software along with some examples.
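The two algorithm families named above, exact Gillespie SSA and approximate tau-leaping, can be contrasted on the simplest possible network, first-order decay A -> 0, where the analytic mean n0*exp(-c*t) is available. This is a generic stdlib sketch, not BNS code:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; fine for the small means used here."""
    l = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def ssa_decay(n0, c, t_end, rng):
    """Exact Gillespie simulation of A -> 0 with rate c per molecule."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.expovariate(c * n)
        if t >= t_end:
            break
        n -= 1
    return n

def tau_leap_decay(n0, c, t_end, tau, rng):
    """Approximate tau-leaping: fire a Poisson number of decays per interval."""
    n, t = n0, 0.0
    while t < t_end - 1e-12 and n > 0:
        n -= min(n, poisson(c * n * tau, rng))
        t += tau
    return n

rng = random.Random(9)
n0, c, t_end = 100, 1.0, 1.0
exact = [ssa_decay(n0, c, t_end, rng) for _ in range(1000)]
leap = [tau_leap_decay(n0, c, t_end, 0.05, rng) for _ in range(1000)]
analytic = n0 * math.exp(-c * t_end)
print(sum(exact) / 1000, sum(leap) / 1000, analytic)
```

Tau-leaping trades a small, step-size-dependent bias for far fewer random draws per trajectory, which is why adaptive leaping is the practical choice for stiff, high-population networks.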
NASA Technical Reports Server (NTRS)
Grobman, J. S.; Butze, H. F.; Friedman, R.; Antoine, A. C.; Reynolds, T. W.
1977-01-01
Potential problems related to the use of alternative aviation turbine fuels are discussed and both ongoing and required research into these fuels is described. This discussion is limited to aviation turbine fuels composed of liquid hydrocarbons. The advantages and disadvantages of the various solutions to the problems are summarized. The first solution is to continue to develop the necessary technology at the refinery to produce specification jet fuels regardless of the crude source. The second solution is to minimize energy consumption at the refinery and keep fuel costs down by relaxing specifications.
Mekonen, K.
1989-10-31
This patent describes a hydrosol fuel. It comprises: from about 67% to 94% by weight of a hydrocarbon combustible fuel selected from the group consisting of the gasolines, diesel fuels and heavy fuel oils; from 5 to 25% by weight of water; at least one surfactant operable to create a hydrosol with the fuel and water; and, present in the range of 0.1 up to about 3.4% by weight, an additive selected from the group consisting of alpha (mono) olefins and alkyl benzenes, each of the former having 7 to 15 recurring C_2 monomers therein.
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; however, the logarithmic schedule is so slow as to be computationally impractical. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic schedule, e.g., a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
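A square-root cooling schedule is easy to demonstrate in isolation. The sketch below is plain Metropolis annealing with T_k = t0 / sqrt(k + 1) on a toy double-well objective; it is a stand-in illustration only, not the paper's SAA algorithm (which adds the stochastic approximation Monte Carlo machinery that makes the fast schedule provably convergent):

```python
import math
import random

def f(x):
    # double well; global minimum near x = -1.02, local minimum near x = +0.98
    return (x * x - 1.0) ** 2 + 0.3 * x

def anneal(n_iters=5000, t0=2.0, seed=4):
    """Metropolis simulated annealing with a square-root cooling schedule."""
    rng = random.Random(seed)
    x, fx = 1.0, f(1.0)          # deliberately start in the wrong well
    best_x, best_f = x, fx
    for k in range(n_iters):
        temp = t0 / math.sqrt(k + 1.0)       # square-root cooling
        y = x + rng.gauss(0.0, 0.5)          # random-walk proposal
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / temp):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

bx, bf = anneal()
print(bx, bf)
```

On this toy problem the fast schedule escapes the shallow well early, while the temperature is still high, and then refines within the deep well; the paper's contribution is making that behavior rigorous for the combined SAA sampler.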
A stochastic hybrid systems based framework for modeling dependent failure processes.
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.
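The FOSM reliability step can be shown on a deliberately simple limit state where the Monte Carlo answer is cheap: linear degradation X(T) = A*T with a random rate A and failure when X(T) exceeds a threshold D. The numbers are invented for illustration and the model is far simpler than the paper's SHS framework:

```python
import math
import random

# Toy limit state: g = D - A*T, failure when g < 0.
muA, sdA, T, D = 0.10, 0.02, 8.0, 1.0

# FOSM: reliability index beta = E[g] / sd[g], then R ~ Phi(beta).
mu_g = D - muA * T
sd_g = sdA * T
beta = mu_g / sd_g
r_fosm = 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

# Monte Carlo check of the same reliability.
rng = random.Random(8)
n = 20000
fails = sum(1 for _ in range(n) if D - rng.gauss(muA, sdA) * T < 0.0)
r_mc = 1.0 - fails / n

print(r_fosm, r_mc)
```

For this linear-Gaussian case FOSM is exact, so the two estimates agree to Monte Carlo error; for the nonlinear, shock-driven processes in the paper FOSM only uses the first two conditional moments, which is the source of both its speed and its approximation error.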
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
Numerical treatment of stochastic river quality models driven by colored noise
NASA Astrophysics Data System (ADS)
Stijnen, J. W.; Heemink, A. W.; Ponnambalam, K.
2003-03-01
Monte Carlo simulation is a popular method of risk and uncertainty analysis in oceanographic, atmospheric, and environmental applications. It is common practice to introduce a stochastic part to an already existing deterministic model and, after many simulations, to provide the user with statistics of the model outcome. The underlying deterministic model is often a discretization of a set of partial differential equations describing physical processes such as transport, turbulence, buoyancy effects, and continuity. Much effort is also put into deriving numerically efficient schemes for the time integration. The resulting model is often quite large and complex. In sharp contrast the stochastic extension used for Monte Carlo experiments is usually achieved by adding white noise. Unfortunately, the order of time integration in the stochastic model is reduced compared to the deterministic model because white noise is not a smooth process. Instead of completely replacing the old numerical scheme and implementing a higher-order scheme for stochastic differential equations, we suggest a different approach that is able to use existing numerical schemes. The method uses a smooth colored noise process as the driving force, resulting in a higher order of convergence. We show promising results from numerical experiments, including parametric uncertainty.
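The colored-noise idea can be sketched directly: generate an Ornstein-Uhlenbeck process with its exact one-step update, then feed the resulting smooth signal into an ordinary explicit Euler scheme. Parameter values are illustrative:

```python
import math
import random

# OU colored noise d(xi) = -theta*xi dt + sqrt(q) dW has the exact update
# xi_{n+1} = xi_n e^{-theta dt} + sqrt(q/(2 theta) (1 - e^{-2 theta dt})) Z.
theta, q, dt, n_steps = 2.0, 1.0, 0.01, 50000
rng = random.Random(6)
decay = math.exp(-theta * dt)
scale = math.sqrt(q / (2.0 * theta) * (1.0 - math.exp(-2.0 * theta * dt)))

xi, x = 0.0, 0.0
xis = []
for _ in range(n_steps):
    xi = xi * decay + scale * rng.gauss(0.0, 1.0)
    xis.append(xi)
    # The smooth colored signal drives an ordinary deterministic scheme;
    # no dedicated SDE integrator is needed for this step.
    x += (-x + xi) * dt

var = sum(v * v for v in xis) / n_steps
print(var, q / (2.0 * theta))   # stationary variance should be near q/(2*theta)
```

Because the driving signal is differentiable-in-the-limit rather than white, the existing deterministic time integrator keeps a higher order of convergence, which is the point the abstract makes about reusing existing numerical schemes.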
NASA Astrophysics Data System (ADS)
Täuber, Uwe C.
2013-03-01
Field theory tools are applied to analytically study fluctuation and correlation effects in spatially extended stochastic predator-prey systems. In the mean-field rate equation approximation, the classic Lotka-Volterra model is characterized by neutral cycles in phase space, describing undamped oscillations for both predator and prey populations. In contrast, Monte Carlo simulations for stochastic two-species predator-prey reaction systems on regular lattices display complex spatio-temporal structures associated with persistent erratic population oscillations. The Doi-Peliti path integral representation of the master equation for stochastic particle interaction models is utilized to arrive at a field theory action for spatial Lotka-Volterra models in the continuum limit. In the species coexistence phase, a perturbation expansion with respect to the nonlinear predation rate is employed to demonstrate that spatial degrees of freedom and stochastic noise induce instabilities toward structure formation, and to compute the fluctuation corrections for the oscillation frequency and diffusion coefficient. The drastic downward renormalization of the frequency and the enhanced diffusivity are in excellent qualitative agreement with Monte Carlo simulation data.
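The mean-field baseline referred to above, neutral Lotka-Volterra cycles, can be verified numerically: the rate equations conserve a first integral H along every orbit, so trajectories neither spiral in nor out. A small RK4 sketch with illustrative unit rates:

```python
import math

# Mean-field Lotka-Volterra rate equations:
#   du/dt = u (a - b v)   (prey),    dv/dt = v (c u - d)   (predator)
# Orbits are neutral cycles; H = c u - d ln u + b v - a ln v is conserved.
a, b, c, d = 1.0, 1.0, 1.0, 1.0

def deriv(u, v):
    return u * (a - b * v), v * (c * u - d)

def rk4_step(u, v, h):
    k1u, k1v = deriv(u, v)
    k2u, k2v = deriv(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
    k3u, k3v = deriv(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
    k4u, k4v = deriv(u + h * k3u, v + h * k3v)
    return (u + h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0,
            v + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

def H(u, v):
    return c * u - d * math.log(u) + b * v - a * math.log(v)

u, v = 1.5, 1.0
h0 = H(u, v)
for _ in range(2000):        # integrate to t = 20
    u, v = rk4_step(u, v, 0.01)
print(H(u, v), h0)
```

It is precisely this structural neutrality that stochastic lattice simulations destroy: demographic noise and spatial degrees of freedom turn the neutral cycles into the erratic, renormalized oscillations the field-theoretic analysis quantifies.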
Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina; Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander; Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
Kilinc, Deniz; Demir, Alper
2017-08-01
The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
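The fine-grained versus coarse-grained correspondence described above can be illustrated on a single two-state ion channel population: a discrete per-channel Markov chain against its chemical-Langevin SDE approximation, compared through their stationary statistics. This is a generic toy with assumed unit rates, not the paper's framework or simulator:

```python
import math
import random

alpha, beta, N = 1.0, 1.0, 100    # opening / closing rates, channel count
p = alpha / (alpha + beta)         # stationary open probability
dt, n_steps = 0.02, 20000
rng = random.Random(12)

# Fine-grained route: discrete-state chain, one Bernoulli flip attempt per
# channel per step (a small-dt approximation of the continuous-time chain).
n_open = N // 2
mc_frac = []
for _ in range(n_steps):
    opened = sum(1 for _ in range(N - n_open) if rng.random() < alpha * dt)
    closed = sum(1 for _ in range(n_open) if rng.random() < beta * dt)
    n_open += opened - closed
    mc_frac.append(n_open / N)

# Coarse-grained route: chemical-Langevin SDE for the open fraction f,
#   df = (alpha(1-f) - beta f) dt + sqrt((alpha(1-f) + beta f)/N) dW
f = 0.5
sde_frac = []
for _ in range(n_steps):
    drift = alpha * (1.0 - f) - beta * f
    diff = math.sqrt(max(alpha * (1.0 - f) + beta * f, 0.0) / N)
    f += drift * dt + diff * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    f = min(max(f, 0.0), 1.0)
    sde_frac.append(f)

def stats(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

print(stats(mc_frac), stats(sde_frac), (p, p * (1 - p) / N))
```

Both routes reproduce the binomial stationary mean p and variance p(1-p)/N; the SDE does so at a cost independent of the channel count, which is the motivation for automatic generation of such diffusion approximations.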
Stochastic generation of hourly rainstorm events in Johor
Nojumuddin, Nur Syereena; Yusof, Fadhilah; Yusop, Zulkifli
2015-02-03
Engineers and researchers in water-related studies often face the problem of insufficiently long rainfall records. Practical and effective methods must be developed to generate unavailable data from the limited data available. This paper therefore presents a Monte Carlo-based stochastic hourly rainfall generation model to complement the unavailable data. The Monte Carlo simulation used in this study is based on the best-fit distribution of storm characteristics. Using Maximum Likelihood Estimation (MLE) and the Anderson-Darling goodness-of-fit test, the lognormal distribution was found to fit the rainfall data best, so the Monte Carlo simulation was based on the lognormal distribution. The proposed model was verified by comparing the statistical moments of rainstorm characteristics from the combination of observed rainstorm events over 10 years and simulated rainstorm events over 30 years with those from the entire 40 years of observed hourly rainfall data at station J1 in Johor over the period 1972-2011. The absolute percentage errors of the duration-depth, duration-inter-event-time, and depth-inter-event-time relationships were used as the accuracy test. The results showed that the first four product-moments of the observed rainstorm characteristics were close to those of the simulated rainstorm characteristics. The proposed model can be used as a basis to derive rainfall intensity-duration-frequency relationships in Johor.
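The generation step can be sketched as follows: fit a lognormal distribution to observed storm characteristics by MLE, then Monte Carlo-sample synthetic events from the fitted distribution. The observed depths below are hypothetical placeholders, not the Johor data:

```python
import math
import random

def fit_lognormal_mle(data):
    """MLE for a lognormal: mean and standard deviation of log(data)."""
    logs = [math.log(x) for x in data]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))
    return mu, sigma

def simulate_storm_depths(mu, sigma, n, rng):
    """Monte Carlo generation of n synthetic storm depths."""
    return [math.exp(rng.gauss(mu, sigma)) for _ in range(n)]

observed = [2.0, 5.1, 3.3, 8.7, 1.2, 4.4, 6.0, 2.8]   # hypothetical depths (mm)
mu, sigma = fit_lognormal_mle(observed)
synthetic = simulate_storm_depths(mu, sigma, 1000, random.Random(42))
```

In practice each storm characteristic (depth, duration, inter-event time) would be fitted and sampled in the same way, with the goodness of fit checked before simulation.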
Riedewald, Frank; Byrne, Edmond; Cronin, Kevin
2011-01-01
This work presents a deterministic and a stochastic model for the simulation of industrial-size deionized water and water-for-injection (DI/WFI) systems. The objective of the simulations is to determine whether additional DI/WFI demand from future production processes can be supported by an existing DI/WFI system. The models use discrete event simulation to compute the demand profile drawn from the distribution system and a continuous simulation to calculate the variation of the water level in the storage tank. Whereas the deterministic model ignores uncertainties, the stochastic model allows for both volume and schedule uncertainties; the Monte Carlo method is applied to solve the stochastic model. This paper compares the deterministic and stochastic models and shows that the deterministic model may be suitable for most applications and that the stochastic model should only be used if found necessary by the deterministic simulation. The models are programmed in Excel 2003 and are available for download as open public domain software (1), allowing for public modification and improvement of the models. The proposed models may also be used to size or analyze the performance of other utilities, such as heat transfer media, drinking water, etc. Water-for-injection (WFI) and other pharmaceutical water distribution systems are notoriously difficult to analyze analytically due to the highly dynamic, variable demand that is drawn from these systems. Discrete event simulation may provide an answer where the typical engineering approach of utilizing a diversity factor fails. This paper develops an Excel-based deterministic and stochastic model for a WFI system, with the latter allowing for the modeling of offtake volume and schedule uncertainty. The paper also compares the deterministic and stochastic models and shows that the deterministic model may be suitable for most applications while the stochastic model should only be used if found necessary. The
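A toy version of the tank-level calculation (continuous refill plus discrete offtake events) illustrates the deterministic model; a stochastic run would simply perturb the draw volumes and times before each Monte Carlo repetition. All numbers below are hypothetical:

```python
def simulate_tank(initial, capacity, fill_rate, draws):
    """Storage tank refilled continuously at fill_rate and emptied by
    discrete draw events (time, volume), processed in time order.
    Returns the minimum water level reached."""
    level, t, min_level = initial, 0.0, initial
    for time, volume in sorted(draws):
        level = min(capacity, level + fill_rate * (time - t))  # refill phase
        level -= volume                                        # offtake event
        min_level = min(min_level, level)
        t = time
    return min_level

# Hypothetical demand profile: two close-together draws stress the tank.
min_level = simulate_tank(initial=100.0, capacity=100.0, fill_rate=1.0,
                          draws=[(10.0, 30.0), (12.0, 40.0), (50.0, 20.0)])
```

A negative minimum level would flag that the existing system cannot support the proposed demand schedule.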
Segmentation of stochastic images with a stochastic random walker method.
Pätz, Torben; Preusser, Tobias
2012-05-01
We present an extension of random walker segmentation to images with uncertain gray values. Such gray-value uncertainty may result from noise or other imaging artifacts or, more generally, from measurement errors in the image acquisition process. The purpose is to quantify the influence of the gray-value uncertainty on the result of random walker segmentation. In random walker segmentation, a weighted graph is built from the image, where the edge weights depend on the image gradient between pixels. For given seed regions, the probability is evaluated that a random walk on this graph starting at a pixel ends in one of the seed regions. Here, we extend this method to images with uncertain gray values. To this end, we consider the pixel values to be random variables (RVs), thus introducing the notion of stochastic images. We end up with stochastic weights for the graph in random walker segmentation and a stochastic partial differential equation (PDE) that has to be solved. We discretize the RVs and the stochastic PDE by the method of generalized polynomial chaos, combining recent developments in numerical methods for the discretization of stochastic PDEs with an interactive segmentation algorithm. The resulting algorithm allows for the detection of regions where the segmentation result is highly influenced by the uncertain pixel values. Thus, it gives a reliability estimate for the resulting segmentation, and it furthermore allows the probability density function of the segmented object volume to be determined.
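For a deterministic (non-stochastic) image, the random walker probabilities are harmonic with respect to gradient-dependent edge weights; a minimal 1-D sketch (hypothetical gray values, Gauss-Seidel solve) looks like this:

```python
import math

def random_walker_1d(gray, seeds, beta, iters=2000):
    """Random walker probabilities on a 1-D image. seeds maps pixel index
    to a fixed probability (1.0 = foreground seed, 0.0 = background seed).
    Unseeded pixels are made harmonic w.r.t. the edge weights
    w = exp(-beta * (g_i - g_j)^2) by Gauss-Seidel iteration."""
    n = len(gray)
    w = [math.exp(-beta * (gray[i] - gray[i + 1]) ** 2) for i in range(n - 1)]
    p = [seeds.get(i, 0.5) for i in range(n)]
    for _ in range(iters):
        for i in range(n):
            if i in seeds:
                continue
            num = den = 0.0
            if i > 0:
                num += w[i - 1] * p[i - 1]
                den += w[i - 1]
            if i < n - 1:
                num += w[i] * p[i + 1]
                den += w[i]
            p[i] = num / den
    return p

# Bright region on the left, dark on the right, sharp edge in between.
probs = random_walker_1d([1.0, 1.0, 0.95, 0.1, 0.05], {0: 1.0, 4: 0.0},
                         beta=20.0)
```

The paper's extension makes the gray values (and hence the weights) random variables, so each probability becomes a distribution rather than a single number.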
Modeling Bacterial Population Growth from Stochastic Single-Cell Dynamics
Molina, Ignacio; Theodoropoulos, Constantinos
2014-01-01
A few bacterial cells may be sufficient to produce a food-borne illness outbreak, provided that they are capable of adapting and proliferating on a food matrix. This is why any quantitative health risk assessment policy must incorporate methods to accurately predict the growth of bacterial populations from a small number of pathogens. To this end, mathematical models have become a powerful tool. Unfortunately, at low cell concentrations, standard deterministic models fail to predict the fate of the population, essentially because the heterogeneity between individuals becomes relevant. In this work, a stochastic differential equation (SDE) model is proposed to describe variability within single-cell growth and division and to simulate population growth from a given initial number of individuals. We provide evidence of the model's ability to explain the observed distributions of times to division, including the lag time produced by the adaptation to the environment, by comparing model predictions with experiments from the literature for Escherichia coli, Listeria innocua, and Salmonella enterica. The model is shown to accurately predict experimental growth population dynamics for both small and large microbial populations. The use of stochastic models for the estimation of parameters to successfully fit experimental data is a particularly challenging problem. For instance, if Monte Carlo methods are employed to model the required distributions of times to division, the parameter estimation problem can become numerically intractable. We overcame this limitation by converting the stochastic description to a partial differential equation (backward Kolmogorov) instead, which relates to the distribution of division times. Contrary to previous stochastic formulations based on random parameters, the present model is capable of explaining the variability observed in populations that result from the growth of a small number of initial cells as well as the lack of it compared to
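The SDE viewpoint can be illustrated by Euler-Maruyama integration of a toy multiplicative-noise growth law, dm = r m dt + sigma m dW. The parameters below are hypothetical, not the fitted values from the paper:

```python
import math
import random

def euler_maruyama_growth(m0, r, sigma, dt, n_steps, rng):
    """Euler-Maruyama integration of dm = r*m*dt + sigma*m*dW, a toy
    multiplicative-noise SDE for single-cell biomass growth."""
    m = m0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        m += r * m * dt + sigma * m * dW
    return m

rng = random.Random(7)
# Grow 500 cells independently and average their final mass.
final_masses = [euler_maruyama_growth(1.0, 0.5, 0.1, 0.01, 200, rng)
                for _ in range(500)]
mean_mass = sum(final_masses) / len(final_masses)
# The exact mean of the SDE is m0 * exp(r * t) = exp(0.5 * 2.0), about 2.72.
```

Cell-to-cell variability enters through the independent noise paths; the paper's backward Kolmogorov reformulation avoids having to simulate many such paths during parameter estimation.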
Distributional Monte Carlo methods for the Boltzmann equation
NASA Astrophysics Data System (ADS)
Schrock, Christopher R.
Stochastic particle methods (SPMs) for the Boltzmann equation, such as the Direct Simulation Monte Carlo (DSMC) technique, have gained popularity for the prediction of flows in which the assumptions behind the continuum equations of fluid mechanics break down; however, there are still a number of issues that make SPMs computationally challenging for practical use. In traditional SPMs, simulated particles may possess only a single velocity vector, even though they may represent an extremely large collection of actual particles. This limits the method to converge only in law to the Boltzmann solution. This document details the development of new SPMs that allow the velocity of each simulated particle to be distributed. This approach has been termed Distributional Monte Carlo (DMC). A technique is described which applies kernel density estimation to Nanbu's DSMC algorithm. It is then proven that the method converges not just in law, but also in solution for L∞(R³) solutions of the space homogeneous Boltzmann equation. This provides for direct evaluation of the velocity density function. The derivation of a general Distributional Monte Carlo method is given which treats collision interactions between simulated particles as a relaxation problem. The framework is proven to converge in law to the solution of the space homogeneous Boltzmann equation, as well as in solution for L∞(R³) solutions. An approach based on the BGK simplification is presented which computes collision outcomes deterministically. Each technique is applied to the well-studied Bobylev-Krook-Wu solution as a numerical test case. Accuracy and variance of the solutions are examined as functions of various simulation parameters. Significantly improved accuracy and reduced variance are observed in the normalized moments for the Distributional Monte Carlo technique employing discrete BGK collision modeling.
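The kernel density estimation idea behind Distributional Monte Carlo, replacing each simulated particle's delta-function velocity with a smooth kernel, can be sketched in 1-D (hypothetical samples and bandwidth):

```python
import math

def gaussian_kde(samples, h):
    """Kernel density estimate: each particle's velocity contributes a
    Gaussian of bandwidth h instead of a delta function."""
    n = len(samples)
    norm = n * h * math.sqrt(2.0 * math.pi)
    def density(v):
        return sum(math.exp(-0.5 * ((v - s) / h) ** 2) for s in samples) / norm
    return density

f = gaussian_kde([-1.0, 0.0, 1.0], h=0.5)      # three 'simulated particles'
# The estimate integrates to one, like a proper velocity density.
total = sum(f(-5.0 + 0.01 * i) * 0.01 for i in range(1001))
```

This smoothing is what allows direct pointwise evaluation of the velocity density function rather than only convergence in law.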
Stacking with stochastic cooling
NASA Astrophysics Data System (ADS)
Caspers, Fritz; Möhl, Dieter
2004-10-01
Accumulation of large stacks of antiprotons or ions with the aid of stochastic cooling is more delicate than cooling a constant-intensity beam. Basically, the difficulty stems from the fact that the optimized gain and the cooling rate are inversely proportional to the number of particles 'seen' by the cooling system. Therefore, to maintain fast stacking, the newly injected batch has to be strongly 'protected' from the Schottky noise of the stack. Vice versa, the stack has to be efficiently 'shielded' against the high-gain cooling system for the injected beam. In the antiproton accumulators, with stacking ratios up to 10^5, the problem is solved by radial separation of the injection and stack orbits in a region of large dispersion. An array of several tapered cooling systems with a matched gain profile provides a continuous particle flux towards the high-density stack core. Shielding of the different systems from each other is obtained both through the spatial separation and via the revolution frequencies (filters). In the 'old AA', where antiproton collection and stacking were done in a single ring, the injected beam was further shielded during cooling by means of a movable shutter. The complexity of these systems is very high. For more modest stacking ratios, one might use azimuthal rather than radial separation of stack and injected beam. Schematically, half of the circumference would be used to accept and cool new beam and the remainder to house the stack. Fast gating is then required between the high-gain cooling of the injected beam and the low-gain stack cooling. RF gymnastics are used to merge the pre-cooled batch with the stack, to re-create free space for the next injection, and to capture the new batch. This scheme is less demanding for the storage ring lattice, but at the expense of some reduction in stacking rate. The talk reviews the 'radial' separation schemes and also gives some consideration to the 'azimuthal' schemes.
A Stochastic Collocation Algorithm for Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)
2003-01-01
This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. Whereas the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method collapses those summations to a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and, as a numerical example, provides the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
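The collapse to a one-dimensional summation is easiest to see for a single Gaussian parameter: the mean of a response is obtained by evaluating it only at a handful of collocation nodes. A sketch using the standard 5-point Gauss-Hermite rule (the exponential response is a stand-in for a real simulation output):

```python
import math

# Classical 5-point Gauss-Hermite rule (weight exp(-x^2) on the real line).
NODES = [-2.0201828705, -0.9585724646, 0.0, 0.9585724646, 2.0201828705]
WEIGHTS = [0.0199532421, 0.3936193232, 0.9453087205, 0.3936193232, 0.0199532421]

def collocation_mean(g):
    """Mean of g(xi) for xi ~ N(0,1) by stochastic collocation: g is
    evaluated only at the quadrature nodes, collapsing the Galerkin-style
    multi-dimensional summations to a single 1-D sum."""
    s = sum(w * g(math.sqrt(2.0) * x) for x, w in zip(NODES, WEIGHTS))
    return s / math.sqrt(math.pi)

mean = collocation_mean(math.exp)   # the exact answer is exp(0.5), ~1.6487
```

Each node evaluation is an ordinary deterministic solve, which is why collocation is non-intrusive: the simulation code itself never sees the stochastic basis.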
Stamatakis, Michail; Vlachos, Dionisios G
2011-12-14
Well-mixed and lattice-based descriptions of stochastic chemical kinetics have been extensively used in the literature. Realizations of the corresponding stochastic processes are obtained by the Gillespie stochastic simulation algorithm and lattice kinetic Monte Carlo algorithms, respectively. However, the two frameworks have remained disconnected. We show the equivalence of these frameworks whereby the stochastic lattice kinetics reduces to effective well-mixed kinetics in the limit of fast diffusion. In the latter, the lattice structure appears implicitly, as the lumped rate of bimolecular reactions depends on the number of neighbors of a site on the lattice. Moreover, we propose a mapping between the stochastic propensities and the deterministic rates of the well-mixed vessel and lattice dynamics that illustrates the hierarchy of models and the key parameters that enable model reduction.
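The well-mixed side of this equivalence is realized by the Gillespie stochastic simulation algorithm; a minimal sketch for a single bimolecular channel A + B -> C (hypothetical rate constant and molecule counts):

```python
import math
import random

def gillespie(counts, t_end, k, rng):
    """Gillespie SSA for the bimolecular reaction A + B -> C in a
    well-mixed vessel; counts = [A, B, C]."""
    t = 0.0
    while True:
        a = k * counts[0] * counts[1]            # propensity of A + B -> C
        if a == 0.0:
            break                                # no reactant pairs left
        t += -math.log(1.0 - rng.random()) / a   # exponential waiting time
        if t > t_end:
            break
        counts[0] -= 1
        counts[1] -= 1
        counts[2] += 1
    return counts

final = gillespie([50, 40, 0], 100.0, 0.01, random.Random(3))
```

In the lattice picture the same reaction fires only between neighboring sites; the paper's point is that in the fast-diffusion limit the lattice propensities reduce to lumped well-mixed propensities like the one above.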
Application of stochastic radiative transfer to remote sensing of vegetation
NASA Astrophysics Data System (ADS)
Shabanov, Nikolay V.
2002-01-01
The availability of high quality remote sensing data during the past decade provides an impetus for the development of methods that facilitate accurate retrieval of structural and optical properties of vegetation required for the study of global vegetation dynamics. Empirical and statistical methods have proven to be quite useful in many applications, but they often do not shed light on the underlying physical processes. Approaches based on radiative transfer and the physics of matter-energy interaction are therefore required to gain insight into the mechanisms responsible for signal generation. The goal of this dissertation is the development of advanced methods based on radiative transfer for the retrieval of biophysical information from satellite data. Classical radiative transfer theory is applicable to homogeneous vegetation and is generally inaccurate in characterizing the radiation regime in natural vegetation communities, such as forests or woodlands. A stochastic approach to radiative transfer was introduced in this dissertation to describe the radiation regime in discontinuous vegetation canopies. The resulting stochastic model was implemented and tested with field data and Monte Carlo simulations. The effect of gaps on radiation fluxes in vegetation canopies was quantified analytically and compared to classical representations. Next, the stochastic theory was applied to vegetation remote sensing in two case studies. First, the radiative transfer principles underlying an algorithm for leaf area index (LAI) retrieval were studied with data from Harvard Forest. The classical expression for uncollided radiation was modified according to stochastic principles to explain radiometric measurements and vegetation structure. In the second case study, vegetation dynamics in the northern latitudes inferred from the Pathfinder Advanced Very High-Resolution Radiometer Land data were investigated. The signatures of interannual and seasonal variation recorded in the
Enhanced algorithms for stochastic programming
Krishna, A.S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in this method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient, as it reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to provide a starting solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
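The idea of importance sampling a cheap piecewise-linear surrogate can be illustrated on a toy "recourse" function: sample where the function is non-zero and correct each draw with the likelihood ratio. This is a generic sketch, not the dissertation's algorithm:

```python
import math
import random

def recourse(x):
    """Toy piecewise-linear 'recourse' function (hypothetical)."""
    return max(0.0, x - 1.0)

def importance_sampling_mean(n, rng):
    """Estimate E[recourse(X)] for X ~ N(0,1) by sampling from the
    shifted proposal N(1,1), where the function is non-zero, and
    reweighting each draw by the likelihood ratio."""
    total = 0.0
    for _ in range(n):
        y = rng.gauss(1.0, 1.0)                                # proposal draw
        ratio = math.exp(-0.5 * y * y + 0.5 * (y - 1.0) ** 2)  # N(0,1)/N(1,1)
        total += recourse(y) * ratio
    return total / n

est = importance_sampling_mean(20000, random.Random(11))
# Exact value: phi(1) - (1 - Phi(1)), approximately 0.0833.
```

Concentrating samples in the region where the function contributes reduces the variance of the estimate compared with naive sampling from N(0,1).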
Stochastic simulation in systems biology.
Székely, Tamás; Burrage, Kevin
2014-11-01
Natural systems are, almost by definition, heterogeneous: this can be either a boon or an obstacle to be overcome, depending on the situation. Traditionally, when constructing mathematical models of these systems, heterogeneity has typically been ignored, despite its critical role. However, in recent years, stochastic computational methods have become commonplace in science. They are able to appropriately account for heterogeneity; indeed, they are based around the premise that systems inherently contain at least one source of heterogeneity (namely, intrinsic heterogeneity). In this mini-review, we give a brief introduction to theoretical modelling and simulation in systems biology and discuss the three different sources of heterogeneity in natural systems. Our main topic is an overview of stochastic simulation methods in systems biology. There are many different types of stochastic methods. We focus on one group that has become especially popular in systems biology, biochemistry, chemistry and physics. These discrete-state stochastic methods do not follow individuals over time; rather they track only total populations. They also assume that the volume of interest is spatially homogeneous. We give an overview of these methods, with a discussion of the advantages and disadvantages of each, and suggest when each is more appropriate to use. We also include references to software implementations of them, so that beginners can quickly start using stochastic methods for practical problems of interest.
Monte Carlo Methods in ICF (LIRPP Vol. 13)
NASA Astrophysics Data System (ADS)
Zimmerman, George B.
2016-10-01
Monte Carlo methods appropriate for simulating the transport of x-rays, neutrons, ions, and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method for x-ray transport handles the symmetry within indirect-drive ICF hohlraums well, but its efficiency can be improved by roughly 50% by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged-particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged-particle transport through heterogeneous materials.
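Angular biasing with weight correction can be sketched generically: sample flight directions from a biased density and carry the weight p_true/p_biased so that tallies remain unbiased. The linear bias below is illustrative, not the production scheme:

```python
import random

def biased_direction(bias, rng):
    """Sample a direction cosine mu from the biased pdf
    p_b(mu) = (1 + bias*mu)/2 on [-1, 1] via rejection sampling, and
    return (mu, weight) with weight = p_isotropic / p_b = 1/(1 + bias*mu)."""
    while True:
        mu = 2.0 * rng.random() - 1.0
        if rng.random() * (1.0 + bias) <= 1.0 + bias * mu:
            return mu, 1.0 / (1.0 + bias * mu)

rng = random.Random(5)
pairs = [biased_direction(0.8, rng) for _ in range(20000)]
mean_mu = sum(mu for mu, _ in pairs) / len(pairs)            # biased forward
mean_weight = sum(w for _, w in pairs) / len(pairs)          # averages to 1
weighted_mu = sum(mu * w for mu, w in pairs) / len(pairs)    # recovers 0
```

More particles fly toward the capsule (mean_mu > 0), yet weighted tallies reproduce the isotropic expectation, which is the essence of this variance reduction trick.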
Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith
2011-07-01
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to the unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
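A Functional Expansion Tally estimates the coefficients of an orthogonal polynomial expansion of a distribution from scored sample positions; a minimal Legendre-basis sketch on [-1, 1] (flat dummy "pin power" samples, not OpenMC output):

```python
import random

def legendre(n, x):
    """Legendre polynomial P_n(x) via the Bonnet recurrence."""
    p_prev, p_cur = 1.0, x
    if n == 0:
        return p_prev
    for k in range(2, n + 1):
        p_prev, p_cur = p_cur, ((2 * k - 1) * x * p_cur
                                - (k - 1) * p_prev) / k
    return p_cur

def fet_coefficients(samples, order):
    """Functional expansion tally: estimate the Legendre coefficients of a
    density on [-1, 1] from scored sample positions,
    c_m = (2m + 1)/2 * E[P_m(x)]."""
    n = len(samples)
    return [(2 * m + 1) / 2.0 * sum(legendre(m, x) for x in samples) / n
            for m in range(order + 1)]

rng = random.Random(9)
xs = [rng.uniform(-1.0, 1.0) for _ in range(10000)]   # flat dummy profile
coeffs = fet_coefficients(xs, order=3)
```

Passing a few coefficients instead of a fine-grained histogram is what makes the transfer to an unstructured finite element mesh cheap and smooth.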
Zou, Yonghong; Christensen, Erik R; Zheng, Wei; Wei, Hua; Li, An
2014-11-01
A stochastic process was developed to simulate the stepwise debromination pathways of polybrominated diphenyl ethers (PBDEs). The stochastic process uses an analogue Markov Chain Monte Carlo (AMCMC) algorithm to generate PBDE debromination profiles. The acceptance or rejection of randomly drawn stepwise debromination reactions is determined by a maximum likelihood function. The experimental observations at certain time points are used as target profiles; therefore, the stochastic process is capable of representing the effects of reaction conditions on the selection of debromination pathways. The application of the model is illustrated using the experimental results for decabromodiphenyl ether (BDE209) in hexane exposed to sunlight. Model simulations suggested inferences that were not obvious from the experimental data. For example, BDE206 showed much higher accumulation during the first 30 min of sunlight exposure, whereas model simulation suggests that BDE206 and BDE207 had comparable yields from BDE209; the reason for the higher BDE206 level is that BDE207 is depleted most rapidly in producing octa-brominated products. Compared to a previous version of the stochastic model based on stochastic reaction sequences (SRS), the AMCMC approach was found to be more efficient and robust. Because it requires only experimental observations as input, the AMCMC model is expected to be applicable to a wide range of PBDE debromination processes, e.g., microbial, photolytic, or their joint effects in natural environments.
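The accept/reject core of such a likelihood-driven Markov Chain Monte Carlo scheme is the Metropolis rule; a generic sketch on a one-dimensional target (not the AMCMC debromination model itself):

```python
import math
import random

def metropolis(log_target, x0, n_steps, step, rng):
    """Random-walk Metropolis: propose a move, then accept or reject it
    using the (log) likelihood ratio -- the same accept/reject idea used
    to screen randomly drawn candidate steps in an AMCMC scheme."""
    x, lx, chain = x0, log_target(x0), []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        ly = log_target(y)
        if math.log(1.0 - rng.random()) < ly - lx:  # accept w.p. min(1, ratio)
            x, lx = y, ly
        chain.append(x)
    return chain

# Toy target: a standard normal log-density (up to an additive constant).
chain = metropolis(lambda v: -0.5 * v * v, 0.0, 20000, 2.0, random.Random(13))
mean = sum(chain) / len(chain)
var = sum((v - mean) ** 2 for v in chain) / len(chain)
```

The chain samples the target distribution, so its empirical mean and variance approach 0 and 1 here; in the AMCMC setting the "target" is the likelihood of matching the observed congener profiles.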
Fingerprints of determinism in an apparently stochastic corrosion process.
Rivera, M; Uruchurtu-Chavarín, J; Parmananda, P
2003-05-02
We detect hints of determinism in an apparently stochastic corrosion problem. This experimental system has industrial relevance, as it mimics the corrosion processes of pipelines transporting water, hydrocarbons, or other fuels to remote destinations. We subject this autonomous system to external periodic perturbations and, keeping the amplitude of the superimposed perturbations constant while varying their frequency, analyze the system's response. The response reveals the presence of an optimal forcing frequency for which maximal response is achieved. These results are consistent with those for a deterministic system and indicate a classical resonance between the forcing signal and the autonomous dynamics. Numerical studies using a generic corrosion model are carried out to complement the experimental findings.
Simulation on reactor TRIGA Puspati core kinetics fueled with thorium (Th) based fuel element
Mohammed, Abdul Aziz; Pauzi, Anas Muhamad; Rahman, Shaik Mohmmed Haikhal Abdul; Zin, Muhamad Rawi Muhammad; Jamro, Rafhayudi; Idris, Faridah Mohamad
2016-01-22
In confronting global energy requirements and the search for better technologies, there is a real case for widening the range of potential variations in the design of nuclear power plants. Smaller and simpler reactors are attractive, provided they can meet safety and security standards and address non-proliferation issues. On the fuel cycle aspect, thorium fuel cycles produce much less plutonium and other radioactive transuranic elements than uranium fuel cycles. Although not fissile itself, Th-232 will absorb slow neutrons to produce uranium-233 (233U), which is fissile. By introducing thorium, the number of highly enriched uranium fuel elements can be reduced while maintaining the core neutronic performance. This paper describes the core kinetics of a small research reactor core like TRIGA fueled with a Th-filled fuel element matrix using the general purpose Monte Carlo N-Particle (MCNP) code.
Zaweski, E.F.; Niebylski, L.M.
1986-08-05
This patent describes a distillate fuel for indirect injection compression ignition engines containing, in an amount sufficient to minimize coking (especially throttling-nozzle coking in the prechambers or swirl chambers of indirect injection compression ignition engines operated on such fuel), at least the combination of (i) an organic nitrate ignition accelerator and (ii) an esterified cyclic dehydration product of sorbitol which, when added to the fuel in combination with the organic nitrate ignition accelerator, minimizes the coking.
Axial grading of inert matrix fuels
Recktenwald, G. D.; Deinert, M. R.
2012-07-01
Burning actinides in an inert matrix fuel to 750 MWd/kg IHM results in a significant reduction in transuranic isotopes. However, achieving this level of burnup in a standard light water reactor would require residence times twice those of uranium dioxide fuels. The reactivity of an inert matrix assembly at the end of life is less than one third of its beginning-of-life reactivity, leading to undesirable radial and axial power peaking in the reactor core. Here we show that axial grading of the inert matrix fuel rods can reduce peaking significantly. Monte Carlo simulations are used to model the assembly-level power distributions in both ungraded and graded fuel rods. The results show that an axial grading of uranium dioxide and inert matrix fuels with erbium can reduce power peaking by more than 50% in the axial direction. The reduction in power peaking enables the core to operate at significantly higher power. (authors)
Stochastic determination of matrix determinants.
Dorn, Sebastian; Ensslin, Torsten A
2015-07-01
Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, the linear operations (matrices) acting on the data are often not accessible directly but are only represented indirectly in the form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there has been no comparable stochastic estimate of the determinant. We introduce a probing method for the logarithm of the determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
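The flavor of such an estimator can be conveyed with the series tr log(I+E) = sum_k (-1)^(k+1) tr(E^k)/k combined with Hutchinson probing, which needs only matrix-vector products. This is a simplified sketch under a convergence assumption (||E|| < 1), not the authors' integral-representation method:

```python
import random

def matvec(A, v):
    """Dense matrix-vector product (stands in for a black-box routine)."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def stochastic_logdet(apply_A, dim, n_probe, n_terms, rng):
    """Probing estimate of log det A = tr log A for A = I + E, ||E|| < 1:
    tr(E^k) is estimated by Hutchinson's trick, tr(M) = E[z^T M z] for
    Rademacher probe vectors z, using only products with A."""
    total = 0.0
    for _ in range(n_probe):
        z = [rng.choice((-1.0, 1.0)) for _ in range(dim)]
        v = z[:]
        acc = 0.0
        for k in range(1, n_terms + 1):
            v = [a - b for a, b in zip(apply_A(v), v)]   # v <- E^k z
            acc += (-1) ** (k + 1) * sum(a * b for a, b in zip(z, v)) / k
        total += acc
    return total / n_probe

A = [[1.2, 0.1], [0.1, 0.9]]   # log det A = log(1.2*0.9 - 0.01) = log(1.07)
est = stochastic_logdet(lambda v: matvec(A, v), 2, 2000, 12, random.Random(17))
```

The operator is touched only through `apply_A`, exactly the black-box setting the abstract describes; log(1.07) is about 0.0677.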
Intrinsic optimization using stochastic nanomagnets
NASA Astrophysics Data System (ADS)
Sutton, Brian; Camsari, Kerem Yunus; Behin-Aein, Behtash; Datta, Supriyo
2017-03-01
This paper draws attention to a hardware system which can be engineered so that its intrinsic physics is described by the generalized Ising model and can encode the solution to many important NP-hard problems as its ground state. The basic constituents are stochastic nanomagnets which switch randomly between the ±1 Ising states and can be monitored continuously with standard electronics. Their mutual interactions can be short or long range, and their strengths can be reconfigured as needed to solve specific problems and to anneal the system at room temperature. The natural laws of statistical mechanics guide the network of stochastic nanomagnets at GHz speeds through the collective states with an emphasis on the low energy states that represent optimal solutions. As proof-of-concept, we present simulation results for standard NP-complete examples including a 16-city traveling salesman problem using experimentally benchmarked models for spin-transfer torque driven stochastic nanomagnets.
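A software caricature of such a network: stochastic binary units ("p-bits") updated with the rule m_i = sgn(tanh(beta*I_i) - r), r ~ U(-1, 1), perform heat-bath sampling of an Ising Boltzmann distribution, and slowly raising beta anneals the network toward its ground state. The 8-spin antiferromagnetic ring below is an illustrative stand-in, not the experimentally benchmarked spin-torque model of the paper.

```python
import math, random

def energy(J, s):
    """Ising energy E = -1/2 * sum_ij J_ij s_i s_j (each bond counted twice)."""
    n = len(s)
    return -0.5 * sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def pbit_anneal(J, sweeps=500, beta0=0.05, beta1=5.0, seed=0):
    """Anneal p-bits: s_i = sgn(tanh(beta * I_i) - r) with r ~ U(-1, 1) and
    input I_i = sum_j J_ij s_j, i.e. P(s_i = +1) = (1 + tanh(beta*I_i)) / 2,
    which is exactly the Gibbs conditional at inverse temperature beta."""
    rng = random.Random(seed)
    n = len(J)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    best = energy(J, s)
    for t in range(sweeps):
        beta = beta0 * (beta1 / beta0) ** (t / (sweeps - 1))  # geometric ramp
        for i in range(n):
            I = sum(J[i][j] * s[j] for j in range(n))
            s[i] = 1 if math.tanh(beta * I) > rng.uniform(-1.0, 1.0) else -1
        best = min(best, energy(J, s))
    return best

# Antiferromagnetic 8-spin ring: the ground state is alternating, E = -8.
n = 8
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    J[i][(i + 1) % n] = J[(i + 1) % n][i] = -1.0
best = pbit_anneal(J)
```

The same loop with a reconfigured J matrix encodes other optimization problems, which is the reconfigurability the abstract emphasizes.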
Nonlinear optimization for stochastic simulations.
Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.
2003-12-01
This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.
Stochastic excitation of stellar oscillations
NASA Astrophysics Data System (ADS)
Samadi, Reza
2001-05-01
For more than thirty years, solar oscillations have been thought to be excited stochastically by the turbulent motions in the solar convective zone. It is currently believed that oscillations of stars of less than 2 solar masses, which possess an outer convective zone, are excited stochastically by turbulent convection in their outer layers. Provided that accurate measurements of the oscillation amplitudes and damping rates are available, it is possible to evaluate the power injected into the modes and thus, by comparison with the observations, to constrain current theories. A recent theoretical work (Samadi & Goupil, 2001; Samadi et al., 2001) supplements and reinforces the theory of stochastic excitation of stellar oscillations. The process was generalized to a global description of the turbulent state of the convective zone. The comparison between observation and theory, thus generalized, will allow better characterization of the turbulent spectra of stars, in particular thanks to the COROT mission.
Principal axes for stochastic dynamics
NASA Astrophysics Data System (ADS)
Vasconcelos, V. V.; Raischel, F.; Haase, M.; Peinke, J.; Wächter, M.; Lind, P. G.; Kleinhans, D.
2011-09-01
We introduce a general procedure for directly ascertaining how many independent stochastic sources exist in a complex system modeled through a set of coupled Langevin equations of arbitrary dimension. The procedure is based on the computation of the eigenvalues and the corresponding eigenvectors of local diffusion matrices. We demonstrate our algorithm by applying it to two examples of systems showing Hopf bifurcation. We argue that computing the eigenvectors associated to the eigenvalues of the diffusion matrix at local mesh points in the phase space enables one to define vector fields of stochastic eigendirections. In particular, the eigenvector associated to the lowest eigenvalue defines the path of minimum stochastic forcing in phase space, and a transform to a new coordinate system aligned with the eigenvectors can increase the predictability of the system.
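The core computation of this procedure can be illustrated on a toy two-dimensional Langevin system: estimate the local diffusion matrix from data increments (the second Kramers-Moyal coefficient) and diagonalize it; the eigenvector of the smallest eigenvalue then spans the direction of minimum stochastic forcing. The system and its parameters below are invented for illustration.

```python
import numpy as np

# Toy system: dX = -X dt + g dW with g = diag(1, 0.1), i.e. one strong
# and one weak independent noise source.
rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 100_000
g = np.diag([1.0, 0.1])
x = np.zeros(2)
increments = np.empty((n_steps, 2))
for k in range(n_steps):
    dW = rng.standard_normal(2) * np.sqrt(dt)
    dx = -x * dt + g @ dW          # Euler-Maruyama Langevin step
    increments[k] = dx
    x = x + dx

# Second Kramers-Moyal coefficient: D ~ <dX dX^T> / dt (drift is O(dt))
D = increments.T @ increments / (n_steps * dt)
eigvals, eigvecs = np.linalg.eigh(D)   # eigenvalues in ascending order
# eigvecs[:, 0] spans the direction of minimum stochastic forcing
```

For this diagonal g the recovered eigenvalues should approach 1 and 0.01 (the diagonal of g g^T), and the minimum-forcing eigendirection should align with the second coordinate axis.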
Hirschenhofer, J.H.
1999-07-01
This paper discusses the various types of fuel cells, the importance of cell voltage, fuel processing for natural gas, cell stacking, fuel cell plant description, advantages and disadvantages of the types of fuel cells, and applications. The types covered include: polymer electrolyte fuel cell, alkaline fuel cell, phosphoric acid fuel cell; molten carbonate fuel cell, and solid oxide fuel cell.
Burns, L.D.
1982-07-13
Liquid hydrocarbon fuel compositions are provided containing antiknock quantities of ashless antiknock agents comprising selected furyl compounds including furfuryl alcohol, furfuryl amine, furfuryl esters, and alkyl furoates.
Lyons, W.R.
1986-03-01
Hazy fuels can be caused by the emulsification of water into the fuel during refining, blending, or transportation operations. Detergent additive packages used in gasoline tend to emulsify water into the fuel. Fuels containing water haze can cause corrosion and contamination and can support microbiological growth. Because of these problems, refiners, marketers, and product pipeline companies customarily impose haze specifications. The haze specification may be a specific maximum water content or simply ''bright and clear'' at a specified temperature.
Not Available
1991-07-01
This paper presents the preliminary results of a review of the experiences of Brazil, Canada, and New Zealand, which have implemented programs to encourage the use of alternative motor fuels. It also discusses the results of a separate, completed review of the Department of Energy's (DOE) progress in implementing the Alternative Motor Fuels Act of 1988. The act calls for, among other things, the federal government to use alternative-fueled vehicles in its fleet. The Persian Gulf War, environmental concerns, and the administration's National Energy Strategy have greatly heightened interest in the use of alternative fuels in this country.
NASA Technical Reports Server (NTRS)
Lacksonen, Thomas A.
1994-01-01
Small space flight project design at NASA Langley Research Center goes through a multi-phase process from preliminary analysis to flight operations. The process ensures that each system achieves its technical objectives with demonstrated quality and within planned budgets and schedules. A key technical component of early phases is decision analysis, which is a structured procedure for determining the best of a number of feasible concepts based upon project objectives. Feasible system concepts are generated by the designers and analyzed for schedule, cost, risk, and technical measures. Each performance measure value is normalized between the best and worst values, and a weighted average score of all measures is calculated for each concept. The concept(s) with the highest scores are retained, while others are eliminated from further analysis. This project automated and enhanced the decision analysis process. Automation of the decision analysis process was done by creating a user-friendly, menu-driven, spreadsheet-macro-based decision analysis software program. The program contains data entry dialog boxes, automated data and output report generation, and automated output chart generation. The enhancements to the decision analysis process permit stochastic data entry and analysis. Rather than enter single measure values, the designers enter the range and most likely value for each measure and concept. The data can be entered at the system or subsystem level. System level data can be calculated as either sum, maximum, or product functions of the subsystem data. For each concept, the probability distribution of each measure and of the total score is approximated as a constant, triangular, normal, or log-normal distribution. Based on these distributions, formulas are derived for the probability that the concept meets any given constraint, the probability that the concept meets all constraints, and the probability that the concept is within a given
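The stochastic scoring step described above can be sketched in a few lines: each measure is entered as (min, most-likely, max), modeled as a triangular distribution, and Monte Carlo sampling yields the probability that a concept meets a constraint. The concepts, measures, and limit below are illustrative stand-ins, not data from the NASA tool.

```python
import random

random.seed(0)
# Each measure is (min, most-likely, max), as entered by the designers.
concepts = {
    "A": {"cost": (80.0, 100.0, 140.0)},
    "B": {"cost": (90.0, 110.0, 120.0)},
}
COST_LIMIT = 120.0   # hypothetical project constraint
N = 100_000

def p_meets_cost(measures):
    """Monte Carlo probability that the sampled cost stays at or below the limit."""
    lo, mode, hi = measures["cost"]
    hits = 0
    for _ in range(N):
        # random.triangular takes (low, high, mode)
        if random.triangular(lo, hi, mode) <= COST_LIMIT:
            hits += 1
    return hits / N

probs = {name: p_meets_cost(m) for name, m in concepts.items()}
```

For concept A the triangular CDF gives P(cost <= 120) = 1 - 20^2 / ((140-80)(140-100)) = 5/6, which the sampled estimate should reproduce; concept B can never exceed its maximum of 120, so its probability is exactly 1.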
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (~200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Quantum Monte Carlo for Molecules.
1986-12-01
[Scanned DTIC report; the OCR text is largely illegible. Recoverable details: summary report "Quantum Monte Carlo for Molecules", William A. Lester et al., Lawrence Berkeley Laboratory, University of California, Berkeley, December 1986. Keywords: quantum Monte Carlo, importance functions.]
NASA Astrophysics Data System (ADS)
Miyamoto, Hitoshi; Kimura, Ryo
2016-09-01
This paper proposes a stochastic evaluation method for examining tree population states in a river cross section, using an integrated model with Monte Carlo simulation. The integrated model consists of four processes as submodels, i.e., tree population dynamics, flow discharge stochasticity, stream hydraulics, and channel geomorphology. A floodplain of the Kako River in Japan was examined as a test site; it is currently well vegetated and features many willows that have grown in both individual size and overall population over the last several decades. The model was used to stochastically evaluate the effects of hydrologic and geomorphologic changes on tree population dynamics through Monte Carlo simulation. The effects, including the magnitude of flood impacts and the relative change in the floodplain level, are examined using very simple scenarios for flow regulation, climate change, and channel form changes. The stochastic evaluation method revealed a tradeoff point in floodplain levels at which the tendency toward a fully vegetated state switches to that of a bare floodplain under small flood impacts. It is concluded from these results that the state of a tree population in a floodplain is determined by the mutual interactions among flood impacts, seedling recruitment, tree growth, and channel geomorphology. These interactions make it difficult to obtain a basic understanding of tree population dynamics from a field study of a specific floodplain. The stochastic approach used in this paper could constitute an effective method for evaluating fundamental channel characteristics of a vegetated floodplain.
Partial ASL extensions for stochastic programming.
Gay, David
2010-03-31
Partially completed extensions for stochastic programming to the AMPL/solver interface library (ASL), intended for modeling and experimenting with stochastic recourse problems. This software is not primarily for military applications.
Markov Chain Monte Carlo Bayesian Learning for Neural Networks
NASA Technical Reports Server (NTRS)
Goodrich, Michael S.
2011-01-01
Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is also typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a powerful methodology for estimating the full residual uncertainty in network weights, and therefore network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.
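A toy version of the approach: Metropolis MCMC over the weight of a one-parameter linear "network" yields a posterior distribution rather than a point estimate. The Gaussian prior here is a simple stand-in for the paper's modified Jeffreys prior, and the model, data, and step sizes are invented for illustration.

```python
import math, random

random.seed(0)
true_w, sigma = 2.0, 0.5
# Synthetic data for the one-parameter model y = w * x + Gaussian noise.
data = [(x / 10.0, true_w * x / 10.0 + random.gauss(0.0, sigma))
        for x in range(1, 51)]

def log_post(w):
    """Log posterior: Gaussian likelihood plus a broad N(0, 10^2) prior."""
    ll = -sum((y - w * x) ** 2 for x, y in data) / (2.0 * sigma ** 2)
    return ll - w ** 2 / (2.0 * 10.0 ** 2)

w, samples = 0.0, []
lp = log_post(w)
for step in range(20000):
    prop = w + random.gauss(0.0, 0.1)          # random-walk proposal
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:   # Metropolis accept/reject
        w, lp = prop, lp_prop
    if step >= 5000:                           # discard burn-in
        samples.append(w)

post_mean = sum(samples) / len(samples)
```

The retained samples approximate the full posterior, so credible intervals (the "residual uncertainty" of the abstract) come for free from their spread, not just the mean.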
Accelerating particle-in-cell simulations using multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Ricketson, Lee
2015-11-01
Particle-in-cell (PIC) simulations have been an important tool in understanding plasmas since the dawn of the digital computer. Much more recently, the multilevel Monte Carlo (MLMC) method has accelerated particle-based simulations of a variety of systems described by stochastic differential equations (SDEs), from financial portfolios to porous media flow. The fundamental idea of MLMC is to perform correlated particle simulations using a hierarchy of different time steps, and to use these correlations for variance reduction on the fine-step result. This framework is directly applicable to the Langevin formulation of Coulomb collisions, as demonstrated in previous work, but in order to apply to PIC simulations of realistic scenarios, MLMC must be generalized to incorporate self-consistent evolution of the electromagnetic fields. We present such a generalization, with rigorous results concerning its accuracy and efficiency. We present examples of the method in the collisionless, electrostatic context, and discuss applications and extensions for the future.
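The MLMC idea (coupled coarse/fine paths driven by the same Brownian increments, with the correlation used for variance reduction) is shown below in its simplest SDE setting, geometric Brownian motion with Euler-Maruyama. This is a standard toy instance of the framework the abstract builds on, not the PIC/field-coupled generalization itself; sample counts per level are ad hoc rather than optimized.

```python
import math, random

def mlmc_gbm(mu=0.05, sig=0.2, T=1.0, L=4, N0=20000, seed=0):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = mu X dt + sig X dW,
    X_0 = 1.  Level l uses Euler-Maruyama with step T / 2^l; fine and coarse
    paths on each level share Brownian increments, so their difference has
    small variance and fine levels need few samples."""
    rng = random.Random(seed)
    total = 0.0
    for l in range(L + 1):
        Nl = max(200, N0 >> (2 * l))          # fewer samples on finer levels
        nf = 2 ** l                           # fine steps on this level
        h = T / nf
        acc = 0.0
        for _ in range(Nl):
            xf, xc = 1.0, 1.0
            for k in range(nf):
                dW = rng.gauss(0.0, math.sqrt(h))
                xf += mu * xf * h + sig * xf * dW
                if l > 0:
                    if k % 2 == 0:
                        dW_half = dW          # store first half-increment
                    else:                     # coarse step uses both halves
                        xc += mu * xc * 2 * h + sig * xc * (dW_half + dW)
            acc += xf - (xc if l > 0 else 0.0)
        total += acc / Nl                     # telescoping sum over levels
    return total

est = mlmc_gbm()
exact = math.exp(0.05)   # E[X_T] = exp(mu * T) for GBM
```

The level-0 term estimates the coarse expectation and each higher level estimates a small correction, so most of the work is done with cheap, coarse time steps.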
Applying diffusion-based Markov chain Monte Carlo
Paul, Rajib; Berliner, L. Mark
2017-01-01
We examine the performance of a strategy for Markov chain Monte Carlo (MCMC) developed by simulating a discrete approximation to a stochastic differential equation (SDE). We refer to the approach as diffusion MCMC. A variety of motivations for the approach are reviewed in the context of Bayesian analysis. In particular, implementation of diffusion MCMC is very simple to set up, even in the presence of nonlinear models and non-conjugate priors. Also, it requires comparatively little problem-specific tuning. We implement the algorithm and assess its performance for both a test case and a glaciological application. Our results demonstrate that in some settings, diffusion MCMC is a faster alternative to a general Metropolis-Hastings algorithm. PMID:28301529
Reactive Monte Carlo sampling with an ab initio potential
Leiding, Jeff; Coe, Joshua D.
2016-05-04
Here, we present the first application of reactive Monte Carlo (RxMC) in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We also discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.
Lagged average forecasting, an alternative to Monte Carlo forecasting
NASA Technical Reports Server (NTRS)
Hoffman, R. N.; Kalnay, E.
1983-01-01
A 'lagged average forecast' (LAF) model is developed for stochastic dynamic weather forecasting and used for predictions in comparison with the results of a Monte Carlo forecast (MCF). The technique involves the calculation of sample statistics from an ensemble of forecasts, with each ensemble member being an ordinary dynamical forecast (ODF). Initial conditions at a time lagging the start of the forecast period are used, with varying amounts of time for the lags. Forcing by asymmetric Newtonian heating of the lower layer is used in a two-layer, f-plane, highly truncated spectral model in a test forecasting run. Both the LAF and MCF are found to be more accurate than the ODF due to ensemble averaging with the MCF and the LAF. When a regression filter is introduced, all models become more accurate, with the LAF model giving the best results. The possibility of generating monthly or seasonal forecasts with the LAF is discussed.
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of the neutron monitor hardware as random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and variance of one is sufficient to obtain a stable standard normal random variate. Distributions under consideration pass all available normality tests. Inverse transform sampling is suggested to use as a source of the uniform random numbers. Variational Monte Carlo method for quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one minute resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into the faster algorithmic random number generator or create a buffer.
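The processing chain in this abstract can be imitated on synthetic data: detrend a slowly varying count series, standardize the residual to an approximately standard normal variate, then apply the normal CDF as the inverse-transform step to obtain uniforms. Polynomial detrending stands in for the spline fit, and the Poisson "counts" are simulated, not real neutron-monitor data.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(5000)
trend = 1000 + 30 * np.sin(2 * np.pi * t / 5000)   # slowly varying signal level
counts = rng.poisson(trend).astype(float)          # synthetic raw count data

tt = t / t[-1]                                     # rescale to [0,1] for a stable fit
fit = np.polyval(np.polyfit(tt, counts, 6), tt)    # smooth trend estimate
resid = counts - fit                               # extracted stochastic component
z = (resid - resid.mean()) / resid.std()           # ~ standard normal variate

# Inverse-transform step: the standard normal CDF maps z to ~U(0, 1)
u = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])
```

Because Poisson counts at this rate are nearly Gaussian, the standardized residual passes as N(0,1) and the transformed values are close to uniform, which is the property exploited in the abstract.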
Kinetic Monte Carlo simulation of the classical nucleation process
NASA Astrophysics Data System (ADS)
Filipponi, A.; Giammatteo, P.
2016-12-01
We implemented a kinetic Monte Carlo computer simulation of the nucleation process in the framework of the coarse grained scenario of the Classical Nucleation Theory (CNT). The computational approach is efficient for a wide range of temperatures and sample sizes and provides a reliable simulation of the stochastic process. The results for the nucleation rate are in agreement with the CNT predictions based on the stationary solution of the set of differential equations for the continuous variables representing the average population distribution of nuclei size. Time dependent nucleation behavior can also be simulated with results in agreement with previous approaches. The method, here established for the case in which the excess free-energy of a crystalline nucleus is a smooth-function of the size, can be particularly useful when more complex descriptions are required.
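A minimal kinetic Monte Carlo realization of the coarse-grained CNT picture: the cluster size performs a continuous-time random walk with attachment proportional to surface area and detachment fixed by detailed balance against exp(-dG). The free-energy parameters are illustrative (a low barrier of a few kT so the sketch runs quickly) and are not taken from the paper.

```python
import math, random

def delta_G(n, a=0.8, b=2.5):
    """CNT free energy in units of kT: bulk gain -a*n plus surface cost
    b*n^(2/3).  These illustrative values give a critical size near 9
    and a barrier of roughly 3.6 kT."""
    return -a * n + b * n ** (2.0 / 3.0)

def nucleation_time(n_target=100, seed=0):
    """Gillespie walk of a single cluster size n, from a monomer until
    n reaches n_target (nucleation).  Attachment rate ~ n^(2/3) (surface
    area); detachment satisfies detailed balance w.r.t. exp(-delta_G)."""
    rng = random.Random(seed)
    n, t = 1, 0.0
    while n < n_target:
        k_plus = n ** (2.0 / 3.0)
        k_minus = 0.0 if n == 1 else \
            (n - 1) ** (2.0 / 3.0) * math.exp(delta_G(n) - delta_G(n - 1))
        k_tot = k_plus + k_minus
        t += rng.expovariate(k_tot)                 # exponential waiting time
        n += 1 if rng.random() < k_plus / k_tot else -1
    return t

times = [nucleation_time(seed=s) for s in range(20)]
```

Averaging many such first-passage times gives the inverse nucleation rate; in the abstract's setting this stochastic estimate is what gets compared against the stationary CNT prediction.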
Directed-Loop Quantum Monte Carlo Method for Retarded Interactions
NASA Astrophysics Data System (ADS)
Weber, Manuel; Assaad, Fakher F.; Hohenadler, Martin
2017-09-01
The directed-loop quantum Monte Carlo method is generalized to the case of retarded interactions. Using the path integral, fermion-boson or spin-boson models are mapped to actions with retarded interactions by analytically integrating out the bosons. This yields an exact algorithm that combines the highly efficient loop updates available in the stochastic series expansion representation with the advantages of avoiding a direct sampling of the bosons. The application to electron-phonon models reveals that the method overcomes the previously detrimental issues of long autocorrelation times and exponentially decreasing acceptance rates. For example, the resulting dramatic speedup allows us to investigate the Peierls quantum phase transition on chains of up to 1282 sites.
Quantum Monte Carlo with directed loops.
Syljuåsen, Olav F; Sandvik, Anders W
2002-10-01
We introduce the concept of directed loops in stochastic series expansion and path-integral quantum Monte Carlo methods. Using the detailed balance rules for directed loops, we show that it is possible to smoothly connect generally applicable simulation schemes (in which it is necessary to include backtracking processes in the loop construction) to more restricted loop algorithms that can be constructed only for a limited range of Hamiltonians (where backtracking can be avoided). The "algorithmic discontinuities" between general and special points (or regions) in parameter space can hence be eliminated. As a specific example, we consider the anisotropic S=1/2 Heisenberg antiferromagnet in an external magnetic field. We show that directed-loop simulations are very efficient for the full range of magnetic fields (zero to the saturation point) and anisotropies. In particular, for weak fields and anisotropies, the autocorrelations are significantly reduced relative to those of previous approaches. The backtracking probability vanishes continuously as the isotropic Heisenberg point is approached. For the XY model, we show that backtracking can be avoided for all fields extending up to the saturation field. The method is hence particularly efficient in this case. We use directed-loop simulations to study the magnetization process in the two-dimensional Heisenberg model at very low temperatures. For L×L lattices with L up to 64, we utilize the step structure in the magnetization curve to extract gaps between different spin sectors. Finite-size scaling of the gaps gives an accurate estimate of the transverse susceptibility in the thermodynamic limit: χ_⊥ = 0.0659 ± 0.0002.
Comments on optical stochastic cooling
K.Y. Ng, S.Y. Lee and Y.K. Zhang
2002-10-08
An important necessary condition for transverse phase space damping in the optical stochastic cooling with transit-time method is derived. The longitudinal and transverse damping dynamics for the optical stochastic cooling is studied. The authors also obtain an optimal laser focusing condition for laser-beam interaction in the correction undulator. The amplification factor and the output peak power of the laser amplifier are found to differ substantially from earlier publications. The required power is large for hadron colliders at very high energy.
Stochastic Kinetics of Nascent RNA
NASA Astrophysics Data System (ADS)
Xu, Heng; Skinner, Samuel O.; Sokac, Anna Marie; Golding, Ido
2016-09-01
The stochastic kinetics of transcription is typically inferred from the distribution of RNA numbers in individual cells. However, cellular RNA reflects additional processes downstream of transcription, hampering this analysis. In contrast, nascent (actively transcribed) RNA closely reflects the kinetics of transcription. We present a theoretical model for the stochastic kinetics of nascent RNA, which we solve to obtain the probability distribution of nascent RNA per gene. The model allows us to evaluate the kinetic parameters of transcription from single-cell measurements of nascent RNA. The model also predicts surprising discontinuities in the distribution of nascent RNA, a feature which we verify experimentally.
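A Gillespie simulation of the two-state ("telegraph") gene model that underlies such analyses: the promoter toggles OFF/ON, transcribes at rate k_r while ON, and transcripts are removed at rate gam. As a hedge, this sketch tracks total RNA with first-order removal, a proxy for the paper's nascent-RNA model (where transcripts leave after a deterministic elongation time); parameters are illustrative, chosen so the steady-state mean is k_r*k_on/((k_on+k_off)*gam) = 10.

```python
import random

random.seed(1)
k_on, k_off, k_r, gam = 1.0, 1.0, 20.0, 1.0
T_end, burn_in = 2000.0, 50.0

t, on, m = 0.0, False, 0
acc_time, acc_m = 0.0, 0.0
while t < T_end:
    rates = [k_on if not on else k_off,   # promoter toggling OFF <-> ON
             k_r if on else 0.0,          # transcription while ON
             gam * m]                     # first-order RNA removal
    r_tot = sum(rates)
    dt = random.expovariate(r_tot)        # exponential waiting time
    if t > burn_in:
        acc_time += dt
        acc_m += m * dt                   # time-weighted RNA average
    t += dt
    u = random.random() * r_tot           # pick which event fires
    if u < rates[0]:
        on = not on
    elif u < rates[0] + rates[1]:
        m += 1
    else:
        m -= 1

mean_m = acc_m / acc_time
```

Collecting the time-weighted histogram of m instead of only its mean gives the copy-number distribution whose nascent-RNA analogue the paper solves analytically.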
Bar shapes and orbital stochasticity
Athanassoula, E.
1990-06-01
Several independent lines of evidence suggest that the isophotes or isodensities of bars in barred galaxies are not really elliptical in shape but more rectangular. The effect this might have on the orbits in two different types of bar potentials is studied, and it is found that in both cases the percentage of stochastic orbits is much larger when the shapes are more rectangular-like or, equivalently, when the m = 4 components are more important. This can be understood with the help of the Chirikov criterion, which can predict the limit for the onset of global stochasticity. 9 refs.
Modeling of hydrocarbon fueling
Hogan, J.T.; Pospieszczyk, A. (Inst. fuer Plasmaphysik)
1990-07-01
We have compared a database of rate coefficients for CH4 with experiments on PISCES-A to understand the role of carbon-based impurities in determining the fueling profile of carbon-dominated machines. A three-dimensional Monte Carlo model that embodies the Ehrhardt-Langer CH4 breakup scheme has been developed. The model has been compared with spectroscopic observations of the spatial variation of the hydrocarbon product decay rates, and reasonable agreement has been found. The comparison is sensitive to the non-Maxwellian electron distribution and to observed spatial inhomogeneities in the electron density and temperature profiles. Applications of the model to parameters characteristic of the tokamak scrape-off layer are presented.
The Hamiltonian Mechanics of Stochastic Acceleration
Burby, J. W.
2013-07-17
We show how to find the physical Langevin equation describing the trajectories of particles undergoing collisionless stochastic acceleration. These stochastic differential equations retain not only one-, but two-particle statistics, and inherit the Hamiltonian nature of the underlying microscopic equations. This opens the door to using stochastic variational integrators to perform simulations of stochastic interactions such as Fermi acceleration. We illustrate the theory by applying it to two example problems.
MontePython: Implementing Quantum Monte Carlo using Python
NASA Astrophysics Data System (ADS)
Nilsen, Jon Kristian
2007-11-01
We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system for which to apply QMC, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and we describe how to implement these methods in pure C++ and C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead using Python can be negligible. Program summary: Program title: MontePython Catalogue identifier: ADZP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 49 519 No. of bytes in distributed program, including test data, etc.: 114 484 Distribution format: tar.gz Programming language: C++, Python Computer: PC, IBM RS6000/320, HP, ALPHA Operating system: LINUX Has the code been vectorised or parallelized?: Yes, parallelized with MPI Number of processors used: 1-96 RAM: Depends on physical system to be simulated Classification: 7.6; 16.1 Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb Solution method: Quantum Monte Carlo Running time: 225 min with 20 particles (with 4800 walkers moved in 1750 time steps) on 1 AMD Opteron TM Processor 2218 processor; Production run for, e.g., 200 particles takes around 24 hours on 32 such processors.
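The variational Monte Carlo component can be sketched in a few lines of pure Python for the 1D harmonic oscillator (MontePython itself targets interacting bosons in C++/Python). With trial wavefunction psi(x) = exp(-alpha*x^2), the local energy is E_L = alpha + x^2(1/2 - 2*alpha^2); alpha = 1/2 is the exact ground state, where the estimator has zero variance.

```python
import math, random

def vmc_energy(alpha, n_steps=200000, step=1.0, seed=0):
    """Variational Monte Carlo for H = -0.5 d^2/dx^2 + 0.5 x^2
    (hbar = m = omega = 1) with trial function psi(x) = exp(-alpha x^2).
    Metropolis sampling of |psi|^2, averaging the local energy
    E_L(x) = alpha + x^2 (0.5 - 2 alpha^2)."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis test on |psi|^2 = exp(-2 alpha x^2)
        if rng.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_steps

e_exact = vmc_energy(0.5)   # zero-variance point: exactly 0.5
e_off = vmc_energy(0.4)     # variational: must lie above the ground state
```

The analytic expectation for this trial function is E(alpha) = alpha/2 + 1/(8*alpha), so alpha = 0.4 should give about 0.5125, strictly above the exact 0.5, illustrating the variational principle the full program exploits.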
NASA Astrophysics Data System (ADS)
Garniron, Yann; Scemama, Anthony; Loos, Pierre-François; Caffarel, Michel
2017-07-01
A hybrid stochastic-deterministic approach for computing the second-order perturbative contribution E(2) within multireference perturbation theory (MRPT) is presented. The idea at the heart of our hybrid scheme, based on a reformulation of E(2) as a sum of elementary contributions associated with each determinant of the MR wave function, is to split E(2) into a stochastic and a deterministic part. During the simulation, the stochastic part is gradually reduced by dynamically increasing the deterministic part until one reaches the desired accuracy. In sharp contrast with a purely stochastic Monte Carlo scheme, where the error decreases indefinitely as t^(-1/2) (where t is the computational time), the statistical error in our hybrid algorithm displays a polynomial decay ~ t^(-n) with n = 3-4 in the examples considered here. If desired, the calculation can be carried on until the stochastic part entirely vanishes. In that case, the exact result is obtained with no error bar and no noticeable computational overhead compared to the fully deterministic calculation. The method is illustrated on the F2 and Cr2 molecules. Even for the largest case, corresponding to the Cr2 molecule treated with the cc-pVQZ basis set, very accurate results are obtained for E(2) for an active space of (28e, 176o) and a MR wave function including up to 2 × 10^7 determinants.
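A toy illustration of the stochastic/deterministic split: the target quantity is a sum of many small contributions, the largest ones are summed exactly, and the tail is estimated by Monte Carlo. Growing the deterministic set shrinks the statistical error, and once it covers every term the result is exact with no error bar, mirroring the limit described above. The series is invented for illustration, not related to E(2).

```python
import random

random.seed(0)
# Many contributions, ordered by decreasing magnitude (like sorted
# determinant contributions); the exact target is their full sum.
c = [(-1) ** i / (i + 1) ** 2 for i in range(10000)]
exact = sum(c)

def hybrid_estimate(n_det, n_samples=20000):
    """Sum the n_det largest contributions exactly; estimate the tail by
    uniform Monte Carlo sampling (an unbiased estimator of the tail sum)."""
    det = sum(c[:n_det])                 # deterministic part
    tail = c[n_det:]
    if not tail:
        return det                       # fully deterministic: zero error bar
    stoch = sum(random.choice(tail) for _ in range(n_samples))
    return det + stoch * len(tail) / n_samples

err_small_det = abs(hybrid_estimate(10) - exact)     # small deterministic part
err_big_det = abs(hybrid_estimate(5000) - exact)     # large deterministic part
```

With a fixed sampling budget, moving terms from the stochastic tail into the deterministic part drives the statistical error down far faster than adding samples would, which is the mechanism behind the rapid error decay reported in the abstract.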
A method for solving stochastic equations by reduced order models and local approximations
Grigoriu, M.
2012-08-01
A method is proposed for solving equations with random entries, referred to as stochastic equations (SEs). The method is based on two recent developments. The first approximates the response surface giving the solution of a stochastic equation as a function of its random parameters by a finite set of hyperplanes tangent to it at expansion points selected by geometrical arguments. The second approximates the vector of random parameters in the definition of a stochastic equation by a simple random vector, referred to as stochastic reduced order model (SROM), and uses it to construct a SROM for the solution of this equation. The proposed method is a direct extension of these two methods. It uses SROMs to select expansion points, rather than selecting these points by geometrical considerations, and represents the solution by linear and/or higher order local approximations. The implementation and the performance of the method are illustrated by numerical examples involving random eigenvalue problems and stochastic algebraic/differential equations. The method is conceptually simple, non-intrusive, efficient relative to classical Monte Carlo simulation, accurate, and guaranteed to converge to the exact solution.
NASA Astrophysics Data System (ADS)
Jeanmairet, Guillaume; Sharma, Sandeep; Alavi, Ali
2017-01-01
In this article we report a stochastic evaluation of the recently proposed multireference linearized coupled cluster theory [S. Sharma and A. Alavi, J. Chem. Phys. 143, 102815 (2015)]. In this method, both the zeroth-order and first-order wavefunctions are sampled stochastically by propagating simultaneously two populations of signed walkers. The sampling of the zeroth-order wavefunction follows a set of stochastic processes identical to those used in the full configuration interaction quantum Monte Carlo (FCIQMC) method. To sample the first-order wavefunction, the usual FCIQMC algorithm is augmented with a source term that spawns walkers in the sampled first-order wavefunction from the zeroth-order wavefunction. The second-order energy is also computed stochastically but requires no additional overhead outside of the added cost of sampling the first-order wavefunction. This fully stochastic method opens up the possibility of simultaneously treating large active spaces to account for static correlation and recovering the dynamical correlation using perturbation theory. The method is used to study a few benchmark systems including the carbon dimer and aromatic molecules. We have computed the singlet-triplet gaps of benzene and m-xylylene. For m-xylylene, which has proved difficult for standard complete active space self-consistent field theory with perturbative correction, we find the singlet-triplet gap to be in good agreement with the experimental values.
Buckling analysis of imperfect I-section beam-columns with stochastic shell finite elements
NASA Astrophysics Data System (ADS)
Schillinger, Dominik; Papadopoulos, Vissarion; Bischoff, Manfred; Papadrakakis, Manolis
2010-08-01
Buckling loads of thin-walled I-section beam-columns exhibit a wide stochastic scattering due to the uncertainty of imperfections. The present paper proposes a finite element based methodology for the stochastic buckling simulation of I-sections, which uses random fields to accurately describe the fluctuating size and spatial correlation of imperfections. The stochastic buckling behaviour is evaluated by crude Monte Carlo simulation, based on a large number of I-section samples, which are generated by spectral representation and subsequently analyzed by non-linear shell finite elements. The application to an example I-section beam-column demonstrates that the simulated buckling response is in good agreement with experiments and follows key concepts of imperfection triggered buckling. The derivation of the buckling load variability and the stochastic interaction curve for combined compression and major axis bending as well as stochastic sensitivity studies for thickness and geometric imperfections illustrate potential benefits of the proposed methodology in buckling related research and applications.
NASA Astrophysics Data System (ADS)
Michta, Mariusz
2017-02-01
In the paper we study properties of solutions to stochastic differential inclusions and set-valued stochastic differential equations with respect to semimartingale integrators. We present new connections between their solutions. In particular, we show that attainable sets of solutions to stochastic inclusions are subsets of values of multivalued solutions of certain set-valued stochastic equations. We also show that every solution to stochastic inclusion is a continuous selection of a multivalued solution of an associated set-valued stochastic equation. The results obtained in the paper generalize results dealing with this topic known both in deterministic and stochastic cases.
EDITORIAL: Stochasticity in fusion plasmas Stochasticity in fusion plasmas
NASA Astrophysics Data System (ADS)
Unterberg, Bernhard
2010-03-01
Structure formation and transport in stochastic plasmas is a topic of growing importance in many fields of plasma physics from astrophysics to fusion research. In particular, the possibility to control transport in the boundary of confined fusion plasmas by resonant magnetic perturbations has been investigated extensively during recent years. A major research achievement was the finding that the intense transient particle and heat fluxes associated with edge localized modes (here type-I ELMs) in magnetically confined fusion plasmas can be mitigated or even suppressed by resonant magnetic perturbation fields. This observation opened up a possible scheme to avoid excessive erosion and material damage by such transients in future fusion devices such as ITER. However, it is widely recognized that a more basic understanding is needed to extrapolate the results obtained in present experiments to future fusion devices. The 4th workshop on Stochasticity in Fusion Plasmas was held in Jülich, Germany, from 2 to 4 March 2009. This series of workshops aims at gathering fusion experts from various plasma configurations such as tokamaks, stellarators and reversed field pinches to exchange knowledge on structure formation and transport in stochastic fusion plasmas. The workshops have attracted colleagues from both experiment and theory and stimulated fruitful discussions about the basics of stochastic fusion plasmas. Important papers from the first three workshops in 2003, 2005 and 2007 have been published in previous special issues of Nuclear Fusion (stacks.iop.org/NF/44/i=6, stacks.iop.org/NF/46/i=4 and stacks.iop.org/NF/48/i=2). This special issue comprises contributions presented at the 4th SFP workshop, dealing with the main subjects such as formation of stochastic magnetic layers, energy and particle transport in stochastic magnetic fields, plasma response to external, non-axisymmetric perturbations and last but not least application of resonant magnetic perturbations for
Neutronic calculations for CANDU thorium systems using Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Saldideh, M.; Shayesteh, M.; Eshghi, M.
2014-08-01
In this paper, we have investigated the prospects of exploiting the rich world thorium reserves using Canada Deuterium Uranium (CANDU) reactors. The analysis is performed using the Monte Carlo MCNP code in order to determine how long the reactor remains in a critical condition. Four different fuel compositions have been selected for analysis. We have obtained the infinite multiplication factor, k∞, under full power operation of the reactor over 8 years. The neutronic flux distribution in the full reactor core was also investigated.
Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates
Perfetti, Christopher M.; Rearden, Bradley T.
2015-01-01
This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.
Current status of the PSG Monte Carlo neutron transport code
Leppaenen, J.
2006-07-01
PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX fuelled VENUS-2 reactor dosimetry benchmark. (authors)
Lemmens, D; Wouters, M; Tempere, J; Foulon, S
2008-07-01
We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.
Stochastic Energy Deployment System
2011-11-30
SEDS is an economy-wide energy model of the U.S. The model captures dynamics between supply, demand, and pricing of the major energy types consumed and produced within the U.S. These dynamics are captured by including: the effects of macroeconomics; the resources and costs of primary energy types such as oil, natural gas, coal, and biomass; the conversion of primary fuels into energy products like petroleum products, electricity, biofuels, and hydrogen; and lastly the end-use consumption attributable to residential and commercial buildings, light and heavy transportation, and industry. Projections from SEDS extend to the year 2050 in one-year time steps and are generally made at the national level. SEDS differs from other economy-wide energy models in that it explicitly accounts for uncertainty in technology, markets, and policy. SEDS has been specifically developed to avoid the computational burden, and sometimes fruitless labor, that come from modeling excessively low-level details. Instead, SEDS focuses on the major drivers within the energy economy and evaluates the impact of uncertainty around those drivers.
Variational principles for stochastic soliton dynamics
Holm, Darryl D.; Tyranowski, Tomasz M.
2016-01-01
We develop a variational method of deriving stochastic partial differential equations whose solutions follow the flow of a stochastic vector field. As an example in one spatial dimension, we numerically simulate singular solutions (peakons) of the stochastically perturbed Camassa–Holm (CH) equation derived using this method. These numerical simulations show that peakon soliton solutions of the stochastically perturbed CH equation persist and provide an interesting laboratory for investigating the sensitivity and accuracy of adding stochasticity to finite dimensional solutions of stochastic partial differential equations. In particular, some choices of stochastic perturbations of the peakon dynamics by Wiener noise (canonical Hamiltonian stochastic deformations, CH-SD) allow peakons to interpenetrate and exchange order on the real line in overtaking collisions, although this behaviour does not occur for other choices of stochastic perturbations which preserve the Euler–Poincaré structure of the CH equation (parametric stochastic deformations, P-SD), and it also does not occur for peakon solutions of the unperturbed deterministic CH equation. The discussion raises issues about the science of stochastic deformations of finite-dimensional approximations of evolutionary partial differential equations and the sensitivity of the resulting solutions to the choices made in stochastic modelling.
Forward Stochastic Nonlinear Adaptive Control Method
NASA Technical Reports Server (NTRS)
Bayard, David S.
1990-01-01
New method of computation for optimal stochastic nonlinear and adaptive control undergoing development. Solves systematically stochastic dynamic programming equations forward in time, using nested-stochastic-approximation technique. Main advantage, simplicity of programming and reduced complexity with clear performance/computation trade-offs.
Lambeth, Malcolm David Dick
2001-02-27
A fuel injector comprises first and second housing parts, the first housing part being located within a bore or recess formed in the second housing part, the housing parts defining therebetween an inlet chamber, a delivery chamber axially spaced from the inlet chamber, and a filtration flow path interconnecting the inlet and delivery chambers to remove particulate contaminants from the flow of fuel therebetween.
2006-04-01
financing, push technology and help motivate the building of the necessary manufacturing and distribution infrastructure. Hybrid Electric Vehicles, Tether...conclusions in three major areas: Hybrid Electric Vehicles (HEVs), fuel management during combat operations and manufactured fuels to address the...payoffs in the relatively near term, are: Hybrid Electric Vehicles: The development of and commitment to hybrid electric architecture for TWVs
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
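The MLMC telescoping identity E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}] can be sketched on a toy problem where i.i.d. sampling is possible. The SDE (geometric Brownian motion with known mean), the parameter values, and the sample allocation below are illustrative assumptions, not the Bayesian inverse problems treated in the paper; the essential trick shown is coupling the coarse and fine paths through shared Brownian increments.

```python
import numpy as np

rng = np.random.default_rng(1)

# MLMC estimate of E[S_T] for dS = r*S dt + sigma*S dW (Euler scheme),
# with step h_l = T / 2**l at level l.  Analytic mean: S0 * exp(r*T).
r, sigma, S0, T, L = 0.05, 0.2, 1.0, 1.0, 4

def level_estimator(l, n_samples):
    nf = 2**l                               # fine steps at level l
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(n_samples, nf))
    Sf = np.full(n_samples, S0)
    for k in range(nf):                     # fine Euler path
        Sf = Sf * (1.0 + r * hf + sigma * dW[:, k])
    if l == 0:
        return Sf.mean()                    # base level: plain estimate
    hc = T / (2 ** (l - 1))
    Sc = np.full(n_samples, S0)
    for k in range(2 ** (l - 1)):           # coarse path, summed increments
        dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]
        Sc = Sc * (1.0 + r * hc + sigma * dWc)
    return (Sf - Sc).mean()                 # correction term E[P_l - P_{l-1}]

estimate = sum(level_estimator(l, 40_000 // (l + 1)) for l in range(L + 1))
print(estimate, S0 * np.exp(r * T))
```

Because the coupled difference P_l - P_{l-1} has small variance, the correction levels need far fewer samples than a single-level estimator at the finest grid, which is the source of the computational saving.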
Monte Carlo Particle Lists: MCPL
NASA Astrophysics Data System (ADS)
Kittelmann, T.; Klinkby, E.; Knudsen, E. B.; Willendrup, P.; Cai, X. X.; Kanaki, K.
2017-09-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
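The idea of a compact binary particle list can be sketched as below. The field layout here (type code, position, direction, kinetic energy, weight) and the "PLST" header are made up for illustration and are NOT the actual MCPL specification, which should be consulted for the real record format.

```python
import struct
import io

# Illustrative fixed-width binary particle record, in the spirit of a
# particle-list interchange format.  Layout is hypothetical, not MCPL's.
RECORD = struct.Struct("<i3d3ddd")   # pdg code, x,y,z, ux,uy,uz, ekin, weight

def write_particles(stream, particles):
    # Tiny header: magic bytes plus particle count.
    stream.write(struct.pack("<4sI", b"PLST", len(particles)))
    for p in particles:
        stream.write(RECORD.pack(*p))

def read_particles(stream):
    magic, n = struct.unpack("<4sI", stream.read(8))
    assert magic == b"PLST"
    return [RECORD.unpack(stream.read(RECORD.size)) for _ in range(n)]

buf = io.BytesIO()
neutron = (2112, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.5e-2, 1.0)
write_particles(buf, [neutron])
buf.seek(0)
print(read_particles(buf)[0])
```

A fixed-width little-endian record like this is what makes such files portable between simulation codes: any language with a binary I/O layer can read it without a parser.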
Stochastically forced zonal flows
NASA Astrophysics Data System (ADS)
Srinivasan, Kaushik
an approximate equation for the vorticity correlation function that is then solved perturbatively. The Reynolds stress of the perturbative solution can then be expressed as a function of the mean flow and its y-derivatives. In particular, it is shown that as long as the forcing breaks mirror-symmetry, the Reynolds stress has a wave-like term, as a result of which the mean flow is governed by a dispersive wave equation. In a separate study, the Reynolds stress induced by an anisotropically forced unbounded Couette flow with uniform shear gamma, on a beta-plane, is calculated in conjunction with the eddy diffusivity of a co-evolving passive tracer. The flow is damped by linear drag on a time scale mu^-1. The stochastic forcing is controlled by a parameter alpha, which characterizes whether eddies are elongated along the zonal direction (alpha < 0), the meridional direction (alpha > 0), or are isotropic (alpha = 0). The Reynolds stress varies linearly with alpha and non-linearly and non-monotonically with gamma; but the Reynolds stress is independent of beta. For positive values of alpha, the Reynolds stress displays an "anti-frictional" effect (energy is transferred from the eddies to the mean flow) and a frictional effect for negative values of alpha. With gamma = beta = 0, the meridional tracer eddy diffusivity is v'^2/(2 mu), where v' is the meridional eddy velocity. In general, beta and gamma suppress the diffusivity below v'^2/(2 mu).
Bulk characterization of (U, Pu) mixed carbide fuel for distribution of plutonium
Devi, K. V. Vrinda; Khan, K. B.; Biju, K.; Kumar, Arun
2015-06-24
Homogeneous distribution of plutonium in (U, Pu) mixed fuels is important from both the fuel-performance and the reprocessing points of view. Radiation imaging and assay techniques are employed for the detection of Pu-rich agglomerates in the fuel. A simulation study of radiation transport was carried out to analyse the technique of autoradiography, so as to estimate the minimum detectability of Pu agglomerates in mixed carbide (MC) fuel with a nominal PuC content of 70%, using Monte Carlo simulations.
MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL
Talamo, A.; Gohar, Y.
2015-01-01
This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers, and the model has been created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without any material homogenization. Three different configurations have been studied, using 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.
A higher-order numerical framework for stochastic simulation of chemical reaction systems.
Székely, Tamás; Burrage, Kevin; Erban, Radek; Zygalakis, Konstantinos C
2012-07-15
In this paper, we present a framework for improving the accuracy of fixed-step methods for Monte Carlo simulation of discrete stochastic chemical kinetics. Stochasticity is ubiquitous in many areas of cell biology, for example in gene regulation, biochemical cascades and cell-cell interaction. However, most discrete stochastic simulation techniques are slow. We apply Richardson extrapolation to the moments of three fixed-step methods, the Euler, midpoint and θ-trapezoidal τ-leap methods, to demonstrate the power of stochastic extrapolation. The extrapolation framework can increase the order of convergence of any fixed-step discrete stochastic solver and is very easy to implement; the only condition for its use is knowledge of the appropriate terms of the solver's global error expansion in its stepsize. In practical terms, a higher-order method with a larger stepsize can achieve the same level of accuracy as a lower-order method with a smaller one, potentially reducing the computational time of the system. By obtaining a global error expansion for a general weak first-order method, we prove that extrapolation can increase the weak order of convergence for the moments of the Euler and the midpoint τ-leap methods, from one to two. This is supported by numerical simulations of several chemical systems of biological importance using the Euler, midpoint and θ-trapezoidal τ-leap methods. In almost all cases, extrapolation results in an improvement of accuracy. As in the case of ordinary and stochastic differential equations, extrapolation can be repeated to obtain even higher-order approximations. Extrapolation is a general framework for increasing the order of accuracy of any fixed-step stochastic solver. This enables the simulation of complicated systems in less time, allowing for more realistic biochemical problems to be solved.
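The extrapolation of moments can be sketched for the Euler τ-leap method on a pure death process, whose exact mean X0*exp(-k*T) is known. Since the mean error is O(h), the combination 2*m(h/2) - m(h) cancels the leading term. The rate constant, initial copy number, and sample sizes below are arbitrary illustration values, not the biological systems studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler tau-leap for the death reaction X -> 0 with propensity k*X.
k, X0, T = 1.0, 100, 1.0

def tau_leap_mean(h, n_paths):
    n_steps = round(T / h)
    X = np.full(n_paths, X0, dtype=np.int64)
    for _ in range(n_steps):
        fires = rng.poisson(k * X * h)     # reaction firings in one leap
        X = np.maximum(X - fires, 0)       # copy numbers stay nonnegative
    return X.mean()

m_h = tau_leap_mean(0.25, 200_000)         # coarse-step mean
m_h2 = tau_leap_mean(0.125, 200_000)       # half-step mean
extrapolated = 2.0 * m_h2 - m_h            # Richardson extrapolation
exact = X0 * np.exp(-k * T)
print(abs(m_h - exact), abs(extrapolated - exact))
```

With these parameters the coarse Euler τ-leap mean is biased by several percent, while the extrapolated value lands within statistical noise of the exact mean, illustrating the weak-order increase from one to two.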
Suitable Candidates for Monte Carlo Solutions.
ERIC Educational Resources Information Center
Lewis, Jerome L.
1998-01-01
Discusses Monte Carlo methods, powerful and useful techniques that rely on random numbers to solve deterministic problems whose solutions may be too difficult to obtain using conventional mathematics. Reviews two excellent candidates for the application of Monte Carlo methods. (ASK)
A Classroom Note on Monte Carlo Integration.
ERIC Educational Resources Information Center
Kolpas, Sid
1998-01-01
The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
Applications of Monte Carlo Methods in Calculus.
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
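The two classroom applications mentioned above, Riemann sums and maximizing a function, can be sketched in a few lines. The integrand, interval, and sample counts are arbitrary illustration choices.

```python
import random

random.seed(3)

# Monte Carlo integration: estimate I = integral of x^2 on [0, 1] (= 1/3)
# by averaging f at uniform random points, a stochastic Riemann sum.
def mc_integrate(f, a, b, n):
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Monte Carlo optimisation: approximate the maximum of f on [a, b]
# by evaluating it at many random points and keeping the best.
def mc_maximum(f, a, b, n):
    return max(f(random.uniform(a, b)) for _ in range(n))

area = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
peak = mc_maximum(lambda x: x * (1.0 - x), 0.0, 1.0, 100_000)
print(area, peak)   # near 1/3 and near the true maximum 1/4
```

The error of the integral estimate shrinks like 1/sqrt(n), which is slow compared with quadrature in one dimension but becomes competitive in high dimensions.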
Universality in Stochastic Exponential Growth
NASA Astrophysics Data System (ADS)
Iyer-Biswas, Srividya; Crooks, Gavin E.; Scherer, Norbert F.; Dinner, Aaron R.
2014-07-01
Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
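A minimal Gillespie simulation of a two-species Hinshelwood cycle illustrates the autocatalytic structure: species X1 catalyses production of X2 and vice versa, and the mean population grows exponentially at rate sqrt(k1*k2). The rate constants, initial copy numbers, and horizon below are illustration values, and the two-species cycle is the simplest instance of the reaction cycles discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-species stochastic Hinshelwood cycle:
#   X1 -> X1 + X2  with propensity k1 * X1
#   X2 -> X2 + X1  with propensity k2 * X2
k1, k2 = 1.0, 1.0

def simulate(t_end):
    x1, x2, t = 5, 5, 0.0
    while t < t_end:
        a1, a2 = k1 * x1, k2 * x2          # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)     # waiting time to next reaction
        if rng.uniform() < a1 / a0:
            x2 += 1
        else:
            x1 += 1
    return x1 + x2

totals = [simulate(3.0) for _ in range(200)]
print(np.mean(totals))   # roughly 10 * exp(3) since sqrt(k1*k2) = 1
```

For k1 = k2 the total copy number is a pure birth (Yule) process, so the exponential growth of the mean can be checked against the closed form N0*exp(t).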
STOCHASTIC POINT PROCESSES: LIMIT THEOREMS.
A stochastic point process in R^n is a triple (M,B,P), where M is the class of all countable sets in R^n having no limit points, B is the smallest...converge to a mixture of Poisson processes. These results are established via a generalization of a classical limit theorem for Bernoulli trials. (Author)
Birch regeneration: a stochastic model
William B. Leak
1968-01-01
The regeneration of a clearcutting with paper or yellow birch is expressed as an elementary stochastic (probabilistic) model that is computationally similar to an absorbing Markov chain. In the general case, the model contains 29 states beginning with the development of a flower (ament) and terminating with the abortion of a flower or seed, or the development of an...
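The absorbing Markov chain computation behind such a model can be sketched with the fundamental matrix N = (I - Q)^-1 and absorption probabilities B = N R. The states and transition probabilities below are hypothetical (three transient stages, two absorbing outcomes), far smaller than the 29-state model in the paper.

```python
import numpy as np

# Hypothetical transient stages: flower, seed, seedling.
# Hypothetical absorbing outcomes: established, aborted.
Q = np.array([[0.0, 0.6, 0.0],    # flower  -> seed
              [0.0, 0.0, 0.5],    # seed    -> seedling
              [0.0, 0.0, 0.0]])   # seedling has no transient successor
R = np.array([[0.0, 0.4],         # flower   -> aborted
              [0.0, 0.5],         # seed     -> aborted
              [0.7, 0.3]])        # seedling -> established / aborted

N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix: expected visits
B = N @ R                         # absorption probabilities per start state
print(B[0])   # chance a flower ends established vs aborted
```

Each row of B sums to one, since every flower eventually reaches one of the absorbing outcomes.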
Stochastic cooling: recent theoretical directions
Bisognano, J.
1983-03-01
A kinetic-equation derivation of the stochastic-cooling Fokker-Planck equation of correlation is introduced to describe both the Schottky spectrum and signal suppression. Generalizations to nonlinear gain and coupling between degrees of freedom are presented. Analysis of bunch beam cooling is included.
Stochastic resonance on a circle
Wiesenfeld, K.; Pierson, D.; Pantazelou, E.; Dames, C.; Moss, F.
1994-04-04
We describe a new realization of stochastic resonance, applicable to a broad class of systems, based on an underlying excitable dynamics with deterministic reinjection. A simple but general theory of such "single-trigger" systems is compared with analog simulations of the FitzHugh-Nagumo model, as well as experimental data obtained from stimulated sensory neurons in the crayfish.
Brownian motors and stochastic resonance.
Mateos, José L; Alatriste, Fernando R
2011-12-01
We study the transport properties for a walker on a ratchet potential. The walker consists of two particles coupled by a bistable potential that allow the interchange of the order of the particles while moving through a one-dimensional asymmetric periodic ratchet potential. We consider the stochastic dynamics of the walker on a ratchet with an external periodic forcing, in the overdamped case. The coupling of the two particles corresponds to a single effective particle, describing the internal degree of freedom, in a bistable potential. This double-well potential is subjected to both a periodic forcing and noise and therefore is able to provide a realization of the phenomenon of stochastic resonance. The main result is that there is an optimal amount of noise where the amplitude of the periodic response of the system is maximum, a signal of stochastic resonance, and that precisely for this optimal noise, the average velocity of the walker is maximal, implying a strong link between stochastic resonance and the ratchet effect.
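The bistable-plus-forcing-plus-noise ingredient of stochastic resonance can be sketched with an overdamped Langevin simulation in the standard double well, rather than the coupled-walker ratchet model of the paper. All parameter values below are illustrative assumptions; the drive amplitude is below the deterministic switching threshold, so inter-well hops only occur with the help of noise.

```python
import numpy as np

rng = np.random.default_rng(5)

# Overdamped Langevin dynamics in U(x) = x^4/4 - x^2/2 with a weak drive:
#   dx = (x - x^3 + A*cos(w*t)) dt + sqrt(2*D) dW
A, w, dt, n_steps = 0.2, 0.01, 0.01, 200_000

def count_well_switches(D):
    x, switches, side = -1.0, 0, -1
    noise = rng.normal(0.0, np.sqrt(2.0 * D * dt), n_steps)
    for n in range(n_steps):
        x += (x - x**3 + A * np.cos(w * n * dt)) * dt + noise[n]
        if x * side < 0:                  # crossed the barrier at x = 0
            side, switches = -side, switches + 1
    return switches

s_low = count_well_switches(0.001)   # weak noise: trapped in one well
s_high = count_well_switches(0.15)   # moderate noise: frequent hops
print(s_low, s_high)
```

The non-monotonic dependence of the periodic response on the noise strength, the signature of stochastic resonance, lives between these two regimes: too little noise and the particle never hops, too much and the hops decorrelate from the drive.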
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
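A minimal example of the stochastic kernel such hybrid schemes build on is the classical random-walk Monte Carlo solver for a fixed-point system x = Hx + f. This sketch assumes a dense matrix with row sums of absolute values below one; it is an illustration of the Neumann-series sampling idea, not the preconditioned hybrid algorithms of the paper.

```python
import random

def mc_linear_solve(H, f, n_walks=20_000, seed=9):
    """Forward random-walk Monte Carlo solver for x = H x + f.

    Each walk samples terms of the Neumann series sum_k (H^k f), using
    transition probabilities p_ij = |H_ij| (so every row must satisfy
    sum_j |H_ij| < 1, which also serves as the continuation
    probability).  A dense-matrix sketch, not a production solver.
    """
    rng = random.Random(seed)
    n = len(f)
    row_abs = [sum(abs(h) for h in row) for row in H]
    x = [0.0] * n
    for i0 in range(n):
        total = 0.0
        for _ in range(n_walks):
            i, weight, est = i0, 1.0, f[i0]
            # continue the walk with probability row_abs[i]
            while row_abs[i] > 0 and rng.random() < row_abs[i]:
                r = rng.random() * row_abs[i]
                cum = 0.0
                for j, h in enumerate(H[i]):
                    cum += abs(h)
                    if r <= cum and h != 0:
                        break
                weight *= 1.0 if H[i][j] > 0 else -1.0  # sign of H_ij
                i = j
                est += weight * f[i]
            total += est
        x[i0] = total / n_walks
    return x
```

For a small 2x2 test system the estimates converge to the exact solution of (I - H)x = f, which is the kind of unbiased stochastic step one can then embed in a Richardson iteration.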
Fuel cell-fuel cell hybrid system
Geisbrecht, Rodney A.; Williams, Mark C.
2003-09-23
A device for converting chemical energy to electricity is provided, the device comprising a high temperature fuel cell with the ability for partially oxidizing and completely reforming fuel, and a low temperature fuel cell juxtaposed to said high temperature fuel cell so as to utilize remaining reformed fuel from the high temperature fuel cell. Also provided is a method for producing electricity comprising directing fuel to a first fuel cell, completely oxidizing a first portion of the fuel and partially oxidizing a second portion of the fuel, directing the second fuel portion to a second fuel cell, allowing the first fuel cell to utilize the first portion of the fuel to produce electricity; and allowing the second fuel cell to utilize the second portion of the fuel to produce electricity.
Stochastic spatial structured model for vertically and horizontally transmitted infection
NASA Astrophysics Data System (ADS)
Silva, Ana T. C.; Assis, Vladimir R. V.; Pinho, Suani T. R.; Tomé, Tânia; de Oliveira, Mário J.
2017-02-01
We study a spatially structured stochastic model for vertically and horizontally transmitted infection. By means of simple and pair mean-field approximations as well as Monte Carlo simulations, we construct the phase diagram, which displays four states: healthy (H), infected (I), extinct (E), and coexistent (C). In state H only healthy hosts are present, whereas in state I only infected hosts are present. State E is characterized by the extinction of the hosts, whereas in state C infected and healthy hosts coexist. In addition to the usual scenario with continuous transitions between the I, C, and H phases, we found a different scenario with the suppression of the C phase and a discontinuous phase transition between the I and H phases.
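The Monte Carlo side of such a study can be sketched with a toy lattice simulation. Site states, update rules, and all rate values below are illustrative assumptions chosen to exhibit vertical and horizontal transmission, not the model or parameters of the paper.

```python
import random

def simulate_lattice(L=32, steps=200_000, birth=1.0, death=0.2,
                     infection=0.5, vert_p=0.8, seed=0):
    """Toy square-lattice model with vertical and horizontal infection.

    Site states: 0 empty, 1 healthy host, 2 infected host.  Hosts give
    birth into empty neighbours (an infected parent transmits to its
    offspring with probability vert_p: vertical transmission), infected
    hosts infect healthy neighbours (horizontal transmission), and all
    hosts die at a fixed rate.  Returns the final state counts.
    """
    rng = random.Random(seed)
    grid = [[rng.choice([1, 2]) for _ in range(L)] for _ in range(L)]
    total = birth + death + infection
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        state = grid[i][j]
        if state == 0:
            continue
        di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        ni, nj = (i + di) % L, (j + dj) % L       # periodic boundaries
        r = rng.random() * total
        if r < death:
            grid[i][j] = 0                        # host death
        elif r < death + birth:
            if grid[ni][nj] == 0:                 # birth into empty site
                child_infected = state == 2 and rng.random() < vert_p
                grid[ni][nj] = 2 if child_infected else 1
        elif state == 2 and grid[ni][nj] == 1:
            grid[ni][nj] = 2                      # horizontal infection
    counts = [0, 0, 0]
    for row in grid:
        for s in row:
            counts[s] += 1
    return counts  # [empty, healthy, infected]
```

Scanning the rates and tallying which of the H, I, E, or C outcomes survives at long times is how a phase diagram like the one in the abstract would be mapped out numerically.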
Stochastic-Dynamical Modeling of Space Time Rainfall
NASA Technical Reports Server (NTRS)
Georgakakos, Konstantine P.
1997-01-01
The focus of this research work is the elucidation of the physical origins of the observed extreme-rainfall variability over tropical oceans. The quantitative results of this work may be used to establish links between deterministic models of the mesoscale and synoptic scale with statistical descriptions of the temporal variability of local tropical oceanic rainfall. In addition, they may be used to quantify the influence of measurement error in large-scale forcing and cloud scale observations on the accuracy of local rainfall variability inferences, important for hydrologic studies. A simple statistical-dynamical model, suitable for use in repetitive Monte Carlo experiments, is formulated as a diagnostic tool for this purpose. Stochastic processes with temporal structure and parameters estimated from observed large-scale data represent large-scale forcing.
Stochastic Particle Real Time Analyzer (SPARTA) Validation and Verification Suite
Gallis, Michael A.; Koehler, Timothy P.; Plimpton, Steven J.
2014-10-01
This report presents the test cases used to verify, validate and demonstrate the features and capabilities of the first release of the 3D Direct Simulation Monte Carlo (DSMC) code SPARTA (Stochastic Particle Real Time Analyzer). The test cases included in this report exercise the most critical capabilities of the code, such as the accurate representation of physical phenomena (molecular advection and collisions, energy conservation, etc.) and the implementation of numerical methods (grid adaptation, load balancing, etc.). Several test cases of simple flow examples are shown to demonstrate that the code can reproduce phenomena predicted by analytical solutions and theory. A number of additional test cases are presented to illustrate the ability of SPARTA to model flow around complicated shapes. In these cases, the results are compared to other well-established codes or theoretical predictions. This compilation of test cases is not exhaustive, and it is anticipated that more cases will be added in the future.
A stochastic model for the analysis of maximum daily temperature
NASA Astrophysics Data System (ADS)
Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.
2016-08-01
In this paper, a stochastic model for the analysis of daily maximum temperature is proposed. First, a deseasonalization procedure based on a truncated Fourier expansion was adopted. Then, Johnson transformation functions were applied for data normalization. Finally, a fractionally integrated autoregressive moving average model was used to reproduce both the short- and long-memory behavior of the temperature series. The model was applied to data from the Cosenza gauge (Calabria region) and verified on four other gauges of southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 10^5 years of daily maximum temperature were generated. Among the possible applications of the model, the occurrence probabilities of annual maximum values were evaluated. Moreover, the procedure was applied to estimate the return periods of long sequences of days with maximum temperature above prefixed thresholds.
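The deseasonalization step can be sketched as a least-squares Fourier fit and subtraction. This is a minimal illustration of a truncated Fourier deseasonalization; the Johnson normalization and fractional ARIMA stages of the paper are omitted, and the harmonic count is an assumption.

```python
import math

def deseasonalize(series, n_harmonics=2, period=365.25):
    """Remove a truncated Fourier seasonal cycle from a daily series.

    Fits the mean plus n_harmonics sine/cosine pairs by projection and
    subtracts them, the first step of the modelling chain in the
    abstract.  A minimal sketch, not the paper's procedure.
    """
    n = len(series)
    mean = sum(series) / n
    resid = [x - mean for x in series]
    out = resid[:]
    for k in range(1, n_harmonics + 1):
        cos_k = [math.cos(2 * math.pi * k * t / period) for t in range(n)]
        sin_k = [math.sin(2 * math.pi * k * t / period) for t in range(n)]
        # projection coefficients of the k-th harmonic
        a_k = 2 * sum(r * c for r, c in zip(resid, cos_k)) / n
        b_k = 2 * sum(r * s for r, s in zip(resid, sin_k)) / n
        out = [o - a_k * c - b_k * s
               for o, c, s in zip(out, cos_k, sin_k)]
    return out
```

Applied to a synthetic series with a pure annual cycle, the residual is essentially flat, confirming the cycle has been absorbed by the fit.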
Simulating rare events in equilibrium or nonequilibrium stochastic systems.
Allen, Rosalind J; Frenkel, Daan; ten Wolde, Pieter Rein
2006-01-14
We present three algorithms for calculating rate constants and sampling transition paths for rare events in simulations with stochastic dynamics. The methods do not require a priori knowledge of the phase-space density and are suitable for equilibrium or nonequilibrium systems in a stationary state. All the methods use a series of interfaces in phase space, between the initial and final states, to generate transition paths as chains of connected partial paths, in a ratchetlike manner. No assumptions are made about the distribution of paths at the interfaces. The three methods differ in the way that the transition path ensemble is generated. We apply the algorithms to kinetic Monte Carlo simulations of a genetic switch and to Langevin dynamics simulations of intermittently driven polymer translocation through a pore. We find that the three methods are all of comparable efficiency, and that all the methods are much more efficient than brute-force simulation.
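The interface idea can be demonstrated with a bare-bones forward-flux-style calculation on a one-dimensional double well: the rate is estimated as the flux through the first interface times the product of conditional crossing probabilities. The potential, interface positions, and noise level are illustrative assumptions, and this sketch corresponds to only the simplest of the three path-generation variants discussed.

```python
import math, random

def ffs_rate(noise=0.7, dt=0.01,
             interfaces=(-0.8, -0.4, 0.0, 0.4, 0.8),
             n_flux_steps=50_000, n_trials=200, seed=2):
    """Bare-bones interface-based rate estimate for a 1D double well.

    Overdamped Brownian dynamics in V(x) = (x^2 - 1)^2 / 4, basin A
    near x = -1.  Rate = (flux through the first interface) times the
    product over interfaces of P(reach the next interface before
    falling back below the first one).  Parameters are illustrative.
    """
    rng = random.Random(seed)

    def step(x):
        return (x - dt * x * (x * x - 1)        # deterministic force -dV/dx
                + noise * math.sqrt(dt) * rng.gauss(0, 1))

    # 1) flux of first crossings of interfaces[0] out of basin A
    x, below, starts = -1.0, True, []
    for _ in range(n_flux_steps):
        x = step(x)
        if below and x > interfaces[0]:
            starts.append(x)
            below = False
        elif x < -1.0:
            below = True
    if not starts:
        return 0.0
    rate = len(starts) / (n_flux_steps * dt)

    # 2) conditional probabilities, interface by interface
    for i in range(len(interfaces) - 1):
        successes, new_starts = 0, []
        for _ in range(n_trials):
            x = rng.choice(starts)
            while interfaces[0] < x < interfaces[i + 1]:
                x = step(x)
            if x >= interfaces[i + 1]:
                successes += 1
                new_starts.append(x)
        if successes == 0:
            return 0.0
        rate *= successes / n_trials
        starts = new_starts
    return rate
```

Because each stage only needs partial paths between neighbouring interfaces, the scheme reaches the final basin far more cheaply than waiting for spontaneous brute-force crossings.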
Stochastic Simulations of Pattern Formation in Excitable Media
Vigelius, Matthias; Meyer, Bernd
2012-01-01
We present a method for mesoscopic, dynamic Monte Carlo simulations of pattern formation in excitable reaction–diffusion systems. Using a two-level parallelization approach, our simulations cover the whole range of the parameter space, from the noise-dominated low-particle number regime to the quasi-deterministic high-particle number limit. Three qualitatively different case studies are performed that stand exemplary for the wide variety of excitable systems. We present mesoscopic stochastic simulations of the Gray-Scott model, of a simplified model for intracellular Ca oscillations and, for the first time, of the Oregonator model. We achieve simulations with up to particles. The software and the model files are freely available and researchers can use the models to reproduce our results or adapt and refine them for further exploration. PMID:22900025
GPU Computing in Bayesian Inference of Realized Stochastic Volatility Model
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2015-01-01
The realized stochastic volatility (RSV) model, which uses realized volatility as additional information, has been proposed to infer the volatility of financial time series. We consider Bayesian inference of the RSV model by the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on the GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time of the HMC algorithm on a GPU (GTX 760) and a CPU (Intel i7-4770, 3.4 GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a speedup similar to that of CUDA Fortran.
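The leapfrog-plus-Metropolis structure of HMC can be shown on a toy target. This is a single-chain CPU sketch for a standard normal, U(q) = q^2/2, chosen so the output is easy to verify; it is not the paper's RSV model or its CUDA Fortran code, and the step size and trajectory length are assumptions.

```python
import math, random

def hmc_sample(n_samples=2000, eps=0.1, n_leap=20, seed=3):
    """Minimal Hybrid (Hamiltonian) Monte Carlo sampler.

    Targets a standard normal via the potential U(q) = q^2 / 2.  Each
    iteration draws a fresh momentum, integrates Hamilton's equations
    with the leapfrog scheme, and accepts or rejects on the total
    energy change.
    """
    rng = random.Random(seed)
    grad_u = lambda q: q                 # dU/dq for U(q) = q^2 / 2
    q, out = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0, 1)              # fresh momentum
        q_new, p_new = q, p
        # leapfrog integration
        p_new -= 0.5 * eps * grad_u(q_new)
        for _ in range(n_leap - 1):
            q_new += eps * p_new
            p_new -= eps * grad_u(q_new)
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_u(q_new)
        # Metropolis accept/reject on the energy change
        h_old = 0.5 * (q * q + p * p)
        h_new = 0.5 * (q_new * q_new + p_new * p_new)
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new
        out.append(q)
    return out
```

The per-iteration leapfrog loop is what the abstract parallelizes across data points on the GPU; here the sample mean and variance should match the standard normal target.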
Cancer growth dynamics: stochastic models and noise induced effects
NASA Astrophysics Data System (ADS)
Spagnolo, B.; Fiasconaro, A.; Pizzolato, N.; Valenti, D.; Adorno, D. Persano; Caldara, P.; Ochab-Marcinek, A.; Gudowska-Nowak, E.
2009-04-01
In the framework of the Michaelis-Menten (MM) reaction kinetics, we analyze the cancer growth dynamics in the presence of the immune response. We found the coexistence of noise enhanced stability (NES) and resonant activation (RA) phenomena, which act in opposite ways with respect to the extinction of the tumor. The role of stochastic resonance (SR) in the case of weak cancer therapy has been analyzed. The evolutionary dynamics of a system of cancerous cells in a model of chronic myeloid leukemia (CML) is investigated by a Monte Carlo approach. We analyzed the effects of a targeted therapy on the evolutionary dynamics of normal, first-mutant and cancerous cell populations. We show how the patient response to the therapy changes when a high mutation rate from healthy to cancerous cells is present. Our results are in agreement with clinical observations.
Stochastic Functional Data Analysis: A Diffusion Model-based Approach
Zhu, Bin; Song, Peter X.-K.; Taylor, Jeremy M.G.
2011-01-01
This paper presents a new modeling strategy in functional data analysis. We consider the problem of estimating an unknown smooth function given functional data with noise. The unknown function is treated as the realization of a stochastic process, which is incorporated into a diffusion model. The method of smoothing spline estimation is connected to a special case of this approach. The resulting models offer great flexibility to capture the dynamic features of functional data, and allow straightforward and meaningful interpretation. The likelihood of the models is derived with Euler approximation and data augmentation. A unified Bayesian inference method is carried out via a Markov Chain Monte Carlo algorithm including a simulation smoother. The proposed models and methods are illustrated on some prostate specific antigen data, where we also show how the models can be used for forecasting. PMID:21418053
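The generative side of such a diffusion model can be sketched with a forward Euler simulation: a latent smooth function driven by an integrated diffusion, observed with additive noise. The specific prior (dU = V dt, dV = sigma dW) and all parameter values are illustrative assumptions; the paper's Bayesian inference machinery (data augmentation, MCMC, simulation smoother) is not reproduced here.

```python
import random

def simulate_path(n=100, dt=0.1, sigma_proc=0.5, sigma_obs=0.2, seed=4):
    """Euler approximation of a diffusion-model prior plus noisy data.

    The latent function follows an integrated diffusion (dU = V dt,
    dV = sigma_proc dW), a smoothness prior related to smoothing
    splines as the abstract notes; observations add i.i.d. Gaussian
    noise.  Returns the latent path and the noisy observations.
    """
    rng = random.Random(seed)
    u, v = 0.0, 0.0
    latent, obs = [], []
    for _ in range(n):
        v += sigma_proc * dt ** 0.5 * rng.gauss(0, 1)  # Euler step for V
        u += v * dt                                    # U integrates V
        latent.append(u)
        obs.append(u + sigma_obs * rng.gauss(0, 1))    # noisy observation
    return latent, obs
```

Simulating from the prior like this is also the building block of the Euler-discretized likelihood that the paper's MCMC targets.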
Zuk, Pawel J; Kochańczyk, Marek; Jaruszewicz, Joanna; Bednorz, Witold; Lipniacki, Tomasz
2012-10-01
Living cells may be considered as biochemical reactors of multiple steady states. Transitions between these states are enabled by noise, or, in spatially extended systems, may occur due to the traveling wave propagation. We analyze a one-dimensional bistable stochastic birth-death process by means of potential and temperature fields. The potential is defined by the deterministic limit of the process, while the temperature field is governed by noise. The stable steady state in which the potential has its global minimum defines the global deterministic attractor. For the stochastic system, in the low noise limit, the stationary probability distribution becomes unimodal, concentrated in one of two stable steady states, defined in this study as the global stochastic attractor. Interestingly, these two attractors may be located in different steady states. This observation suggests that the asymptotic behavior of spatially extended stochastic systems depends on the substrate diffusivity and size of the reactor. We confirmed this hypothesis within kinetic Monte Carlo simulations of a bistable reaction-diffusion model on the hexagonal lattice. In particular, we found that although the kinase-phosphatase system remains inactive in a small domain, the activatory traveling wave may propagate when a larger domain is considered.
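A bistable one-dimensional birth-death process of the kind analyzed here can be simulated exactly with the Gillespie algorithm. The Hill-type birth rate and all rate constants below are illustrative assumptions chosen to give two deterministic stable states, not the paper's kinase-phosphatase model.

```python
import math, random

def gillespie_bistable(x0=10, t_max=200.0, seed=5):
    """Gillespie simulation of a bistable 1D birth-death process.

    Birth rate a + b*x^4 / (K^4 + x^4) against linear death d*x gives
    two deterministic stable states (near x ~ 4 and x ~ 63 for the
    constants below); noise lets trajectories hop between them on long
    timescales, the situation described by the potential/temperature
    picture in the abstract.  Rate constants are illustrative.
    """
    rng = random.Random(seed)
    a, b, K, d = 4.0, 60.0, 25.0, 1.0
    t, x, traj = 0.0, x0, [x0]
    while t < t_max:
        birth = a + b * x ** 4 / (K ** 4 + x ** 4)
        death = d * x
        total = birth + death
        t += -math.log(1.0 - rng.random()) / total  # exponential waiting time
        if rng.random() * total < birth:
            x += 1
        else:
            x -= 1
        traj.append(x)
    return traj
```

Histogramming long trajectories at different noise strengths (here, system sizes) is how the unimodal low-noise stationary distribution, and hence the global stochastic attractor, would be located.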
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
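The MLMC idea can be sketched on a toy problem: estimate E[S_T] for geometric Brownian motion, where level l uses 2^l Euler steps and neighbouring levels are coupled through shared Brownian increments so the correction terms have small variance. The stochastic differential equation and sample sizes are illustrative assumptions, not the subsurface-flow application of the paper.

```python
import math, random

def mlmc_estimate(n_levels=4, n0=2000, seed=6):
    """Multilevel Monte Carlo for E[S_T] of geometric Brownian motion.

    dS = r*S dt + v*S dW with S_0 = 1, so the exact answer is
    exp(r*T).  Level l uses 2**l Euler steps; each fine path shares
    its Brownian increments with the coarse path it is corrected
    against, the key coupling behind MLMC variance reduction.
    """
    rng = random.Random(seed)
    r, v, T = 0.05, 0.2, 1.0

    def euler_pair(level):
        """Coupled fine (2**level steps) and coarse (half) paths."""
        n_f = 2 ** level
        dt = T / n_f
        s_f = s_c = 1.0
        for k in range(n_f):
            dw = math.sqrt(dt) * rng.gauss(0, 1)
            s_f += r * s_f * dt + v * s_f * dw
            if level == 0:
                continue
            if k % 2 == 0:
                dw_pair = dw                  # store first half-increment
            else:                             # coarse step uses the sum
                s_c += r * s_c * (2 * dt) + v * s_c * (dw_pair + dw)
        return s_f, s_c

    est = 0.0
    for level in range(n_levels):
        n_samp = max(n0 // 2 ** level, 100)   # fewer samples on fine levels
        acc = 0.0
        for _ in range(n_samp):
            s_f, s_c = euler_pair(level)
            acc += s_f if level == 0 else s_f - s_c
        est += acc / n_samp
    return est
```

Because the level-l correction E[S_f - S_c] shrinks as the grids refine, most samples are spent on the cheap coarse level, which is exactly how MLMC beats plain MC at fixed accuracy.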
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
Many problems in statistics can be investigated meaningfully through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
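The basic recipe described here can be sketched in a few lines: draw many samples from a known population, compute a statistic on each, and examine the resulting empirical sampling distribution. The choice of the median of 15 standard-normal observations is an illustrative assumption.

```python
import random, statistics

def sampling_distribution(stat=statistics.median, n=15,
                          reps=2000, seed=7):
    """Monte Carlo study of a statistic's sampling distribution.

    Draws `reps` samples of size `n` from a known population (standard
    normal here) and collects the statistic; the empirical mean and
    spread of the collection approximate the statistic's true sampling
    behaviour, even when no closed-form expression is available.
    """
    rng = random.Random(seed)
    values = [stat([rng.gauss(0, 1) for _ in range(n)])
              for _ in range(reps)]
    return statistics.mean(values), statistics.stdev(values)
```

Swapping in any other statistic for `stat` (a trimmed mean, a range, a variance ratio) turns the same loop into a new Monte Carlo study, which is the pedagogical point of the abstract.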
Quantum Monte Carlo Calculations of Transition Metal Oxides
NASA Astrophysics Data System (ADS)
Wagner, Lucas
2006-03-01
Quantum Monte Carlo is a powerful computational tool to study correlated systems, allowing us to explicitly treat many-body interactions with favorable scaling in the number of particles. It has been regarded as a benchmark tool for first and second row condensed matter systems, although its accuracy has not been thoroughly investigated in strongly correlated transition metal oxides. QMC has also historically suffered from the mixed estimator error in operators that do not commute with the Hamiltonian and from stochastic uncertainty, which make small energy differences unattainable. Using the Reptation Monte Carlo algorithm of Moroni and Baroni (along with contributions from others), we have developed a QMC framework that makes these previously unavailable quantities computationally feasible for systems of hundreds of electrons in a controlled and consistent way, and apply this framework to transition metal oxides. We compare these results with traditional mean-field results like the LDA and with experiment where available, focusing in particular on the polarization and lattice constants in a few interesting ferroelectric materials. This work was performed in collaboration with Lubos Mitas and Jeffrey Grossman.
Stochastic many-body perturbation theory for anharmonic molecular vibrations
Hermes, Matthew R.; Hirata, So
2014-08-28
A new quantum Monte Carlo (QMC) method for anharmonic vibrational zero-point energies and transition frequencies is developed, which combines the diagrammatic vibrational many-body perturbation theory based on the Dyson equation with Monte Carlo integration. The infinite sums of the diagrammatic and thus size-consistent first- and second-order anharmonic corrections to the energy and self-energy are expressed as sums of a few m- or 2m-dimensional integrals of wave functions and a potential energy surface (PES) (m is the vibrational degrees of freedom). Each of these integrals is computed as the integrand (including the value of the PES) divided by the value of a judiciously chosen weight function evaluated on demand at geometries distributed randomly but according to the weight function via the Metropolis algorithm. In this way, the method completely avoids cumbersome evaluation and storage of high-order force constants necessary in the original formulation of the vibrational perturbation theory; it furthermore allows even higher-order force constants essentially up to an infinite order to be taken into account in a scalable, memory-efficient algorithm. The diagrammatic contributions to the frequency-dependent self-energies that are stochastically evaluated at discrete frequencies can be reliably interpolated, allowing the self-consistent solutions to the Dyson equation to be obtained. This method, therefore, can compute directly and stochastically the transition frequencies of fundamentals and overtones as well as their relative intensities as pole strengths, without fixed-node errors that plague some QMC. It is shown that, for an identical PES, the new method reproduces the correct deterministic values of the energies and frequencies within a few cm^-1 and pole strengths within a few thousandths. With the values of a PES evaluated on the fly at random geometries, the new method captures a noticeably greater proportion of anharmonic effects.
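The core numerical device, evaluating an integral as an average of integrand-over-weight at Metropolis-sampled geometries, can be shown on a one-dimensional toy: sampling x from the weight w(x) proportional to exp(-x^2) and averaging x^4 gives the exact expectation 3/4. The weight, target function, and step size are illustrative assumptions, not the paper's vibrational integrals.

```python
import math, random

def metropolis_expectation(n_samples=100_000, step=1.0, seed=8):
    """Metropolis estimate of an expectation against a chosen weight.

    Samples x from w(x) ~ exp(-x^2) with random-walk Metropolis and
    averages x^4, mimicking how the abstract's method evaluates
    integrals of a PES against a judiciously chosen weight function
    evaluated on demand.  Exact answer for this weight: <x^4> = 3/4.
    """
    rng = random.Random(seed)
    log_w = lambda x: -x * x             # log of the unnormalized weight
    x, acc = 0.0, 0.0
    for _ in range(n_samples):
        prop = x + step * rng.uniform(-1, 1)
        # accept with probability min(1, w(prop)/w(x))
        if rng.random() < math.exp(min(0.0, log_w(prop) - log_w(x))):
            x = prop
        acc += x ** 4
    return acc / n_samples
```

Because the weight is only ever evaluated pointwise, the same pattern scales to the high-dimensional on-the-fly PES evaluations described above, with no precomputed force constants.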
Fuels research: Fuel thermal stability overview
NASA Technical Reports Server (NTRS)
Cohen, S. M.
1980-01-01
Alternative fuels or crude supplies are examined with respect to satisfying aviation fuel needs for the next 50 years. The thermal stability of potential future fuels is discussed and the effects of these characteristics on aircraft fuel systems are examined. Advanced fuel system technology and design guidelines for future fuels with lower thermal stability are reported.
Quantum Monte Carlo method applied to non-Markovian barrier transmission
NASA Astrophysics Data System (ADS)
Hupin, Guillaume; Lacroix, Denis
2010-01-01
In nuclear fusion and fission, fluctuation and dissipation arise because of the coupling of collective degrees of freedom with internal excitations. Close to the barrier, quantum, statistical, and non-Markovian effects are expected to be important. In this work, a new approach to this problem based on quantum Monte Carlo is presented. The exact dynamics of a system coupled to an environment is replaced by a set of stochastic evolutions of the system density. The quantum Monte Carlo method is applied to systems with quadratic potentials. In all ranges of temperature and coupling, the stochastic method matches the exact evolution, showing that non-Markovian effects can be simulated accurately. A comparison with other theories, such as Nakajima-Zwanzig or time-convolutionless, shows that only the latter can be competitive if the expansion in terms of the coupling constant is made at least to fourth order. A systematic study of the inverted parabola case is made at different temperatures and coupling constants. The asymptotic passing probability is estimated by different approaches, including the Markovian limit. Large differences from the exact result are seen in the latter case, or when only second order in the coupling strength is considered, as is generally assumed in nuclear transport models. In contrast, if fourth order in the coupling is retained or the quantum Monte Carlo method is used, perfect agreement is obtained.
NASA Astrophysics Data System (ADS)
Shchurovskaya, M. V.; Alferov, V. P.; Geraskin, N. I.; Radaev, A. I.
2017-01-01
The results of the validation of research reactor calculations using Monte Carlo and deterministic codes against experimental data, based on code-to-code comparison, are presented. The continuous-energy Monte Carlo code MCU-PTR and the nodal diffusion-based deterministic code TIGRIS were used for full 3-D calculations of the IRT MEPhI research reactor. The validation included investigations of the reactor with the existing high enriched uranium (HEU, 90 w/o) fuel and low enriched uranium (LEU, 19.7 w/o, U-9%Mo) fuel.
Bean, R.W.
1963-11-19
A ceramic fuel element for a nuclear reactor that has improved structural stability as well as improved cooling and fission product retention characteristics is presented. The fuel element includes a plurality of stacked hollow ceramic moderator blocks arranged along a tubular metallic shroud that encloses a series of axially apertured moderator cylinders spaced inwardly of the shroud. A plurality of ceramic nuclear fuel rods are arranged in the annular space between the shroud and the moderator cylinders, and appropriate support means and means for directing gas coolant through the annular space are also provided. (AEC)