Stabilized multilevel Monte Carlo method for stiff stochastic differential equations
Abdulle, Assyr; Blumenthal, Adrian
2013-10-15
A multilevel Monte Carlo (MLMC) method for mean square stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit methods deteriorates because the time step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while keeping the computational algorithm fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
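A minimal sketch of the coupled coarse/fine telescoping estimator at the heart of MLMC, applied to a linear test SDE with plain Euler-Maruyama (illustrative only; the stabilized integrators of the entry above are not reproduced, and all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_path(x0, lam, sig, T, n_fine):
    """Coupled coarse/fine Euler-Maruyama pair for dX = -lam*X dt + sig*dW.

    The fine path takes n_fine steps; the coarse path takes n_fine/2 steps
    driven by the summed fine-level Brownian increments, which keeps the
    level difference low-variance.
    """
    h = T / n_fine
    xf = xc = x0
    for _ in range(n_fine // 2):
        dw1, dw2 = rng.normal(0.0, np.sqrt(h), size=2)
        xf += -lam * xf * h + sig * dw1          # two fine steps
        xf += -lam * xf * h + sig * dw2
        xc += -lam * xc * (2 * h) + sig * (dw1 + dw2)  # one coarse step, same noise
    return xf, xc

def mlmc_mean(x0, lam, sig, T, samples_per_level):
    """Telescoping MLMC estimate of E[X_T]; level l uses 2**l fine steps."""
    est = 0.0
    for lvl, n_samp in enumerate(samples_per_level):
        acc = 0.0
        for _ in range(n_samp):
            if lvl == 0:
                # level 0: a single coarse Euler step over [0, T]
                acc += x0 + (-lam * x0 * T + sig * rng.normal(0.0, np.sqrt(T)))
            else:
                xf, xc = coupled_path(x0, lam, sig, T, 2 ** lvl)
                acc += xf - xc                   # level-difference sample
        est += acc / n_samp
    return est

# geometrically decaying sample counts across 4 levels (hypothetical choice)
estimate = mlmc_mean(1.0, 1.0, 0.1, 1.0, [4000, 2000, 1000, 500])
```

The sum of level differences telescopes to the finest-level Euler estimate of E[X_T] = e^{-1}, while most samples are spent on the cheap coarse levels.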
Optimization of Monte Carlo transport simulations in stochastic media
Liang, C.; Ji, W.
2012-07-01
This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
Protein folding and phylogenetic tree reconstruction using stochastic approximation Monte Carlo
Cheon, Sooyoung
2007-09-17
Recently, the stochastic approximation Monte Carlo algorithm has been proposed by Liang et al. (2005) as a general-purpose stochastic optimization and simulation algorithm. An annealing version of this algorithm was developed for real small protein...
Monte Carlo stochastic-dynamics study of dielectric response and nonergodicity in proton glass
The dielectric response and nonergodicity of proton glass crystals (with Rb or K, H or D, and P or As substitutions), which can be grown over the whole composition range 0 ≤ x ≤ 1, have been simulated using the Monte Carlo stochastic-dynamics method. The frustrated FE and AFE interactions suppress both...
Semi-stochastic full configuration interaction quantum Monte Carlo: developments and application
Blunt, N S; Kersten, J A F; Spencer, J S; Booth, George H; Alavi, Ali
2015-01-01
We expand upon the recent semi-stochastic adaptation to full configuration interaction quantum Monte Carlo (FCIQMC). We present an alternate method for generating the deterministic space without a priori knowledge of the wave function and demonstrate the resulting gains in stochastic efficiency for a variety of both molecular and lattice systems. The algorithmic details of an efficient semi-stochastic implementation are presented, with particular consideration given to the effect that the adaptation has on parallel performance in FCIQMC. We further demonstrate the benefit for calculation of reduced density matrices in FCIQMC through replica sampling, where the semi-stochastic adaptation seems to have even larger efficiency gains. We then combine these ideas to produce explicitly correlated corrected FCIQMC energies for the beryllium dimer, for which stochastic errors on the order of wavenumber accuracy are achievable.
Semi-stochastic full configuration interaction quantum Monte Carlo: Developments and application
NASA Astrophysics Data System (ADS)
Blunt, N. S.; Smart, Simon D.; Kersten, J. A. F.; Spencer, J. S.; Booth, George H.; Alavi, Ali
2015-05-01
We expand upon the recent semi-stochastic adaptation to full configuration interaction quantum Monte Carlo (FCIQMC). We present an alternate method for generating the deterministic space without a priori knowledge of the wave function and present stochastic efficiencies for a variety of both molecular and lattice systems. The algorithmic details of an efficient semi-stochastic implementation are presented, with particular consideration given to the effect that the adaptation has on parallel performance in FCIQMC. We further demonstrate the benefit for calculation of reduced density matrices in FCIQMC through replica sampling, where the semi-stochastic adaptation seems to have even larger efficiency gains. We then combine these ideas to produce explicitly correlated corrected FCIQMC energies for the beryllium dimer, for which stochastic errors on the order of wavenumber accuracy are achievable.
A Comparison of Monte Carlo Particle Transport Algorithms for Binary Stochastic Mixtures
Brantley, P S
2009-02-23
Two Monte Carlo algorithms, originally proposed by Zimmerman and by Zimmerman and Adams, for particle transport through a binary stochastic mixture are numerically compared using a standard set of planar geometry benchmark problems. In addition to previously published comparisons of the ensemble-averaged probabilities of reflection and transmission, we include comparisons of detailed ensemble-averaged total and material scalar flux distributions. Because not all benchmark scalar flux distribution data used to produce plots in previous publications remain available, we have independently regenerated the benchmark solutions, including scalar flux distributions. Both Monte Carlo transport algorithms robustly produce physically realistic scalar flux distributions for the transport problems examined. The first algorithm reproduces the standard Levermore-Pomraning model results for the probabilities of reflection and transmission. The second algorithm generally produces significantly more accurate probabilities of reflection and transmission and also significantly more accurate total and material scalar flux distributions.
Brantley, P S
2009-06-30
Particle transport through binary stochastic mixtures has received considerable research attention in the last two decades. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that should be more accurate as a result of improved local material realization modeling. Zimmerman and Adams numerically confirmed these aspects of the Monte Carlo algorithms by comparing the reflection and transmission values computed using these algorithms to a standard suite of planar geometry binary stochastic mixture benchmark transport solutions. The benchmark transport problems are driven by an isotropic angular flux incident on one boundary of a binary Markovian statistical planar geometry medium. In a recent paper, we extended the benchmark comparisons of these Monte Carlo algorithms to include the scalar flux distributions produced. This comparison is important, because as demonstrated, an approximate model that gives accurate reflection and transmission probabilities can produce unphysical scalar flux distributions. Brantley and Palmer recently investigated the accuracy of the Levermore-Pomraning model using a new interior source binary stochastic medium benchmark problem suite. In this paper, we further investigate the accuracy of the Monte Carlo algorithms proposed by Zimmerman and Adams by comparing to the benchmark results from the interior source binary stochastic medium benchmark suite, including scalar flux distributions. Because the interior source scalar flux distributions are of an inherently different character than the distributions obtained for the incident angular flux benchmark problems, the present benchmark comparison extends the domain of problems for which the accuracy of these Monte Carlo algorithms has been investigated.
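The realization modeling behind these comparisons rests on sampling binary Markovian mixtures, in which chord lengths in each material are exponentially distributed. A minimal sketch of generating one-dimensional realizations and checking the ensemble-averaged volume fraction (mean chord lengths and slab size are hypothetical):

```python
import random

def sample_realization(lam0, lam1, slab_len, rng):
    """One 1-D binary Markovian mixture realization.

    Chord lengths in material i are exponential with mean lam_i; returns a
    list of (material, segment_length) pairs covering [0, slab_len].
    """
    segs, x = [], 0.0
    # starting material drawn from the stationary probability lam_i/(lam0+lam1)
    mat = 0 if rng.random() < lam0 / (lam0 + lam1) else 1
    while x < slab_len:
        mean = lam0 if mat == 0 else lam1
        chord = rng.expovariate(1.0 / mean)
        seg = min(chord, slab_len - x)   # clip the last chord at the boundary
        segs.append((mat, seg))
        x += seg
        mat = 1 - mat                    # materials alternate along the line
    return segs

rng = random.Random(42)
n_real, slab = 2000, 10.0
frac0 = 0.0
for _ in range(n_real):
    segs = sample_realization(1.0, 0.5, slab, rng)
    frac0 += sum(s for m, s in segs if m == 0) / slab
frac0 /= n_real
# ensemble-averaged material-0 volume fraction should approach lam0/(lam0+lam1) = 2/3
```

A transport code would then track particles through each realization and ensemble-average the tallies, which is the expensive step the Levermore-Pomraning model approximates.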
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost. PMID:26072868
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by implementing the F-test evaluation and design of experiments, and then an incomplete fourth-order polynomial response surface model (RSM) is developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and a GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
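The RSM-plus-MCS idea can be sketched in miniature: fit a cheap polynomial surrogate to a few expensive model runs, then sample the surrogate freely. The one-parameter "model" below is a hypothetical stand-in, not the paper's finite-element system:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(k):
    """Stand-in for a costly solve: natural frequency of a 1-DOF system, m = 2."""
    return np.sqrt(k / 2.0)

# design of experiments: a handful of expensive evaluations over the k-range
k_doe = np.linspace(50.0, 150.0, 9)
f_doe = expensive_model(k_doe)

# quadratic response surface fitted to the DOE points (cheap surrogate)
surrogate = np.poly1d(np.polyfit(k_doe, f_doe, 2))

# Monte Carlo on the surrogate: propagate stiffness uncertainty at low cost
k_samples = rng.normal(100.0, 10.0, 100_000)
f_samples = surrogate(k_samples)
mean_f, std_f = f_samples.mean(), f_samples.std()
```

The 100,000 surrogate evaluations cost microseconds, whereas the same sampling against the full model would be prohibitive; this is the rapid-random-sampling gain the abstract refers to.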
Reduced Monte Carlo methods for the solution of stochastic groundwater flow problems
NASA Astrophysics Data System (ADS)
Pasetto, D.; Guadagnini, A.; Putti, M.
2012-04-01
Reduced order modeling is often employed to decrease the computational cost of numerical solutions of parametric Partial Differential Equations. Reduced basis, balanced truncation, and projection methods are among the most studied techniques for achieving model reduction. We study the applicability of snapshot-based Proper Orthogonal Decomposition (POD) to Monte Carlo (MC) simulations applied to the solution of the stochastic groundwater flow problem. POD model reduction is obtained by projecting the model equations onto a space generated by a small number of basis functions (principal components). These are obtained by exploring the solution (probability) space with snapshots, i.e., system states obtained by solving the original process-based equations. The reduced model is then employed to complete the ensemble by adding multiple realizations. We apply this technique to a two-dimensional simulation of steady-state saturated groundwater flow, and explore the sensitivity of the method to the number of snapshots and associated principal components in terms of accuracy and efficiency of the overall MC procedure. In our preliminary results, we distinguish the problem of heterogeneous recharge, in which the stochastic term is confined to the forcing function (additive stochasticity), from the case of heterogeneous hydraulic conductivity, in which the stochastic term is multiplicative. In the first scenario, the linearity of the problem is fully exploited and the POD approach yields accurate and efficient realizations, leading to a substantial speed-up of the MC method. The second scenario poses a significant challenge, as the adoption of a few snapshots based on the full model does not provide enough variability in the reduced order replicates, thus leading to poor convergence of the MC method. We find that increasing the number of snapshots improves the convergence of MC, but only for large integral scales of the log-conductivity field.
The technique is then extended to take full advantage of the solution of moment differential equations of groundwater flow.
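Snapshot-based POD as described above can be sketched with a synthetic snapshot ensemble standing in for actual groundwater-flow solves (the mode shapes, amplitudes, and energy threshold are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic snapshot matrix: each column is one model state (e.g. heads on a grid)
n_nodes, n_snap = 200, 30
x = np.linspace(0.0, 1.0, n_nodes)
snapshots = np.column_stack(
    [np.sin(np.pi * x) * rng.normal(1.0, 0.2)        # dominant mode, random amplitude
     + 0.05 * np.sin(3 * np.pi * x) * rng.normal()   # weak secondary mode
     for _ in range(n_snap)]
)

# POD basis = left singular vectors of the mean-centered snapshot matrix
mean_state = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_state, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_basis = int(np.searchsorted(energy, 0.999)) + 1    # modes capturing 99.9% energy

# project a new realization onto the reduced basis and reconstruct it
new_state = 1.1 * np.sin(np.pi * x) + 0.015 * np.sin(3 * np.pi * x)
basis = U[:, :n_basis]
recon = mean_state[:, 0] + basis @ (basis.T @ (new_state - mean_state[:, 0]))
err = np.linalg.norm(recon - new_state) / np.linalg.norm(new_state)
```

In the MC setting, additional realizations are generated in this low-dimensional basis instead of solving the full system, which is where the speed-up comes from; the convergence caveats for multiplicative stochasticity noted above apply unchanged.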
Energy Science and Technology Software Center (ESTSC)
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
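The particle-lifecycle features listed above (creation, tracking to a collision site, tallying, destruction) can be sketched for the simplest possible case, a purely absorbing slab; this is an illustrative toy, not the MCB code itself:

```python
import math
import random

rng = random.Random(7)

def transmit(sigma_t, thickness, n_particles):
    """Analog MC transmission through a purely absorbing 1-D slab.

    Each history is created at x = 0 moving forward, tracked to its sampled
    collision site, tallied if it escapes the far face, then discarded.
    """
    leaked = 0
    for _ in range(n_particles):                    # particle creation
        xi = 1.0 - rng.random()                     # uniform on (0, 1]
        dist = -math.log(xi) / sigma_t              # distance to collision
        if dist > thickness:                        # tracking + tally
            leaked += 1
    return leaked / n_particles                     # histories end here (destruction)

t = transmit(1.0, 2.0, 200_000)
# analytic transmission for comparison: exp(-sigma_t * thickness) = exp(-2)
```

Scattering physics, geometry, and the MPI particle trading are the layers a real benchmark adds on top of this skeleton.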
Accurate Monte Carlo tests of the stochastic Ginzburg-Landau model with multiplicative colored noise
Bao, Jingdong (Inst. of Atomic Energy, Beijing); Zhuo, Yizhong (Academia Sinica, Beijing); Wu, Xizhen
1992-03-01
An accurate and fast Monte Carlo algorithm is proposed for solving the Ginzburg-Landau equation with multiplicative colored noise. The stable cases of solution for choosing time steps and trajectory numbers are discussed.
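Colored noise of the kind driving such equations is commonly generated from an Ornstein-Uhlenbeck process. A minimal sketch using the exact discrete OU update (this illustrates only the noise generation, not the authors' Ginzburg-Landau algorithm; parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

def colored_noise(tau, D, dt, n_steps):
    """Ornstein-Uhlenbeck colored noise with <eta(t)eta(s)> = (D/tau) exp(-|t-s|/tau).

    Uses the exact discrete-time update, so the recursion is stable for any dt.
    """
    rho = np.exp(-dt / tau)
    sd = np.sqrt((D / tau) * (1.0 - rho**2))
    eta = np.empty(n_steps)
    eta[0] = rng.normal(0.0, np.sqrt(D / tau))  # start in the stationary state
    for i in range(1, n_steps):
        eta[i] = rho * eta[i - 1] + sd * rng.normal()
    return eta

eta = colored_noise(tau=0.5, D=1.0, dt=0.01, n_steps=100_000)
var = eta.var()  # stationary variance should approach D/tau = 2.0
```

The generated sequence would then multiply the field in the stochastic term of the Ginzburg-Landau update at each time step.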
NASA Astrophysics Data System (ADS)
Starke, B.; Koch, M.
2005-12-01
To calibrate and validate tank experiments of macrodispersion in density-dependent flow within a stochastically heterogeneous medium performed in a 10m long, 1.2m high and 0.1m wide Plexiglas tank at the University of Kassel over the last few years, numerous Monte Carlo simulations using the SUTRA density-dependent flow and transport model have been performed. Objective of this ongoing long-term study is the analysis of the effects of the stochastic properties of the porous medium on the steady-state macrodispersion, particularly, the transversal dispersion. The tank experiments have been set up to mimic density dependent flow under hydrodynamically stable conditions (horizontally stratified flow, whereby saltwater is injected horizontally into freshwater in the lower half of the tank). Numerous experiments with saltwater concentrations ranging from c_0 = 250 (fresh water) to c_0 =100000 ppm and three inflow velocities of u = 1,4 and 8 m/day each are carried out for three stochastic, anisotropically packed sand structures with different mean K_g, variance ?2, and horizontal and vertical correlation lengths ?_x, ?_z for the permeability variations. For each flow and transport experiment carried out in one tankpack, a large number of Monte Carlo simulations with stochastic realizations taken from the corresponding statistical family (with predefined K_g, ?2, ?_x, ?_z) are simulated under steady-state conditions. From moment analyses and laterals widths of the simulated saltwater plume, variances ?_D2 of lateral dispersion are calculated as a function of horizontal distance x from the tank inlet. Using simple square root regression analysis of ?_D2(x), an expectation value for the transversal dispersivity E(A_T) is then computed which should be representative for the particular medium family and the given flow conditions. One issue of particular interest concerns the number N of Monte Carlo simulations reqired to get an asymptotically stable value E(?_D2) or E(A_T). 
Although this number depends essentially on the variance ?2 of the heterogeneous medium, increasing with the latter, we find out that N = O(100), i.e. an order of magnitude less than what has been found in previously published Monte Carlo simulations of tracer-type macrodispersion in stochastically heterogeneous media. As for the physics of the macrodispersion process retrieved from both the experiments and the Monte Carlo simulations, we find reasonable agreement that, as expected, deterioriates somewhat as the density contrast and the variance of the permeability distribution of the porpus medium increase. Another aspect that will be discussed in detail is the different degree of sensitivity of the lateral macrodispersion to the various parameters describing the flow and the porous medium.
NASA Astrophysics Data System (ADS)
Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.
2014-06-01
Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings, where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
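Of the fundamentals listed above, random sampling is the most ubiquitous; a one-function sketch of inverse-transform sampling of a free-flight distance (a standard textbook example, not taken from the lecture notes themselves):

```python
import math
import random

rng = random.Random(11)

def sample_free_flight(sigma_t):
    """Inverse-transform sampling of a free-flight distance.

    pdf f(s) = sigma_t * exp(-sigma_t * s); CDF F(s) = 1 - exp(-sigma_t * s).
    Setting F(s) = xi with xi uniform on (0, 1] and inverting gives
    s = -ln(xi) / sigma_t.
    """
    xi = 1.0 - rng.random()          # uniform on (0, 1], avoids log(0)
    return -math.log(xi) / sigma_t

samples = [sample_free_flight(2.0) for _ in range(200_000)]
mean = sum(samples) / len(samples)   # should approach the mean free path 1/sigma_t = 0.5
```

The same inversion pattern (uniform deviate in, physical variable out) underlies collision-type selection and angular sampling throughout transport Monte Carlo.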
Semi-stochastic full configuration interaction quantum Monte Carlo: developments and application
Blunt, N. S.; Smart, Simon D.; Kersten, J. A. F.; Spencer, J. S.; Booth, George H.; Alavi, Ali
2015-05-14
deterministic FCI approaches. While many traditional projector QMC methods such as diffusion Monte Carlo (DMC) sample the wave function in real space, FCIQMC performs the sampling in a space of discrete basis states. This discrete sampling of the wave function... -up for the chromium dimer (bond length 1.5 Å, SV basis, CAS(24,30)) from 24 to 1152 cores on ARCHER, a Cray XC30. This system has a Hilbert space size of ≈ O(10^14), and approximately 2 × 10^8 walkers were used in each simulation (sufficient to converge the initiator...
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling
NASA Astrophysics Data System (ADS)
Aslam, Kamran
This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match, and single-elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the varying importance of points in a match, and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability, along with a realistic, fair, and mathematically sound platform for ranking players.
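The game-level building block of such simulations can be sketched under the baseline iid point model: simulate games point by point and compare against the standard closed-form game-winning probability (the expression below is the well-known iid game formula; the dissertation's non-iid extensions are not reproduced):

```python
import random

rng = random.Random(5)

def play_game(p):
    """Simulate one game: the server wins each point independently with probability p."""
    a = b = 0
    while True:
        if rng.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:   # win by two from at least four points
            return True
        if b >= 4 and b - a >= 2:
            return False

def game_win_prob(p):
    """Closed-form probability of winning a game from point probability p:
    wins to love/15/30 plus the deuce branch 20*p^5*q^3/(1 - 2pq)."""
    q = 1.0 - p
    return p**4 * (1 + 4 * q + 10 * q**2) + 20 * p**5 * q**3 / (1 - 2 * p * q)

p = 0.6
mc = sum(play_game(p) for _ in range(100_000)) / 100_000
exact = game_win_prob(p)
```

Agreement between the simulated and closed-form values validates the point-level simulator before non-iid effects (e.g. point-importance weighting) are layered on.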
NASA Astrophysics Data System (ADS)
Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan
2015-10-01
Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas with high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most methods proposed before, we adopt a generative model and carry out the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. The reason for this strategy is that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of intensity, interferometric phase and coherence of each region are explored respectively, and are included as region terms. Roofs are not directly considered, as they are mixed with walls into the layover area in most cases. When estimating the similarity between the building hypothesis and the real data, the prior, the region terms, together with the edge term related to the contours of layover and corner line, are taken into consideration. In the optimization step, in order to achieve convergent reconstruction outputs and avoid local extrema, special transition kernels are designed. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.
Comparative Monte Carlo efficiency by Monte Carlo analysis.
Rubenstein, B M; Gubernatis, J E; Doll, J D
2010-09-01
We propose a modified power method for computing the subdominant eigenvalue λ_2 of a matrix or continuous operator. While useful both deterministically and stochastically, we focus on defining simple Monte Carlo methods for its application. The methods presented use random walkers of mixed signs to represent the subdominant eigenfunction. Accordingly, the methods must cancel these signs properly in order to sample this eigenfunction faithfully. We present a simple procedure to solve this sign problem and then test our Monte Carlo methods by computing λ_2 of various Markov chain transition matrices. As |λ_2| of this matrix controls the rate at which Monte Carlo sampling relaxes to a stationary condition, its computation also enabled us to compare efficiencies of several Monte Carlo algorithms as applied to two quite different types of problems. We first computed λ_2 for several one- and two-dimensional Ising models, which have a discrete phase space, and compared the relative efficiencies of the Metropolis and heat-bath algorithms as functions of temperature and applied magnetic field. Next, we computed λ_2 for a model of an interacting gas trapped by a harmonic potential, which has a multidimensional continuous phase space, and studied the efficiency of the Metropolis algorithm as a function of temperature and the maximum allowable step size Δ. Based on the λ_2 criterion, we found for the Ising models that small lattices appear to give an adequate picture of comparative efficiency and that the heat-bath algorithm is more efficient than the Metropolis algorithm only at low temperatures, where both algorithms are inefficient. For the harmonic trap problem, we found that the traditional rule of thumb of adjusting Δ so that the Metropolis acceptance rate is around 50% is often suboptimal. In general, as a function of temperature or Δ, λ_2 for this model displayed trends defining optimal efficiency that the acceptance ratio does not.
The cases studied also suggested that Monte Carlo simulations for a continuum model are likely more efficient than those for a discretized version of the model. PMID:21230207
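A deterministic deflation sketch of the underlying idea (the stochastic walker version with sign cancellation is not reproduced here): for a row-stochastic matrix the dominant right eigenvector is all-ones, so repeatedly removing the mean from the iterate isolates the subdominant direction, and the Rayleigh quotient of any mean-zero iterate recovers λ_2 exactly.

```python
import numpy as np

def second_eigenvalue(P, n_iter=500):
    """Power method with deflation for the subdominant eigenvalue of a
    row-stochastic matrix P (dominant pair: eigenvalue 1, all-ones vector)."""
    n = P.shape[0]
    v = np.arange(1.0, n + 1.0)
    v -= v.mean()                       # remove the all-ones (dominant) component
    for _ in range(n_iter):
        v = P @ v
        v -= v.mean()                   # re-deflate each step for stability
        v /= np.linalg.norm(v)
    return (v @ (P @ v)) / (v @ v)      # Rayleigh quotient of a mean-zero vector

# two-state chain with known spectrum {1, 1 - a - b}
a, b = 0.3, 0.2
P = np.array([[1 - a, a], [b, 1 - b]])
lam2 = second_eigenvalue(P)             # should approach 1 - a - b = 0.5
```

Since |λ_2| sets the relaxation rate of the chain, this quantity is exactly the efficiency criterion the abstract uses to rank sampling algorithms.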
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
DYNAMICAL ANALYSIS OF LOW TEMPERATURE MONTE CARLO CLUSTER ALGORITHMS
Martinelli, Fabio
...greatly hampered Monte Carlo simulations of critical phenomena in ferromagnetic systems of statistical mechanics like plane rotators [9] or completely frustrated systems [10]. This type of stochastic algorithms...
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo; Moroni, Saverio
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
Isotropic Monte Carlo Grain Growth
Energy Science and Technology Software Center (ESTSC)
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
Markov Chain Monte Carlo Usher's Algorithm
Universität Bremen
Markov Chain Monte Carlo for Parameter Optimization, Holger Schultheis, 04.11.2014. Lecture outline: concepts; Markov chain Monte Carlo basics, with a Metropolis and simulated annealing example; Usher's algorithm.
Bieda, Bogusław
2014-05-15
The purpose of the paper is to present the results of application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) data of Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP was analyzed and used for MC simulation of the LCI model. In order to describe random nature of all main products used in this study, normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in the steel production management. PMID:24290145
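The structure of such an MC-based LCI uncertainty propagation can be sketched without proprietary tools: draw each inventory item from its assumed normal distribution, combine, and summarize the resulting frequency distribution. All quantities and emission factors below are hypothetical placeholders, not MSP data:

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical inventory: (mean annual production, assumed relative std), normal model
inventory = {
    "steel_t":    (5.0e6, 0.05),
    "coke_t":     (1.2e6, 0.08),
    "pig_iron_t": (4.0e6, 0.06),
}
# hypothetical emission factors, t CO2 per t product
emission_factor = {"steel_t": 1.8, "coke_t": 0.5, "pig_iron_t": 1.4}

n_trials = 10_000                      # same trial count as in the study
total = np.zeros(n_trials)
for item, (mean, rel_sd) in inventory.items():
    total += emission_factor[item] * rng.normal(mean, rel_sd * mean, n_trials)

# analogue of the frequency chart / statistical report: summary percentiles
report = {
    "mean": total.mean(),
    "p5": np.percentile(total, 5),
    "p95": np.percentile(total, 95),
}
```

Replacing the normal assumption with item-specific distributions (lognormal, triangular) is a one-line change per item, which is what makes the stochastic approach easy to transfer to other plants.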
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Michael H. Seymour
2010-08-17
I review the status of the general-purpose Monte Carlo event generators for the LHC, with emphasis on areas of recent physics developments. There has been great progress, especially in multi-jet simulation, but I mention some question marks that have recently arisen.
Monte Carlo calculations of nuclei
Pieper, S.C.
1997-10-01
Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.
Is Monte Carlo embarrassingly parallel?
Hoogenboom, J. E.
2012-07-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
A Monte Carlo approach to water management
NASA Astrophysics Data System (ADS)
Koutsoyiannis, D.
2012-04-01
Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that the obtained results may be irrelevant to real-world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such a representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization.
The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs performing a variety of functions, (b) the water resource system of Athens comprising four reservoirs and many aqueducts, and (c) a human-modified inadequately measured basin in which the parameter fitting of a hydrological model is sought.
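The two nested Monte Carlo loops described in this abstract (stochastic simulation of the hydrosystem inside, stochastic optimization of control variables outside) can be sketched in a toy form. The single-reservoir model, all numbers, and the function names below are invented for illustration; they are not the author's hydrosystem representation.

```python
import random

def simulate_reliability(release, n_years=500, capacity=100.0, demand=30.0, seed=0):
    """Inner Monte Carlo loop: generate synthetic stochastic inflows, route them
    through a single reservoir under a constant target release (the control
    variable), and return the fraction of years in which demand is fully met."""
    rng = random.Random(seed)
    storage, met = 50.0, 0
    for _ in range(n_years):
        inflow = rng.lognormvariate(3.3, 0.5)      # synthetic inflow, mean ~30 units/yr
        storage = min(capacity, storage + inflow)  # spill anything above capacity
        supplied = min(storage, release)
        storage -= supplied
        if supplied >= demand:
            met += 1
    return met / n_years

# Outer loop: evaluate candidate control values with common random numbers
# (same seed) and keep the most reliable one.
candidates = [20.0, 25.0, 30.0, 35.0, 40.0]
best = max((simulate_reliability(r), r) for r in candidates)
```

The same structure scales to many reservoirs and richer release rules; only the inner simulator changes.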
Markov Chain Monte Carlo and Gibbs Sampling
Walsh, Bruce
Appendix 3: Markov Chain Monte Carlo and Gibbs Sampling. "Far better an approximate answer …" This appendix surveys the development of Markov chain Monte Carlo (MCMC) methods, which use the previous sample value to randomly generate the next sample value, generating a Markov chain.
Applications of Monte Carlo Methods in Calculus.
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
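The Riemann-sum application mentioned in this abstract amounts to estimating an integral by averaging the integrand at uniform random points. A minimal sketch (the function name is invented):

```python
import random

def mc_integrate(f, a, b, n, seed=None):
    """Monte Carlo analogue of a Riemann sum: estimate the integral of f
    over [a, b] as (b - a) times the mean of f at n uniform random points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(a + (b - a) * rng.random())
    return (b - a) * total / n

# Example: the integral of x^2 on [0, 1] is exactly 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000, seed=1)
```

The standard error shrinks as 1/sqrt(n), which makes the convergence easy to demonstrate in a classroom setting.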
Danon, Yaron
Two-Dimensional Binary Stochastic Mixture. Timothy J. Donovan, Lockheed Martin Corporation, P.O. Box 1072, Schenectady, New York … through the boundaries of a two-dimensional binary stochastic material. The mixture is specified within … (CLS) to eliminate the need to explicitly model the geometry of the mixture. Two variations …
Monte Carlo Experiments: Design and Implementation.
ERIC Educational Resources Information Center
Paxton, Pamela; Curran, Patrick J.; Bollen, Kenneth A.; Kirby, Jim; Chen, Feinian
2001-01-01
Illustrates the design and planning of Monte Carlo simulations, presenting nine steps in planning and performing a Monte Carlo analysis from developing a theoretically derived question of interest through summarizing the results. Uses a Monte Carlo simulation to illustrate many of the relevant points. (SLD)
MARKOV CHAIN MONTE CARLO MATTHEW JOSEPH
May, J. Peter
MARKOV CHAIN MONTE CARLO. MATTHEW JOSEPH. Abstract. Markov chain Monte Carlo is an umbrella term for algorithms that use Markov chains to sample from a given probability distribution. This paper is a brief examination of Markov chain Monte Carlo and its usage. We begin by discussing Markov chains and the ergodicity …
Parallel Monte Carlo Simulation for control system design
NASA Technical Reports Server (NTRS)
Schubert, Wolfgang M.
1995-01-01
The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.
Fulger, Daniel; Scalas, Enrico; Germano, Guido
2008-02-01
We present a numerical method for the Monte Carlo simulation of uncoupled continuous-time random walks with a Lévy α-stable distribution of jumps in space and a Mittag-Leffler distribution of waiting times, and apply it to the stochastic solution of the Cauchy problem for a partial differential equation with fractional derivatives both in space and in time. The one-parameter Mittag-Leffler function is the natural survival probability leading to time-fractional diffusion equations. Transformation methods for Mittag-Leffler random variables were found later than the well-known transformation method by Chambers, Mallows, and Stuck for Lévy α-stable random variables and so far have not received as much attention; nor have they been used together with the latter in spite of their mathematical relationship due to the geometric stability of the Mittag-Leffler distribution. Combining the two methods, we obtain an accurate approximation of space- and time-fractional diffusion processes almost as easy and fast to compute as for standard diffusion processes. PMID:18352002
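As a hedged illustration of the waiting-time half of such a scheme, the sketch below uses the transformation formula for Mittag-Leffler random variables reported in the literature, tau = -gamma * ln(u) * (sin(a*pi)/tan(a*pi*v) - cos(a*pi))**(1/a) with u, v independent uniforms; treat the exact form as an assumption to be checked against the paper rather than a quotation of it.

```python
import math, random

def ml_waiting_time(alpha, gamma_t, rng):
    """Transformation-method sample of a Mittag-Leffler waiting time
    (formula as reported in the literature; treat as an assumption):
    tau = -gamma_t * ln(u) * (sin(a)/tan(a*v) - cos(a))**(1/alpha),
    with a = alpha*pi and u, v independent uniforms on (0, 1]."""
    u, v = 1.0 - rng.random(), 1.0 - rng.random()   # shift into (0, 1]
    a = alpha * math.pi
    base = math.sin(a) / math.tan(a * v) - math.cos(a)
    return -gamma_t * math.log(u) * base ** (1.0 / alpha)

rng = random.Random(42)
samples = [ml_waiting_time(0.9, 1.0, rng) for _ in range(10_000)]
```

For alpha -> 1 the distribution reduces to the exponential; for alpha < 1 it is heavy-tailed, which is what produces subdiffusive (time-fractional) behavior.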
Khromov, K. Yu.; Vaks, V. G. Zhuravlev, I. A.
2013-02-15
The previously developed ab initio model and the kinetic Monte Carlo method (KMCM) are used to simulate precipitation in a number of iron-copper alloys with different copper concentrations x and temperatures T. The same simulations are also made using an improved version of the previously suggested stochastic statistical method (SSM). The results obtained enable us to draw a number of general conclusions about the dependence of the decomposition kinetics in Fe-Cu alloys on x and T. We also show that the SSM usually describes the precipitation kinetics in good agreement with the KMCM, and that using the SSM in conjunction with the KMCM allows the KMC simulations to be extended to longer evolution times. The results of the simulations seem to agree with available experimental data for Fe-Cu alloys within the statistical errors of the simulations and the scatter of the experimental results. Comparison of simulation results with experiments for some multicomponent Fe-Cu-based alloys allows us to draw certain conclusions about the influence of alloying elements in these alloys on the precipitation kinetics at different stages of evolution.
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged-particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged-particle transport through heterogeneous materials.
Womersley, J. (Dept. of Physics)
1992-10-01
The D0 detector at the Fermilab Tevatron began its first data-taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.
Parallel Monte Carlo reactor neutronics
Blomquist, R.N.; Brown, F.B.
1994-03-01
The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved.
Chen, Jinsong
Estimating reservoir parameters from seismic and electromagnetic data using stochastic rock … and pore pressure in reservoirs using seismic and electromagnetic (EM) data. Within the Bayesian framework, unknown reservoir parameters at each pixel in space are considered as random variables and the …
Compressible generalized hybrid Monte Carlo.
Fang, Youhan; Sanz-Serna, J M; Skeel, Robert D
2014-05-01
One of the most demanding calculations is to generate random samples from a specified probability distribution (usually with an unknown normalizing prefactor) in a high-dimensional configuration space. One often has to resort to using a Markov chain Monte Carlo method, which converges only in the limit to the prescribed distribution. Such methods typically inch through configuration space step by step, with acceptance of a step based on a Metropolis(-Hastings) criterion. An acceptance rate of 100% is possible in principle by embedding configuration space in a higher dimensional phase space and using ordinary differential equations. In practice, numerical integrators must be used, lowering the acceptance rate. This is the essence of hybrid Monte Carlo methods. Presented is a general framework for constructing such methods under relaxed conditions: the only geometric property needed is (weakened) reversibility; volume preservation is not needed. The possibilities are illustrated by deriving a couple of explicit hybrid Monte Carlo methods, one based on barrier-lowering variable-metric dynamics and another based on isokinetic dynamics. PMID:24811626
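A minimal sketch of the classical hybrid Monte Carlo step that this abstract builds on (leapfrog integration plus a Metropolis test on the energy error), here for a one-dimensional standard normal target. This is the conventional HMC baseline, not the compressible generalized variant proposed in the paper:

```python
import math, random

def hmc_step(x, eps, n_leap, rng):
    """One hybrid Monte Carlo step for the target density exp(-x^2/2):
    draw a Gaussian momentum, run a leapfrog trajectory for the
    Hamiltonian H(x, p) = x^2/2 + p^2/2, and accept or reject with a
    Metropolis test on the discretization-induced energy error dH."""
    p = rng.gauss(0.0, 1.0)
    x_new, p_new = x, p
    p_new -= 0.5 * eps * x_new                       # initial momentum half step (grad U = x)
    for i in range(n_leap):
        x_new += eps * p_new                         # full position step
        p_new -= (eps if i < n_leap - 1 else 0.5 * eps) * x_new
    dH = (x_new ** 2 + p_new ** 2 - x ** 2 - p ** 2) / 2.0
    return x_new if dH <= 0.0 or rng.random() < math.exp(-dH) else x

rng = random.Random(0)
x, chain = 0.0, []
for _ in range(20_000):
    x = hmc_step(x, 0.3, 10, rng)
    chain.append(x)
```

Because the accept test corrects the integrator's energy error exactly, the chain samples the target distribution regardless of the step size, at the cost of a lower acceptance rate for coarser steps, which is the trade-off the abstract describes.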
An Introduction to Multilevel Monte Carlo for Option Valuation
Higham, Desmond J
2015-01-01
Monte Carlo is a simple and flexible tool that is widely used in computational finance. In this context, it is common for the quantity of interest to be the expected value of a random variable defined via a stochastic differential equation. In 2008, Giles proposed a remarkable improvement to the approach of discretizing with a numerical method and applying standard Monte Carlo. His multilevel Monte Carlo method offers a speed-up of order 1/epsilon, where epsilon is the required accuracy. So computations can run 100 times more quickly when two digits of accuracy are required. The multilevel philosophy has since been adopted by a range of researchers and a wealth of practically significant results has arisen, most of which have yet to make their way into the expository literature. In this work, we give a brief, accessible introduction to multilevel Monte Carlo and summarize recent results applicable to the task of option valuation.
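The telescoping-sum idea behind multilevel Monte Carlo can be sketched for the simplest possible payoff: the asset value itself under geometric Brownian motion with Euler discretization, so the exact answer S0*exp(r*T) is known. All parameters, sample counts, and names below are invented for illustration and are not taken from the article:

```python
import math, random

def mlmc_level(l, n_samples, rng, s0=1.0, r=0.05, sigma=0.2, T=1.0, m=2):
    """Estimate E[P_l - P_{l-1}] for the payoff P = S_T of a geometric
    Brownian motion, discretized by Euler with m**l steps at level l.
    The coarse path (level l-1) reuses the fine path's Brownian
    increments, which is what keeps the correction variance small."""
    total = 0.0
    nf = m ** l                                    # fine-level step count
    hf = T / nf
    for _ in range(n_samples):
        sf = sc = s0
        dw_coarse = 0.0
        for step in range(nf):
            dw = rng.gauss(0.0, math.sqrt(hf))
            sf += r * sf * hf + sigma * sf * dw
            dw_coarse += dw
            if l > 0 and (step + 1) % m == 0:      # one coarse step per m fine steps
                sc += r * sc * (m * hf) + sigma * sc * dw_coarse
                dw_coarse = 0.0
        total += sf - (sc if l > 0 else 0.0)       # level 0 has no coarse partner
    return total / n_samples

rng = random.Random(7)
# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]: many samples on
# the cheap coarse levels, few on the expensive fine ones.
estimate = sum(mlmc_level(l, 20_000 // (2 ** l) + 1000, rng) for l in range(5))
```

The decaying sample counts across levels are the source of the speed-up: most of the variance is killed cheaply at level 0, while the fine levels only need to resolve small corrections.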
Present status of vectorized Monte Carlo
Brown, F.B.
1987-01-01
Monte Carlo applications have traditionally been limited by the large amounts of computer time required to produce acceptably small statistical uncertainties, so the immediate benefit of vectorization is an increase in either the number of jobs completed or the number of particles processed per job, typically by one order of magnitude or more. This results directly in improved engineering design analyses, since Monte Carlo methods are used as standards for correcting more approximate methods. The relatively small number of vectorized programs is a consequence of the newness of vectorized Monte Carlo, the difficulties of nonportability, and the very large development effort required to rewrite or restructure Monte Carlo codes for vectorization. Based on the successful efforts to date, it may be concluded that Monte Carlo vectorization will spread to increasing numbers of codes and applications. The possibility of multitasking provides even further motivation for vectorizing Monte Carlo, since the step from vector to multitasked vector is relatively straightforward.
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
T. EVANS; ET AL
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.
1990-01-01
Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
Single scatter electron Monte Carlo
Svatos, M.M.
1997-03-01
A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.
Monte Carlo surface flux tallies
Favorite, Jeffrey A
2010-11-19
Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
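A toy model of the tally discussed above: for an angular flux constant in mu, surface-crossing cosines have density 2*mu, the raw 1/mu score is unbiased but has infinite variance, and replacing grazing scores by division by half the cutoff preserves the exact answer (2 per unit current) while taming the variance. This is a self-contained illustration, not the MCNP implementation:

```python
import math, random

def flux_tally(n, cutoff, rng):
    """Toy surface flux tally. For an angular flux constant in mu, crossing
    cosines have pdf p(mu) = 2*mu on (0, 1]. The raw score 1/mu has
    expected value 2 per unit current but infinite variance; grazing
    crossings (mu < cutoff) instead score 2/cutoff, i.e. division by
    half the cosine cutoff, which leaves the expectation at exactly 2."""
    total = 0.0
    for _ in range(n):
        mu = math.sqrt(rng.random())               # inverse CDF of p(mu) = 2*mu
        total += (1.0 / mu) if mu >= cutoff else (2.0 / cutoff)
    return total / n

estimate = flux_tally(200_000, 0.1, random.Random(3))
```

The abstract's point is that this expectation-preserving property relies on a symmetric grazing band and a linear angular flux; in one-sided tallies with other flux shapes, a different substitute value (e.g. two-thirds of the cutoff) can be less biased.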
Density-matrix quantum Monte Carlo method
NASA Astrophysics Data System (ADS)
Blunt, N. S.; Rogers, T. W.; Spencer, J. S.; Foulkes, W. M. C.
2014-06-01
We present a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system at finite temperature. This allows arbitrary reduced density matrix elements and expectation values of complicated nonlocal observables to be evaluated easily. The method resembles full configuration interaction quantum Monte Carlo but works in the space of many-particle operators instead of the space of many-particle wave functions. One simulation provides the density matrix at all temperatures simultaneously, from T = ∞ to T = 0, allowing the temperature dependence of expectation values to be studied. The direct sampling of the density matrix also allows the calculation of some previously inaccessible entanglement measures. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices, the concurrence of one-dimensional spin rings, and the Rényi S2 entanglement entropy of various sublattices of the 6×6 Heisenberg model are calculated. The nature of the sign problem in the method is also investigated.
Summarizing Monte Carlo Results in Methodological Research.
ERIC Educational Resources Information Center
Harwell, Michael R.
Monte Carlo studies of statistical tests are prominently featured in the methodological research literature. Unfortunately, the information from these studies does not appear to have significantly influenced methodological practice in educational and psychological research. One reason is that Monte Carlo studies lack an overarching theory to guide…
Fission Matrix Capability for MCNP Monte Carlo
Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.
2012-09-05
In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP [1], addresses these problems. When a Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretizations of the kernel in energy, space, and direction. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we use the random walk information not only to build the next-iteration fission source, but also to tally a spatially averaged fission kernel. Just as in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible.
As a consequence of discretization we get a spatially low-order kernel, the fundamental eigenvector of which should converge faster than that of the continuous kernel. We can then redistribute the fission bank to match the fundamental fission matrix eigenvector, effectively eliminating all higher modes. For all computations here biasing is not used, with the intention of comparing the unaltered, conventional Monte Carlo process with the fission matrix results. The source convergence of standard Monte Carlo criticality calculations is, to some extent, always subject to the characteristics of the problem. This method seeks to partially eliminate this problem-dependence by directly calculating the spatial coupling. The primary cost of this, which has prevented widespread use since its inception [2,3,4], is the extra storage required. To account for the coupling of all N spatial regions to every other region requires storing N{sup 2} values. For realistic problems, where a fine resolution is required for the suppression of discretization error, the storage becomes inordinate. Two factors lead to a renewed interest here: the larger memory available on modern computers and the development of a better storage scheme based on physical intuition. When the distance between source and fission events is short compared with the size of the entire system, saving memory by accounting for only local coupling introduces little extra error. We can gain other information from directly tallying the fission kernel: higher eigenmodes and eigenvalues. Conventional Monte Carlo cannot calculate this data; here we have a way to get new information for multiplying systems. In Ref. [5], higher mode eigenfunctions are analyzed for a three-region 1-dimensional problem and a 2-dimensional homogeneous problem. We analyze higher modes for more realistic problems.
There is also the question of practical use of this information; here we examine a way of using eigenmode information to address the negative confidence interval bias due to inter-cycle correlation. We apply this method mainly to four problems: 2D pressurized water reactor (PWR) [6],
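The linear-algebra core that the fission matrix makes explicit is an ordinary power iteration for the fundamental eigenpair. A self-contained sketch on an invented 3-region matrix (symmetric, nearest-neighbor coupling only, with exact fundamental eigenvalue 1 + 0.3*sqrt(2)); this stands in for a matrix that would in practice be tallied during the random walk:

```python
def fundamental_mode(F, n_iter=500):
    """Power iteration for the fundamental eigenpair of a spatially
    discretized fission matrix F, where F[i][j] is the expected number
    of next-generation fission neutrons born in region i per fission
    neutron born in region j. Returns (k_eff, normalized source)."""
    n = len(F)
    s = [1.0 / n] * n
    k = 0.0
    for _ in range(n_iter):
        s_new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(s_new)                  # eigenvalue estimate (s sums to 1)
        s = [x / k for x in s_new]      # renormalize the fission source
    return k, s

# Invented 3-region coupling matrix for illustration.
F = [[1.0, 0.3, 0.0],
     [0.3, 1.0, 0.3],
     [0.0, 0.3, 1.0]]
k_eff, source = fundamental_mode(F)
```

Redistributing the fission bank to match `source` is what suppresses the higher modes; deflating F after this step would expose those higher eigenpairs directly, which is the extra information the abstract refers to.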
Importance iteration in MORSE Monte Carlo calculations
Kloosterman, J.L.; Hoogenboom, J.E. (Interfaculty Reactor Institute)
1994-05-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation.
Monte Carlo techniques in statistical physics
NASA Astrophysics Data System (ADS)
Murthy, K. P. N.
2006-11-01
In this paper we briefly review a few Markov chain Monte Carlo methods for simulating closed systems described by canonical ensembles. We cover both Boltzmann and non-Boltzmann sampling techniques. The Metropolis algorithm is a typical example of a Boltzmann Monte Carlo method. We discuss the time-symmetry of the Markov chain generated by Metropolis-like algorithms that obey detailed balance. The non-Boltzmann Monte Carlo techniques reviewed include multicanonical and Wang-Landau sampling. We list what we consider as milestones in the historical development of Monte Carlo methods in statistical physics. We dedicate this article to Prof. Dr. G. Ananthakrishna and wish him the very best in the coming years.
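The Metropolis algorithm mentioned in this abstract can be stated in a few lines. The sketch below samples the Boltzmann weight exp(-E(x)) with E(x) = x^2/2 (a standard normal target), accepting trial moves with probability min(1, exp(-dE)), which is what enforces detailed balance:

```python
import math, random

def metropolis_chain(n_steps, step, rng):
    """Random-walk Metropolis sampling of the Boltzmann distribution
    exp(-E(x)) with E(x) = x^2/2, i.e. a standard normal target.
    Accepting a trial move with probability min(1, exp(-dE)) satisfies
    detailed balance with respect to exp(-E)."""
    x, out = 0.0, []
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)           # symmetric proposal
        dE = (y * y - x * x) / 2.0
        if dE <= 0.0 or rng.random() < math.exp(-dE):
            x = y
        out.append(x)                              # rejected moves repeat x
    return out

chain = metropolis_chain(100_000, 2.0, random.Random(5))
```

Note that the current state is counted again on rejection; dropping rejected repeats would bias the sampled distribution.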
A general method for debiasing a Monte Carlo estimator
McLeish, Don
2010-01-01
Consider a process, stochastic or deterministic, obtained by using a numerical integration scheme, or from Monte Carlo methods involving an approximation to an integral, or a Newton-Raphson iteration to approximate the root of an equation. We will assume that we can sample from the distribution of the process from time 0 to finite time n. We propose a scheme for unbiased estimation of the limiting value of the process, together with estimates of standard error, and apply this to examples including numerical integrals, root-finding and option pricing in a Heston stochastic volatility model. This results in unbiased estimators in place of biased ones in many potential applications.
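One standard single-term version of such a debiasing scheme uses a randomized truncation level; the trapezoid sequence and geometric level distribution below are invented stand-ins for illustration, not the author's examples:

```python
import math, random

def a(n):
    """Trapezoid approximation of the integral of exp(x) on [0, 1] with
    2**n panels; a(n) converges to e - 1 as n grows."""
    m = 2 ** n
    h = 1.0 / m
    return h * (0.5 + 0.5 * math.e + sum(math.exp(i * h) for i in range(1, m)))

def debiased_estimate(rng, q=0.5):
    """Single-term randomized-truncation estimator: draw a level N with
    P(N >= n) = q**n, then return sum over n <= N of
    (a(n) - a(n-1)) / P(N >= n), with a(-1) taken as 0. Its expectation
    is exactly lim a(n), so the fixed-truncation bias is removed."""
    n_max = 0
    while rng.random() < q:                 # geometric level: N >= n+1 w.p. q
        n_max += 1
    total = a(0)                            # n = 0 term, P(N >= 0) = 1
    for n in range(1, n_max + 1):
        total += (a(n) - a(n - 1)) / q ** n
    return total

rng = random.Random(9)
est = sum(debiased_estimate(rng) for _ in range(4000)) / 4000
```

The choice of q trades variance against expected cost: q must decay slowly enough that the reweighted corrections have finite variance, but fast enough that the expected work per replicate stays bounded.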
Monte-Carlo Go Reinforcement Learning Experiments Bruno Bouzy
Bouzy, Bruno
Monte-Carlo Go Reinforcement Learning Experiments. Bruno Bouzy, Université René Descartes, UFR de … during simulations performed in a Monte-Carlo Go architecture. Currently, Monte-Carlo is a popular technique for computer Go. In a previous study, Monte-Carlo was associated with domain-dependent knowledge …
NASA Astrophysics Data System (ADS)
Morales-Casique, E.; Briseño-Ruiz, J. V.; Hernández, A. F.; Herrera, G. S.; Escolero-Fuentes, O.
2014-12-01
We present a comparison of three stochastic approaches for estimating log hydraulic conductivity (Y) and predicting steady-state groundwater flow. Two of the approaches are based on the data assimilation technique known as the ensemble Kalman filter (EnKF) and differ in the way prior statistical moment estimates (PSME) (required to build the Kalman gain matrix) are obtained. In the first approach, the Monte Carlo method is employed to compute PSME of the variables and parameters; we denote this approach by EnKFMC. In the second approach PSME are computed through the direct solution of approximate nonlocal (integrodifferential) equations that govern the spatial conditional ensemble means (statistical expectations) and covariances of hydraulic head (h) and fluxes; we denote this approach by EnKFME. The third approach consists of geostatistical stochastic inversion of the same nonlocal moment equations; we denote this approach by IME. In addition to testing the EnKFMC and EnKFME methods in the traditional manner that estimates Y over the entire grid, we propose novel corresponding algorithms that estimate Y at a few selected locations and then interpolate over all grid elements via kriging, as done in the IME method. We tested these methods to estimate Y and h in steady-state groundwater flow in a synthetic two-dimensional domain with a well pumping at a constant rate, located at the center of the domain. In addition, to evaluate the performance of the estimation methods, we generated four different unconditional realizations that served as "true" fields. The results of our numerical experiments indicate that the three methods were effective in estimating h, reaching at least 80% predictive coverage, although both EnKF methods were superior to the IME method. With respect to estimating Y, the three methods reached similar accuracy in terms of the mean absolute error.
Coupling the EnKF methods with kriging to estimate Y reduces to one fourth the CPU time required for data assimilation while both estimation accuracy and uncertainty do not deteriorate significantly.
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by generalized hybrid Monte Carlo improve the stability of MTS and allow larger step sizes in the simulation of complex systems.
Monte Carlo and Molecular Dynamics Tools 3. Monte Carlo techniques for time evolution
Sjöstrand, Torbjörn
Consider the probability P(t) for a particle to decay at time t. Naively P(t) = c, so that N(t) = 1 - ct. Wrong! Conservation of probability is driven by depletion: dN/dt = -c N(t), giving N(t) = exp(-ct).
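The depletion argument in these lecture notes can be checked numerically. The sketch below (decay constant, step size, and sample size are illustrative choices, not values from the lecture) compares the naive linear answer with the exponential that depletion produces:

```python
import math
import random

def simulate_decay(n0=20_000, c=1.0, dt=0.01, t_max=2.0, seed=1):
    """Each surviving particle decays with probability c*dt in a small
    time step dt; depletion of the survivors drives the exponential."""
    random.seed(seed)
    n = n0
    steps = int(round(t_max / dt))
    for _ in range(steps):
        n -= sum(1 for _ in range(n) if random.random() < c * dt)
    return n / n0

survival = simulate_decay()
naive = 1 - 1.0 * 2.0         # naive N(t)/N0 = 1 - c*t: already negative here
exact = math.exp(-1.0 * 2.0)  # depletion: dN/dt = -c*N  =>  N(t)/N0 = e^(-c*t)
print(f"simulated {survival:.3f}, exact {exact:.3f}, naive {naive:.1f}")
```

The simulated survival fraction tracks exp(-ct), while the naive linear answer has long since gone negative.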
Novel Quantum Monte Carlo Approaches for Quantum Liquids
NASA Astrophysics Data System (ADS)
Rubenstein, Brenda M.
Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized directly. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures.
While the eventual hope is to apply this algorithm to the exploration of yet unidentified high-pressure, low-temperature phases of hydrogen, I employ this algorithm to determine whether or not quantum hard spheres can form a low-temperature bcc solid if exchange is not taken into account. In the final chapter of this thesis, I use Path Integral Monte Carlo once again to explore whether glassy para-hydrogen exhibits superfluidity. Physicists have long searched for ways to coax hydrogen into becoming a superfluid. I present evidence that, while glassy hydrogen does not crystallize at the temperatures at which hydrogen might become a superfluid, it nevertheless does not exhibit superfluidity. This is because the average binding energy per p-H2 molecule poses a severe barrier to exchange regardless of whether the system is crystalline. All in all, this work extends the reach of Quantum Monte Carlo methods to new systems and brings the power of existing methods to bear on new problems. Portions of this work have been published in Rubenstein, PRE (2010) and Rubenstein, PRA (2012) [167;169]. Other papers not discussed here published during my Ph.D. include Rubenstein, BPJ (2008) and Rubenstein, PRL (2012) [166;168]. The work in Chapters 6 and 7 is currently unpublished. [166] Brenda M. Rubenstein, Ivan Coluzza, and Mark A. Miller. Controlling the folding and substrate-binding of proteins using polymer brushes. Physical Review Letters, 108(20):208104, May 2012. [167] Brenda M. Rubenstein, J.E. Gubernatis, and J.D. Doll. Comparative monte carlo efficiency by monte carlo analysis. Physical Review E, 82(3):036701, September 2010. [168] Brenda M. Rubenstein and Laura J. Kaufman. The role of extracellular matrix in glioma invasion: A cellular potts model approach. Biophysical Journal, 95(12):5661-- 5680, December 2008. [169] Brenda M. Rubenstein, Shiwei Zhang, and David R. Reichman. Finite-temperature auxiliary-field quantum monte carlo for bose-fermi mixtures. 
Physical Review A, 86(5):053606, November 2012.
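The power-method idea summarized above (shown here in its simple deterministic form rather than the stochastic version developed in the thesis; the 3-state transition matrix is a hypothetical example, not one from the work) can be sketched as:

```python
import math

# Column-stochastic transition matrix of a small, hypothetical Markov chain:
# each column sums to 1, so the dominant eigenvalue is exactly 1.
T = [[0.9, 0.1, 0.0],
     [0.1, 0.8, 0.2],
     [0.0, 0.1, 0.8]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def power_method(A, iters=3000):
    """Repeated multiplication projects out the dominant eigenvector;
    the Rayleigh quotient then recovers the dominant eigenvalue."""
    v = [1.0, 2.0, 3.0]  # arbitrary start with nonzero overlap
    for _ in range(iters):
        w = matvec(A, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Av = matvec(A, v)
    return sum(a * b for a, b in zip(v, Av)), v

lam1, pi = power_method(T)  # lam1 = 1 for a stochastic matrix

# Deflate the leading mode (the left eigenvector of a column-stochastic
# matrix at eigenvalue 1 is the all-ones vector), then iterate again:
s = sum(pi)
T2 = [[T[i][j] - lam1 * pi[i] / s for j in range(3)] for i in range(3)]
lam2, _ = power_method(T2)
print(round(lam1, 4), round(lam2, 4))
```

As the abstract notes, the magnitude of the second eigenvalue is what quantifies how rapidly the chain converges to its equilibrium distribution.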
VERIFICATION OF THE SHIFT MONTE CARLO CODE
Sly, Nicholas; Mervin, Mervin Brenden; Mosher, Scott W; Evans, Thomas M; Wagner, John C; Maldonado, G. Ivan
2012-01-01
Shift is a new hybrid Monte Carlo/deterministic radiation transport code being developed at Oak Ridge National Laboratory. At its current stage of development, Shift includes a fully-functional parallel Monte Carlo capability for simulating eigenvalue and fixed-source multigroup transport problems. This paper focuses on recent efforts to verify Shift's Monte Carlo component using the two-dimensional and three-dimensional C5G7 NEA benchmark problems. Comparisons were made between the benchmark eigenvalues and those output by the Shift code. In addition, mesh-based scalar flux tally results generated by Shift were compared to those obtained using MCNP5 on an identical model and tally grid. The Shift-generated eigenvalues were within three standard deviations of the benchmark and MCNP5 values in all cases. The flux tallies generated by Shift were found to be in very good agreement with those from MCNP.
Quantum speedup of Monte Carlo methods
Montanaro, Ashley
2015-01-01
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
Shell model the Monte Carlo way
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
Nuclear pairing within a configuration-space Monte Carlo approach
NASA Astrophysics Data System (ADS)
Lingle, Mark; Volya, Alexander
2015-06-01
Pairing correlations in nuclei play a decisive role in determining nuclear drip lines, binding energies, and many collective properties. In this work a new configuration-space Monte Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, probabilistic interpretation of quantum-mechanical amplitudes, and ability to handle truly large-scale problems with defined precision and error control are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with nonconstant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and problems when the relevant configuration space is large.
Monte Carlo simulation of launchsite winds at Kennedy Space Center
NASA Technical Reports Server (NTRS)
Queen, Eric M.; Moerder, Daniel D.; Warner, Michael S.
1991-01-01
This paper develops and validates an easily implemented model for simulating random horizontal wind profiles over the Kennedy Space Center (KSC) at Cape Canaveral, Florida. The model is intended for use in Monte Carlo launch vehicle simulations of the type employed in mission planning, where the large number of profiles needed for statistical fidelity of such simulation experiments makes the use of actual wind measurements impractical. The model is based on measurements made at KSC and represents vertical correlations by a decaying exponential model which is parameterized via least-squares parameter optimization against the sample data. The validity of the model is evaluated by comparing two Monte Carlo simulations of an asymmetric, heavy-lift launch vehicle. In the first simulation, the measured wind profiles are used, while in the second, the wind profiles are generated using the stochastic model. The simulations indicate that the use of either the measured or simulated wind field results in similar launch vehicle performance.
Monte Carlo evaluation of thermal desorption rates
Adams, J.E.; Doll, J.D.
1981-05-01
The recently reported method for computing thermal desorption rates via a Monte Carlo evaluation of the appropriate transition state theory expression (J. E. Adams and J. D. Doll, J. Chem. Phys. 74, 1467 (1980)) is extended, by the use of importance sampling, so as to generate the complete temperature dependence in a single calculation. We also describe a straightforward means of calculating the activation energy for the desorption process within the same Monte Carlo framework. The result obtained in this way represents, for the case of a simple desorptive event, an upper bound to the true value.
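The single-run temperature-dependence idea described above can be illustrated with importance-sampling reweighting on a toy two-level system (the system and all parameters are invented for illustration, not the desorption problem of the paper):

```python
import math
import random

def reweighted_energy(beta0=1.0, betas=(0.5, 1.0, 2.0), n=200_000, seed=4):
    """Sample a two-level system (E = 0 or 1) at inverse temperature beta0,
    then reweight each sample by exp(-(beta - beta0) * E) to obtain <E> at
    other temperatures from the single run."""
    random.seed(seed)
    p1 = math.exp(-beta0) / (1 + math.exp(-beta0))  # P(E=1) at beta0
    samples = [1 if random.random() < p1 else 0 for _ in range(n)]
    out = {}
    for b in betas:
        w = [math.exp(-(b - beta0) * e) for e in samples]
        out[b] = sum(wi * e for wi, e in zip(w, samples)) / sum(w)
    return out

est = reweighted_energy()
# Closed form for comparison: <E>_beta = e^(-beta) / (1 + e^(-beta))
exact = {b: math.exp(-b) / (1 + math.exp(-b)) for b in (0.5, 1.0, 2.0)}
print(est, exact)
```

One sampled ensemble thus yields estimates across a range of temperatures, which is the essence of generating "the complete temperature dependence in a single calculation".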
Geodesic Monte Carlo on Embedded Manifolds
Byrne, Simon; Girolami, Mark
2013-01-01
Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
A comparison of Monte Carlo generators
Golan, Tomasz
2015-05-15
A comparison of GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and π+ two-dimensional energy vs cosine distribution.
Quantum algorithm for exact Monte Carlo sampling
Nicolas Destainville; Bertrand Georgeot; Olivier Giraud
2010-06-23
We build a quantum algorithm which uses the Grover quantum search procedure in order to sample the exact equilibrium distribution of a wide range of classical statistical mechanics systems. The algorithm is based on recently developed exact Monte Carlo sampling methods, and yields a polynomial gain compared to classical procedures.
Structural Reliability and Monte Carlo Simulation.
ERIC Educational Resources Information Center
Laumakis, P. J.; Harlow, G.
2002-01-01
Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
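A Monte Carlo reliability estimate of the kind the article demonstrates can be sketched as follows; the normal load and strength distributions below are hypothetical stand-ins, not the boom problem analyzed by the authors:

```python
import random
from statistics import NormalDist

def reliability_mc(n=200_000, seed=7):
    """Failure probability P(load > strength) for a single member with
    hypothetical normal load and strength distributions (illustrative
    numbers only)."""
    random.seed(seed)
    failures = 0
    for _ in range(n):
        load = random.gauss(10.0, 2.0)      # kN
        strength = random.gauss(18.0, 3.0)  # kN
        failures += load > strength
    return failures / n

p_mc = reliability_mc()
# Closed form for normal load L and strength S: P(L > S) = Phi(-beta),
# where beta = (mu_S - mu_L) / sqrt(sigma_S^2 + sigma_L^2).
beta = (18.0 - 10.0) / (2.0**2 + 3.0**2) ** 0.5
p_exact = NormalDist().cdf(-beta)
print(p_mc, p_exact)
```

The simulated failure probability agrees with the closed-form value, mirroring the article's point that simulation results compare favorably with the theoretical calculation.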
Monte Carlo transition dynamics and variance reduction
Fitzgerald, M.; Picard, R.R.; Silver, R.N.
2000-01-01
For Metropolis Monte Carlo simulations in statistical physics, efficient, easy-to-implement, and unbiased statistical estimators of thermodynamic properties are based on the transition dynamics. Using an Ising model example, they demonstrate (problem-specific) variance reductions compared to conventional histogram estimators. A proof of variance reduction in a microstate limit is presented.
Markov Chain Monte Carlo Prof. David Page
Page Jr., C. David
Transcribed by Matthew G. Lee. A Markov chain moves from state s to state s'. For any time t, T(s → s') is the probability of the Markov process being in state s' at time t+1, given that it is in state s at time t.
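The transition-probability definition in these notes can be made concrete with a toy two-state chain (the states and probabilities are invented for illustration):

```python
# Hypothetical 2-state chain; T[s][s2] = T(s -> s2) is the probability of
# being in state s2 at time t+1, given state s at time t.
T = {"rain": {"rain": 0.7, "sun": 0.3},
     "sun":  {"rain": 0.2, "sun": 0.8}}

def step(p, T):
    """One time step: p_{t+1}(s') = sum_s p_t(s) * T(s -> s')."""
    return {s2: sum(p[s] * T[s][s2] for s in T) for s2 in T}

p = {"rain": 1.0, "sun": 0.0}
for _ in range(50):
    p = step(p, T)
print(p)  # converges to the stationary distribution of the chain
```

Whatever the starting distribution, repeated application of T drives p toward the chain's stationary distribution, which is the property MCMC methods exploit.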
Improved Monte Carlo Renormalization Group Method
Gupta, R.; Wilson, K.G.; Umrigar, C.
1985-01-01
An extensive program to analyse critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated. 9 refs.
Parallel processing Monte Carlo radiation transport codes
McKinney, G.W.
1994-02-01
Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine.
A quasi-Monte Carlo Metropolis algorithm
Owen, Art B.; Tribble, Seth D.
2005-01-01
This work presents a version of the Metropolis–Hastings algorithm using quasi-Monte Carlo inputs. We prove that the method yields consistent estimates in some problems with finite state spaces and completely uniformly distributed inputs. In some numerical examples, the proposed method is much more accurate than ordinary Metropolis–Hastings sampling. PMID:15956207
A Monte Carlo Study of Titrating Polyelectrolytes
Söderberg, Bo
Magnus Ullner and Bo Jönsson. A Monte Carlo study of three different models for linear, titrating polyelectrolytes in a salt-free environment: i) a rigid chain; ... chains with up to several thousand titrating groups. The results have been compared to a mean field
TOPICAL REVIEW: Monte Carlo methods for phase equilibria of fluids
Athanassios Z. Panagiotopoulos, 1999. Electronic mail: thanos@ipst.umd.edu. ... the determination of phase equilibria by simulation
Landon, Colin Donald
2010-01-01
Direct Simulation Monte Carlo (DSMC)-the prevalent stochastic particle method for high-speed rarefied gas flows-simulates the Boltzmann equation using distributions of representative particles. Although very efficient in ...
Baker, R.S.; Filippone, W.F. (Dept. of Nuclear and Energy Engineering); Alcouffe, R.E.
1991-01-01
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new method of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S{sub N} calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.
Markov Chain Monte Carlo and Related Topics Department of Statistics
Liu, Jun
A review of recent developments in Markov chain Monte Carlo methodology. The methods discussed include ... Such methodology, especially Markov chain Monte Carlo (MCMC), provides an enormous scope for realistic statistical
Application of New Monte Carlo Algorithms to Various Spin Systems
Katsumoto, Shingo
Yutaka Okabe, Yusuke Tomita. In Monte Carlo simulations, we sometimes suffer from the problem of slow dynamics: the critical slowing down, the slow dynamics due to the randomness or frustration, and the low-temperature slow dynamics in quantum Monte Carlo.
Markov Chain Monte-Carlo Models of Starburst Clusters
NASA Astrophysics Data System (ADS)
Melnick, Jorge
2015-01-01
There are a number of stochastic effects that must be considered when comparing models to observations of starburst clusters: the IMF is never fully populated; the stars can never be strictly coeval; stars rotate and their photometric properties depend on orientation; a significant fraction of massive stars are in interacting binaries; and the extinction varies from star to star. The probability distributions of each of these effects are not a priori known, but must be extracted from the observations. Markov Chain Monte-Carlo methods appear to provide the best statistical approach. Here I present an example of stochastic age effects upon the upper mass limit of the IMF of the Arches cluster as derived from near-IR photometry.
Quantum Monte Carlo by message passing
Bonca, J.; Gubernatis, J.E.
1993-05-01
We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We described the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.
Monte Carlo Simulation of THz Multipliers
NASA Technical Reports Server (NTRS)
East, J.; Blakey, P.
1997-01-01
Schottky Barrier diode frequency multipliers are critical components in submillimeter and THz space-based Earth observation systems. As the operating frequency of these multipliers has increased, the agreement between design predictions and experimental results has become poorer. The multiplier design is usually based on a nonlinear model using a form of harmonic balance and a model for the Schottky barrier diode. Conventional voltage-dependent lumped element models do a poor job of predicting THz frequency performance. This paper will describe a large signal Monte Carlo simulation of Schottky barrier multipliers. The simulation is a time dependent particle field Monte Carlo simulation with ohmic and Schottky barrier boundary conditions included that has been combined with a fixed point solution for the nonlinear circuit interaction. The results in the paper will point out some important time constants in varactor operation and will describe the effects of current saturation and nonlinear resistances on multiplier operation.
Monte Carlo simulation of gas Cerenkov detectors
Mack, J.M.; Jain, M.; Jordan, T.M.
1984-01-01
Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general geometry Monte Carlo coupled electron/photon transport code is discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data in the case of a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier.
Quantum Monte Carlo Calculations for Carbon Nanotubes
Thomas Luu; Timo A. Lähde
2015-11-16
We show how lattice Quantum Monte Carlo can be applied to the electronic properties of carbon nanotubes in the presence of strong electron-electron correlations. We employ the path-integral formalism and use methods developed within the lattice QCD community for our numerical work. Our lattice Hamiltonian is closely related to the hexagonal Hubbard model augmented by a long-range electron-electron interaction. We apply our method to the single-quasiparticle spectrum of the (3,3) armchair nanotube configuration, and consider the effects of strong electron-electron correlations. Our approach is equally applicable to other nanotubes, as well as to other carbon nanostructures. We benchmark our Monte Carlo calculations against the two- and four-site Hubbard models, where a direct numerical solution is feasible.
Introduction to Multicanonical Monte Carlo Simulations
Bernd A. Berg
1999-09-15
Monte Carlo simulations with a priori unknown weights have attracted recent attention, and progress has been made in understanding (i) the technical feasibility of such simulations and (ii) classes of systems for which such simulations lead to major improvements over conventional Monte Carlo simulations. After briefly sketching the history of multicanonical calculations and their range of application, a general introduction in the context of the statistical physics of the d-dimensional generalized Potts models is given. Multicanonical simulations yield canonical expectation values for a range of temperatures or any other parameter(s) for which appropriate weights can be constructed. We shall address in some detail the question of how the multicanonical weights are actually obtained. Subsequently, miscellaneous topics related to the considered algorithms are reviewed. Then multicanonical studies of first order phase transitions are discussed, and finally applications to complex systems such as spin glasses and proteins.
Quantum Monte Carlo methods for nuclear physics
Carlson, J; Pederiva, F; Pieper, Steven C; Schiavilla, R; Schmidt, K E; Wiringa, R B
2014-01-01
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states and transition moments in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucle...
Status of Monte Carlo at Los Alamos
Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.
1980-05-01
Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.
Monte Carlo Simulations of Star Clusters
Mirek Giersz
2000-06-30
A revision of Stodółkiewicz's Monte Carlo code is used to simulate the evolution of large star clusters. The survey on the evolution of multi-mass N-body systems influenced by the tidal field of a parent galaxy and by stellar evolution is discussed. For the first time, a "star-by-star" simulation of the evolution of a 1,000,000-body star cluster is presented.
Simulated Annealing using Hybrid Monte Carlo
R. Salazar; R. Toral
1997-07-31
We propose a variant of the Simulated Annealing method for optimization in the multivariate analysis of differentiable functions. The method uses global updates via the Hybrid Monte Carlo algorithm, in its generalized version, for the proposal of new configurations. We show how this choice can improve upon the performance of simulated annealing methods (mainly when the number of variables is large) by allowing a more effective searching scheme and a faster annealing schedule.
Monte Carlo simulation of Alaska wolf survival
NASA Astrophysics Data System (ADS)
Feingold, S. J.
1996-02-01
Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.
Monte-Carlo Simulations: FLUKA vs. MCNPX
Oden, M.; Krasa, A.; Majerle, M.; Svoboda, O.; Wagner, V.
2007-11-26
Several experiments were performed at the Phasotron and Nuclotron accelerators in JINR Dubna in which spallation reactions and neutron transport were studied. The experimental results were checked against the predictions of the Monte-Carlo code MCNPX. Discrepancies at 1.5 GeV and 2 GeV on the 'Energy plus Transmutation' setup were observed, so the experimental results were also checked with another code, FLUKA.
Canonical Demon Monte Carlo Renormalization Group
M. Hasenbusch; K. Pinn; C. Wieczerkowski
1994-11-23
We describe a method to compute renormalized coupling constants in a Monte Carlo renormalization group calculation. It can be used, e.g., for lattice spin or gauge models. The basic idea is to simulate a joint system of block spins and canonical demons. Unlike the Microcanonical Renormalization Group of Creutz et al., it avoids systematic errors in small volumes. We present numerical results for the O(3) nonlinear sigma-model.
Applications of Maxent to quantum Monte Carlo
Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.; Jarrell, M.
1990-01-01
We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.
Monte Carlo Approach to M-Theory
Werner Krauth; Hermann Nicolai; Matthias Staudacher
1998-04-01
We discuss supersymmetric Yang-Mills theory dimensionally reduced to zero dimensions and evaluate the SU(2) and SU(3) partition functions by Monte Carlo methods. The exactly known SU(2) results are reproduced to very high precision. Our calculations for SU(3) agree closely with an extension of a conjecture due to Green and Gutperle concerning the exact value of the SU(N) partition functions.
An introduction to Monte Carlo methods
NASA Astrophysics Data System (ADS)
Walter, J.-C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest neighbor interactions that is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulations. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long range correlations, cluster algorithms are more efficient. We introduce the rejection free (or continuous time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins with the so-called Worm algorithm. We conclude with an important discussion of dynamical effects such as thermalization and correlation time.
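A minimal sketch of the Metropolis algorithm for the 2D Ising model discussed above (lattice size, temperature, and sweep count are arbitrary illustrative choices):

```python
import math
import random

def metropolis_ising(L=16, T=5.0, sweeps=200, seed=3):
    """Single-spin-flip Metropolis for the 2D Ising model with periodic
    boundaries: a flip is accepted with probability min(1, exp(-dE/T))."""
    random.seed(seed)
    s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = random.randrange(L), random.randrange(L)
        nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nn  # energy cost of flipping spin (i, j), J = 1
        if dE <= 0 or random.random() < math.exp(-dE / T):
            s[i][j] *= -1
    return sum(sum(row) for row in s) / (L * L)  # magnetization per spin

m = metropolis_ising()
print(round(abs(m), 3))  # small: T = 5 is well above Tc ~ 2.27 (disordered)
```

Acceptance with probability min(1, exp(-dE/T)) satisfies detailed balance with respect to the Boltzmann distribution, which is why the chain samples equilibrium configurations; near Tc this local update suffers critical slowing down, motivating the cluster algorithms the abstract mentions.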
Hybrid S{sub N}/Monte Carlo research and results
Baker, R.S.
1993-05-01
The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/S{sub N} method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by themselves. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well.
Monte Carlo simulation of intercalated carbon nanotubes.
Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter
2007-01-01
Monte Carlo simulations of single- and double-walled carbon nanotubes (CNTs) intercalated with different metals have been carried out. The interrelation between the length of a CNT and the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has proved that there is a correlation between the length of a CNT and the number of endo-atoms of a specific type. Thus, in the case of a metallic CNT (9,0) with a length of 17 bands (3.60 nm), Fe atoms, in contrast to Co atoms, are extruded out of the CNT if the number of atoms in the CNT exceeds eight. This paper therefore shows that a CNT of this size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit a ground-state stabilization that is not characteristic of the larger ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms. PMID:17033783
Quantum Monte Carlo for vibrating molecules
Brown, W.R.
1996-08-01
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H{sub 2}O and C{sub 3} vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis function parameter optimization, on both serial and parallel computers. Constructing accurate trial wavefunctions required different wavefunction forms for H{sub 2}O and C{sub 3}; for C{sub 3}, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states. In order to stabilize the statistical error estimates for C{sub 3}, the Monte Carlo data were collected into blocks. Accurate vibrational state energies were computed using both the serial and parallel QMCVIB programs. Comparison of the vibrational state energies computed from the three C{sub 3} PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E.; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
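The initialization step described above can be illustrated with a simple rejection sampler: energies are drawn with weight sqrt(E)/(exp((E-mu)/kT)+1), where the sqrt(E) factor assumes a 3D ideal-gas density of states. This is a minimal sketch; the chemical potential, temperature, and energy cutoff are hypothetical values, not parameters from the paper's code.

```python
import math
import random

def sample_fermi_dirac(n, mu=1.0, kT=0.1, e_max=5.0, seed=1):
    """Draw n particle energies from p(E) proportional to
    sqrt(E) / (exp((E - mu)/kT) + 1) by rejection sampling on [0, e_max].

    Units are arbitrary; e_max is an illustrative hard cutoff chosen so that
    the neglected tail is negligible for these mu and kT.
    """
    rng = random.Random(seed)

    def pdf(e):
        return math.sqrt(e) / (math.exp((e - mu) / kT) + 1.0)

    # Crude envelope: the unnormalized density is bounded by sqrt(e_max).
    bound = math.sqrt(e_max)
    samples = []
    while len(samples) < n:
        e = rng.uniform(0.0, e_max)
        if rng.uniform(0.0, bound) < pdf(e):
            samples.append(e)
    return samples
```

For kT much smaller than mu, nearly all sampled energies fall below the Fermi edge, which is the qualitative signature of degeneracy the abstract refers to.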
Kinetic Monte Carlo simulations of proton conductivity
NASA Astrophysics Data System (ADS)
Masłowski, T.; Drzewiński, A.; Ulner, J.; Wojtkiewicz, J.; Zdanowska-Frączek, M.; Nordlund, K.; Kuronen, A.
2014-07-01
The kinetic Monte Carlo method is used to model the dynamic properties of proton diffusion in anhydrous proton conductors. The results are discussed with reference to a two-step process called the Grotthuss mechanism, which is widely believed to be responsible for fast proton mobility. We show in detail that the relative frequency of reorientation and diffusion processes is crucial for the conductivity. Moreover, the dependence of the current on proton concentration is analyzed. In order to test our microscopic model, proton transport in polymer electrolyte membranes based on benzimidazole C7H6N2 molecules is studied.
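A kinetic Monte Carlo loop for a two-step process of the kind described above can be sketched in the Gillespie style: each event is chosen with probability proportional to its rate, and time advances by an exponentially distributed waiting time. The two rates below (reorientation vs. proton transfer) are purely illustrative, not values from the paper.

```python
import math
import random

def kmc_two_step(rate_reorient=10.0, rate_transfer=1.0, n_events=10000, seed=2):
    """Minimal kinetic Monte Carlo loop for a two-step reorientation/transfer
    process in the spirit of the Grotthuss mechanism. Rates are hypothetical.

    Returns the number of transfer (hop) events and the total elapsed time.
    """
    rng = random.Random(seed)
    t, hops = 0.0, 0
    total = rate_reorient + rate_transfer
    for _ in range(n_events):
        t += -math.log(1.0 - rng.random()) / total  # exponential waiting time
        if rng.random() * total < rate_transfer:
            hops += 1                               # proton transfer event
        # otherwise: a reorientation event (no net charge displacement)
    return hops, t
```

The ratio hops/t then plays the role of a crude conductivity proxy, showing directly how the relative frequency of the two processes controls transport.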
Modulated pulse bathymetric lidar Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Luo, Tao; Wang, Yabo; Wang, Rong; Du, Peng; Min, Xia
2015-10-01
A typical modulated pulse bathymetric lidar system is investigated by simulation using a modulated pulse lidar simulation system. In the simulation, the return signal is generated by the Monte Carlo method with a modulated pulse propagation model and processed by mathematical tools such as cross-correlation and digital filtering. Computer simulation results incorporating the modulation detection scheme reveal a significant suppression of the water backscattering signal and a corresponding target contrast enhancement. Further simulation experiments are performed with various modulation and reception variables to investigate their effect on the bathymetric system performance.
Monte Carlo simulation for the transport beamline
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have been already simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement a MC based 3D treatment planning in order to optimize shots number and dose delivery.
Marcus, Ryan C.
2012-07-24
This presentation covers (1) exascale computing: the different technologies and the path to getting there; (2) MCMini, a high-performance proof of concept: features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library): purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility of GPGPU-based Monte Carlo methods: it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.
Monte Carlo procedure for protein design
NASA Astrophysics Data System (ADS)
Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik
1998-11-01
A method for sequence optimization in protein models is presented. The approach, which inherits its basic philosophy of maximizing conditional probabilities rather than minimizing energy functions from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)], is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence space search is devised, making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] with chain lengths N=16, 18, and 32.
Monte Carlo methods to calculate impact probabilities
NASA Astrophysics Data System (ADS)
Rickman, H.; Wi?niowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. 
With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaged impact probability diverges toward infinity, while the Hill sphere method yields a severely underestimated probability. We discuss the reasons for these differences and finally present the results of the MOID method in the form of probability maps for the Earth and Mars on their current orbits. These maps show a relatively flat probability distribution, except for two ridges that occur at small inclinations and at coinciding projectile/target perihelion distances. Conclusions: Our results verify the standard formulae in the general case, away from the singularities. In fact, severe shortcomings are limited to the immediate vicinity of those extreme orbits. On the other hand, the new Monte Carlo methods can be used without excessive consumption of computer time, and the MOID method avoids the problems associated with the other methods. Appendices are available in electronic form at http://www.aanda.org
Monte Carlo simulation for the transport beamline
NASA Astrophysics Data System (ADS)
Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.
2013-07-01
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have been already simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement a MC based 3D treatment planning in order to optimize shots number and dose delivery.
Canonical Demon Monte Carlo Renormalization Group
M. Hasenbusch; K. Pinn; C. Wieczerkowski
1994-06-27
We describe a new method to compute renormalized coupling constants in a Monte Carlo renormalization group calculation. The method can be used for a general class of models, e.g., lattice spin or gauge models. The basic idea is to simulate a joint system of block spins and canonical demons. In contrast to the Microcanonical Renormalization Group invented by Creutz et al., our method does not suffer from systematic errors stemming from the simultaneous use of two different ensembles. We present numerical results for the $O(3)$ nonlinear $\sigma$-model.
Monte Carlo Generation of Bohmian Trajectories
T. M. Coffey; R. E. Wyatt; W. C. Schieve
2008-07-01
We report on a Monte Carlo method that generates one-dimensional trajectories for Bohm's formulation of quantum mechanics that doesn't involve differentiation or integration of any equations of motion. At each time, t=n\\delta t (n=1,2,3,...), N particle positions are randomly sampled from the quantum probability density. Trajectories are built from the sorted N sampled positions at each time. These trajectories become the exact Bohm solutions in the limits N->\\infty and \\delta t -> 0. Higher dimensional problems can be solved by this method for separable wave functions. Several examples are given, including the two-slit experiment.
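The sorted-sampling construction described above can be sketched directly: at each time, N positions are sampled from the quantum probability density and sorted, and the k-th trajectory is the sequence of k-th smallest samples. As an illustrative stand-in for |psi|^2, a spreading free Gaussian packet with width sigma(t) = sqrt(1 + t^2) in natural units is used here; this width law is an assumption for the sketch, not a formula from the paper.

```python
import random

def bohm_trajectories(n_particles=500, n_steps=20, dt=0.05, seed=3):
    """Sorted-sampling construction of Bohm-like trajectories for a spreading
    free Gaussian packet (|psi|^2 Gaussian with width sqrt(1 + t^2); an
    illustrative choice in natural units).

    Returns n_particles trajectories, each a list of n_steps positions.
    Because samples are sorted at every time, trajectories never cross,
    mirroring the non-crossing property of exact Bohm trajectories.
    """
    rng = random.Random(seed)
    trajectories = [[] for _ in range(n_particles)]
    for step in range(n_steps):
        t = step * dt
        sigma = (1.0 + t * t) ** 0.5
        xs = sorted(rng.gauss(0.0, sigma) for _ in range(n_particles))
        for k in range(n_particles):
            trajectories[k].append(xs[k])
    return trajectories
```

In the limits of many particles and small time step, these sorted quantile paths converge to the exact Bohm solutions, as the abstract states.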
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Densmore, Jeffrey D.; Thompson, Kelly G.; Urbatsch, Todd J.
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
Monte Carlo Reliability Model for Microwave Monolithic Integrated Circuits
Rubloff, Gary W.
Aris Christou. A Monte Carlo simulation is reported for analog integrated circuits and is based on the modification of the behavior of MMICs (Monolithic Microwave Integrated Circuits) from individual FET (Field Effect Transistor) characteristics.
Calculating Pi Using the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Williamson, Timothy
2013-11-01
During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame University (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10^21 antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia2 where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations2 or purely mathematical.3 It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
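The rice-on-an-arc activity has a direct computational analogue: throw random points into the unit square and count the fraction landing inside the quarter circle of radius 1, whose area is pi/4. A minimal sketch:

```python
import random

def estimate_pi(n=100000, seed=4):
    """Estimate pi by uniform sampling of the unit square.

    The fraction of points with x^2 + y^2 <= 1 estimates pi/4, the area of
    the quarter circle, so multiplying by 4 estimates pi.
    """
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n
```

The statistical error shrinks like 1/sqrt(n), so roughly 100 times more points are needed for each extra digit of accuracy, a useful talking point for students.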
Quantum Monte Carlo methods for nuclear physics
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is complete, and spatial redecomposition.
These are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
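The "global particle find" step can be illustrated with a toy 1D slab decomposition: each particle's coordinate determines which rank's domain contains it, and stray particles are grouped into per-rank outboxes for redistribution. The helper names and the 1D slab layout below are hypothetical illustrations, not the dissertation's implementation.

```python
def owning_rank(position, domain_bounds):
    """Map a particle's 1D coordinate to the rank owning the slab containing
    it; domain_bounds[i] is the upper edge of rank i's slab (sorted ascending).
    Particles at or beyond the last edge are clamped to the last rank.
    """
    for rank, upper in enumerate(domain_bounds):
        if position < upper:
            return rank
    return len(domain_bounds) - 1

def route_particles(positions, domain_bounds):
    """Group stray particle coordinates by the rank that owns them, as a
    stand-in for the message buffers an MPI redistribution would send."""
    outboxes = {}
    for x in positions:
        outboxes.setdefault(owning_rank(x, domain_bounds), []).append(x)
    return outboxes
```

In a real code the slab table would be replaced by the constructive-solid-geometry domain map, and the outboxes by point-to-point MPI messages.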
Quantum Monte Carlo methods for nuclear physics
J. Carlson; S. Gandolfi; F. Pederiva; Steven C. Pieper; R. Schiavilla; K. E. Schmidt; R. B. Wiringa
2015-04-29
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo for atoms and molecules
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy-eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy) have been obtained with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
CosmoMC: Cosmological MonteCarlo
NASA Astrophysics Data System (ADS)
Lewis, Antony; Bridle, Sarah
2011-06-01
We present a fast Markov chain Monte Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3 eV), the equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints and the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
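The workhorse behind this kind of parameter exploration is a random-walk Metropolis sampler. The sketch below applies it to a toy one-parameter Gaussian log-posterior rather than a cosmological likelihood; the step size and chain length are illustrative choices.

```python
import math
import random

def metropolis_chain(log_post, x0, step=0.5, n=20000, seed=5):
    """Random-walk Metropolis sampler for a user-supplied log-posterior.

    Proposes x' = x + N(0, step) and accepts with probability
    min(1, exp(log_post(x') - log_post(x))), which leaves the posterior
    invariant by detailed balance.
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)                     # symmetric proposal
        lq = log_post(y)
        if rng.random() < math.exp(min(0.0, lq - lp)):   # Metropolis rule
            x, lp = y, lq
        chain.append(x)
    return chain

# Toy target: a unit Gaussian posterior for a single parameter.
samples = metropolis_chain(lambda x: -0.5 * x * x, x0=0.0)
```

After discarding burn-in, sample means and variances estimate the posterior mean and variance; in a real analysis the lambda would be replaced by the full joint log-likelihood over all cosmological parameters.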
Reverse Monte Carlo modeling in confined systems
Sánchez-Gil, V.; Noya, E. G.; Lomba, E.
2014-01-14
An extension of the well established Reverse Monte Carlo (RMC) method for modeling systems under close confinement has been developed. The method overcomes limitations induced by close confinement in systems such as fluids adsorbed in microporous materials. As a test of the method, we investigate a model system of {sup 36}Ar adsorbed into two zeolites with significantly different pore sizes: Silicalite-I (a pure silica form of ZSM-5 zeolite, characterized by relatively narrow channels forming a 3D network) at partial and full loadings, and siliceous Faujasite (which exhibits relatively wide channels and large cavities). The model systems are simulated using grand canonical Monte Carlo and, in each case, the structure factor is used as input for the proposed method, which shows rapid convergence and yields an adsorbate microscopic structure in good agreement with that of the model system, even at the level of three-body correlations when these are induced by the confining medium. The application to experimental systems is straightforward, incorporating factors such as the experimental resolution and appropriate q-sampling, along the lines of previous experience in RMC modeling of powder diffraction data including Bragg and diffuse scattering.
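The core RMC idea, moving particles and accepting moves according to the mismatch between a computed and a target observable, can be shown with a deliberately simplified toy: particles on [0, 1) are adjusted until their histogram matches a target histogram. This is a zero-temperature variant with a histogram standing in for the structure factor; real RMC compares against scattering data and uses a Metropolis rule on chi-squared.

```python
import random

def reverse_mc(target_hist, n_particles=200, n_moves=20000, n_bins=10, seed=6):
    """Toy Reverse Monte Carlo: propose single-particle displacements and
    accept any move that does not increase chi^2 between the model histogram
    and the target histogram. Returns the final chi^2.
    """
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n_particles)]

    def chi2(points):
        hist = [0] * n_bins
        for p in points:
            hist[min(int(p * n_bins), n_bins - 1)] += 1
        return sum((h - t) ** 2 for h, t in zip(hist, target_hist))

    cost = chi2(xs)
    for _ in range(n_moves):
        i = rng.randrange(n_particles)
        old = xs[i]
        xs[i] = (old + rng.uniform(-0.1, 0.1)) % 1.0  # periodic displacement
        new_cost = chi2(xs)
        if new_cost <= cost:
            cost = new_cost
        else:
            xs[i] = old          # reject: revert the move
    return cost
```

In the confined-system extension described above, the proposal step would additionally respect the zeolite pore geometry, rejecting moves that place the adsorbate inside the framework.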
Quantum Monte Carlo methods for nuclear physics
NASA Astrophysics Data System (ADS)
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-07-01
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Enhanced Neoclassical Polarization: Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Xiao, Yong; Molvig, Kim; Ernst, Darin; Hallatschek, Klaus
2003-10-01
The theoretical prediction of enhanced neoclassical polarization (K. Molvig, Yong Xiao, D. R. Ernst, K. Hallatschek, Sherwood Fusion Theory Conference, 2003) in a tokamak plasma is investigated numerically using a Monte Carlo approach to combine the effects of collisions with guiding center tokamak orbits. The collisionless, kinematic contribution to the polarization first calculated by Rosenbluth and Hinton (M.N. Rosenbluth and F.L. Hinton, Phys. Rev. Lett. 80, 724 (1998)) is reproduced from the orbits directly. A fifth order Runge-Kutta orbit integrator is used to give extremely high orbit accuracy. The cancellation of opposite trapped and circulating particle radial flows is verified explicitly in this simulation. The Monte Carlo representation of pitch angle scattering collisions (X.Q. Xu and M.N. Rosenbluth, Phys. Fluids B 3, 627 (1991)) is used to compute the collisional processes. The numerical simulation determines the generalized Fokker-Planck coefficients used as the basis for transport in the Lagrangian formulation (I.B. Bernstein and K. Molvig, Phys. Fluids 26, 1488 (1983)) of transport theory. The computation generates the banana diffusion coefficient, <(Δψ)²/Δt>, and the correlated cross process, <ΔψΔθ/Δt>, responsible for the enhanced polarization. The numerical procedure generates smooth coefficients and resolves the analytic singularity that occurs at the trapped-circulating boundary.
Discrete range clustering using Monte Carlo methods
NASA Technical Reports Server (NTRS)
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
Quantum Monte Carlo Endstation for Petascale Computing
Lubos Mitas
2011-01-26
The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative, for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments build upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as the evaluation of wave functions and orbitals, the calculation of Pfaffians, and the introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks, such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high quality object-oriented C++ and includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others.
This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the duration of the grant; it has resulted in 13 published papers and 15 invited talks and lectures nationally and internationally. My former graduate student and postdoc Dr. Michal Bajdich, who was supported by this grant, is currently a postdoc with ORNL in the group of Dr. F. Reboredo and Dr. P. Kent and is using the developed tools in a number of DOE projects. The QWalk package has become a truly important research tool used by the electronic structure community and has attracted several new developers in other research groups. Our tools cover several types of correlated wavefunction approaches (variational, diffusion and reptation methods), provide large-scale optimization methods for wavefunctions, and enable the calculation of energy differences such as cohesion energies and electronic gaps, as well as densities and other properties; using multiple runs one can obtain equations of state for given structures and beyond. Our codes use efficient numerical and Monte Carlo strategies (high accuracy numerical orbitals, multi-reference wave functions, highly accurate correlation factors, pairing orbitals, force-biased and correlated sampling Monte Carlo), are robustly parallelized, and run very efficiently on tens of thousands of cores. Our demonstration applications were focused on challenging research problems in several fields of materials science, such as transition metal solids. We note that our study of FeO solid was the first QMC calculation of transition metal oxides at high pressures.
Normality of Monte Carlo criticality eigenfunction decomposition coefficients
Toth, B. E.; Martin, W. R.; Griesheimer, D. P.
2013-07-01
A proof is presented, which shows that after a single Monte Carlo (MC) neutron transport power method iteration without normalization, the coefficients of an eigenfunction decomposition of the fission source density are normally distributed when using analog or implicit capture MC. Using a Pearson correlation coefficient test, the proof is corroborated by results from a uniform slab reactor problem, and those results also suggest that the coefficients are normally distributed with normalization. The proof and numerical test results support the application of earlier work on the convergence of eigenfunctions under stochastic operators. Knowledge of the Gaussian shape of decomposition coefficients allows researchers to determine an appropriate level of confidence in the distribution of fission sites taken from a MC simulation. This knowledge of the shape of the probability distributions of decomposition coefficients encourages the creation of new predictive convergence diagnostics. (authors)
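The normality check described above can be illustrated with a small sketch. This is not the authors' code: it applies a probability-plot (Pearson) correlation test, one common correlation-based normality check, to synthetic stand-ins for the decomposition coefficients.

```python
import math
import random
from statistics import NormalDist

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def normal_ppcc(data):
    """Probability-plot correlation: correlate the sorted sample against
    standard-normal quantiles; values near 1 support normality."""
    n = len(data)
    quantiles = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
    return pearson(sorted(data), quantiles)

rng = random.Random(9)
coeffs = [rng.gauss(0.0, 1.0) for _ in range(2000)]   # stand-in for decomposition coefficients
skewed = [rng.expovariate(1.0) for _ in range(2000)]  # clearly non-normal control
print(round(normal_ppcc(coeffs), 3), round(normal_ppcc(skewed), 3))
```

A sample that is truly Gaussian yields a correlation very close to 1, while the skewed control falls visibly below it.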
Anomalous Scaling in Passive Scalar Advection: Monte Carlo Lagrangian Trajectories
NASA Astrophysics Data System (ADS)
Gat, Omri; Procaccia, Itamar; Zeitak, Reuven
1998-06-01
We present a numerical method which is used to calculate anomalous scaling exponents of structure functions in the Kraichnan passive scalar advection model [R. H. Kraichnan, Phys. Fluids 11, 945 (1968)]. This Monte Carlo method, which is applicable in any space dimension, is based on the Lagrangian path interpretation of passive scalar dynamics, and uses the recently discovered equivalence between scaling exponents of structure functions and relaxation rates in the stochastic shape dynamics of groups of Lagrangian particles. We calculate third and fourth order anomalous exponents for several dimensions, comparing with the predictions of perturbative calculations in large dimensions. We find that Kraichnan's closure appears to give results in close agreement with the numerics. The third order exponents are compatible with our own previous nonperturbative calculations.
Estimating rock mass properties using Monte Carlo simulation: Ankara andesites
NASA Astrophysics Data System (ADS)
Sari, Mehmet; Karpuz, Celal; Ayday, Can
2010-07-01
In the paper, a previously introduced method (Sari, 2009) is applied to the problem of estimating the rock mass properties of Ankara andesites. For this purpose, appropriate closed form (parametric) distributions are described for intact rock and discontinuity parameters of the Ankara andesites at three distinct weathering grades. Then, these distributions are included as inputs in the Rock Mass Rating (RMR) classification system prepared in a spreadsheet model. A stochastic analysis is carried out to evaluate the influence of correlations between relevant distributions on the simulated RMR values. The model is also used in Monte Carlo simulations to estimate the possible ranges of the Hoek-Brown strength parameters of the rock under investigation. The proposed approach provides a straightforward and effective assessment of the variability of the rock mass properties. Hence, a wide array of mechanical characteristics can be adequately represented in any preliminary design consideration for a given rock mass.
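The spreadsheet-based simulation can be sketched in a few lines. The rating distributions below are hypothetical placeholders, not the Ankara andesite data, and the inputs are sampled independently (the paper additionally studies correlations between them); the point is only the mechanics of propagating input distributions through the RMR sum.

```python
import random
import statistics

def simulate_rmr(n=10_000, seed=2):
    """Propagate uncertainty in the five RMR input ratings through to the
    total score. The distributions are illustrative placeholders."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n):
        strength = rng.triangular(4, 12, 7)     # intact rock strength rating
        rqd = rng.triangular(8, 20, 13)         # RQD rating
        spacing = rng.triangular(5, 15, 10)     # discontinuity spacing rating
        condition = rng.triangular(10, 25, 20)  # discontinuity condition rating
        water = rng.uniform(7, 15)              # groundwater rating
        scores.append(strength + rqd + spacing + condition + water)
    return scores

scores = simulate_rmr()
q = statistics.quantiles(scores, n=20)
print(f"simulated RMR, 5th-95th percentile: {q[0]:.0f}-{q[-1]:.0f}")
```

The simulated percentile band is the kind of "possible range" the paper then feeds into the Hoek-Brown strength estimates.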
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS
Kersch, A.; Morokoff, W.; Schuster, A. (Siemens)
Applies quasi-Monte Carlo methods to radiative heat transfer; among the problems which can be solved by such a simulation in manufacturing reactors is high accuracy modeling of the radiative heat transfer.
CERN-TH.6275/91 Monte Carlo Event Generation
Sjöstrand, Torbjörn
The necessity of event generators for LHC physics studies is illustrated, and the Monte Carlo approach is outlined. A survey is presented of existing event generators, followed by a more detailed study.
Direct Simulation Monte Carlo of Internal and External Gas Flows (Simulação Direta de Monte Carlo de Escoamentos Internos e Externos de Gases)
Sharipov, Felix
Thesis on the direct simulation Monte Carlo of internal and external gas flows over a wide range, defended by Dalton Vinicius Kozak and approved on 15 December 2010, in Curitiba, State of Paraná, by the examining committee.
Monte Carlo Test Assembly for Item Pool Analysis and Extension
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.
2005-01-01
A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…
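The uniform-sampling idea can be sketched as follows; the item pool and the total-difficulty constraint are made up for illustration. Because each candidate test is an unordered random sample from the pool, every feasible item combination is equally likely to be proposed.

```python
import random

def assemble_tests(pool, length, diff_range, n_tries=100_000, seed=3):
    """Uniformly sample `length`-item subsets of the pool; keep those whose
    total difficulty lies in diff_range. Every feasible test is equally
    likely to appear in the output."""
    rng = random.Random(seed)
    lo, hi = diff_range
    found = []
    for _ in range(n_tries):
        test = tuple(sorted(rng.sample(range(len(pool)), length)))
        if lo <= sum(pool[i] for i in test) <= hi:
            found.append(test)
    return found

# Hypothetical pool of scaled item difficulties and a target difficulty window.
pool = [2, 5, 9, 13, 7, 11, 4, 8]
tests = assemble_tests(pool, length=3, diff_range=(19, 21))
print(len(set(tests)), "distinct feasible tests sampled")
```

Real test assembly adds many more constraints (content coverage, item information), but the acceptance step generalizes directly.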
Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.
ERIC Educational Resources Information Center
O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.
2002-01-01
Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…
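A minimal Python sketch of the Monte Carlo side of such an analysis (the article itself uses an Excel template; all cost and revenue figures here are invented for illustration):

```python
import random
import statistics

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simulate_npv(n=20_000, seed=1):
    """Sample uncertain inputs and return the distribution of project NPV."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        capital = rng.triangular(90, 130, 110)   # initial investment (low, high, mode)
        annual = rng.gauss(30, 6)                # uncertain yearly cash flow
        rate = rng.uniform(0.06, 0.10)           # uncertain discount rate
        results.append(npv(rate, [-capital] + [annual] * 5))
    return results

outcomes = simulate_npv()
mean_npv = statistics.mean(outcomes)
p_loss = sum(x < 0 for x in outcomes) / len(outcomes)
print(f"mean NPV {mean_npv:.1f}, P(NPV < 0) {p_loss:.2f}")
```

Unlike a single-point analytical estimate, the simulated distribution directly exposes downside risk such as the probability of a negative NPV.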
Monte Carlo Simulation of Sintering on Multiprocessor Systems
Maguire Jr., Gerald Q.
Master of Science thesis by Jens R. Lind on the Monte Carlo simulation of sintering, a metallurgy process by which powders are formed. The thesis addresses storage and parallel execution for the simulation of an atomic process under great time and memory constraints.
Markov chain Monte Carlo updating schemes for hidden
Steinsland, Ingelin
Markov chain Monte Carlo updating schemes for hidden Gaussian Markov random field models. Ingelin Steinsland (ingelins@math.ntnu.no), Norwegian University of Science and Technology (NTNU).
A Primer in Monte Carlo Integration Using Mathcad
ERIC Educational Resources Information Center
Hoyer, Chad E.; Kegerreis, Jeb S.
2013-01-01
The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
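The essentials can be condensed into a few lines of Python (the article uses Mathcad; this is an equivalent sketch, not the Supporting Information document):

```python
import math
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at n uniform
    random points and multiplying by the interval length."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Example: integral of exp(-x^2) on [0, 1]; the exact value is about 0.746824.
estimate = mc_integrate(lambda x: math.exp(-x * x), 0.0, 1.0)
print(f"estimate = {estimate:.4f}")
```

The standard error shrinks like 1/sqrt(n) regardless of dimension, which is the method's main pedagogical appeal for multidimensional integrals.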
Monte Carlo Methods Final Project Nests and Tootsie Pops: Bayesian
A final project on Bayesian sampling with Monte Carlo methods, treating the problem of computing the evidence or integrated likelihood Z under a model via the Nested Sampling method. Contents: Introduction; Nested Sampling; General Framework.
Monte Carlo solution of antiferromagnetic quantum Heisenberg spin systems
Lee, D.H.; Joannopoulos, J.D.; Negele, J.W.
1984-08-01
A Monte Carlo method is introduced that overcomes the problem of alternating signs in Handscomb's method of simulating antiferromagnetic quantum Heisenberg systems. The scheme is applied to both bipartite and frustrated lattices. Results of internal energy, specific heat, and uniform and staggered susceptibilities are presented suggesting that quantum antiferromagnets may now be studied as extensively as classical spin systems using conventional Monte Carlo techniques.
abcpmc: Approximate Bayesian Computation for Population Monte-Carlo code
NASA Astrophysics Data System (ADS)
Akeret, Joel
2015-04-01
abcpmc is a Python Approximate Bayesian Computing (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with Particle Filtering techniques. It is extendable with k-nearest neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.
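A stripped-down sketch of the ABC-PMC loop, assuming a simple Gaussian perturbation kernel and omitting the particle importance weights and the KNN/OLCM kernels that abcpmc itself provides (this illustrates the general scheme, not the abcpmc API):

```python
import random
import statistics

def abc_pmc(observed, simulate, prior, epsilons, n_particles=300, seed=4):
    """Minimal ABC Population Monte Carlo: at each shrinking tolerance,
    resample the previous particle population, perturb with a Gaussian
    kernel, and accept a parameter when its simulated summary statistic
    lands within epsilon of the observed one."""
    rng = random.Random(seed)
    particles = [prior(rng) for _ in range(n_particles)]   # first pool: prior draws
    for eps in epsilons:
        sigma = statistics.stdev(particles) or 1e-6        # kernel width from pool
        new = []
        while len(new) < n_particles:
            theta = rng.gauss(rng.choice(particles), sigma)
            if abs(simulate(theta, rng) - observed) < eps:
                new.append(theta)
        particles = new
    return particles

# Toy problem: infer the mean of a unit-variance Gaussian (true value 3.0).
def simulate(theta, rng):
    return statistics.mean(rng.gauss(theta, 1.0) for _ in range(50))

posterior = abc_pmc(3.0, simulate, prior=lambda rng: rng.uniform(-10, 10),
                    epsilons=[2.0, 1.0, 0.5, 0.2])
print(f"posterior mean ~ {statistics.mean(posterior):.2f}")
```

Shrinking the tolerance schedule concentrates the particle population on parameter values consistent with the data, without ever evaluating a likelihood.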
Path Integral Monte Carlo Simulations of Hot Dense Hydrogen
Militzer, Burkhard
Path Integral Monte Carlo Simulations of Hot Dense Hydrogen. Ph.D. thesis by Burkhard Militzer (Diplom, Humboldt University), Department of Physics, University of Illinois at Urbana-Champaign, 2000.
The Monte Carlo Method. Popular Lectures in Mathematics.
ERIC Educational Resources Information Center
Sobol', I. M.
The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
A Theory of Monte Carlo Visibility Sampling RAVI RAMAMOORTHI
California at San Diego, University of
A Theory of Monte Carlo Visibility Sampling. Ravi Ramamoorthi, University of California, Berkeley. Monte Carlo sampling of visibility is often the main source of noise in rendered images in production rendering; indeed, it is common to use deterministic uniform sampling for the smoother shading effects.
Deterministic Simulation for Risk Management: Quasi-Monte Carlo beats Monte Carlo for Value at Risk
Papageorgiou, Anargyros
One of the most widely accepted concepts in market risk management, which covers movements in quantities such as interest rates, exchange rates, and commodity prices, is Value at Risk. A detailed description is given of the reasons for the tremendous growth in market risk management.
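The deterministic-beats-random claim can be illustrated on a toy one-dimensional integrand using a base-2 van der Corput low-discrepancy sequence (the Value-at-Risk computations in the paper are high-dimensional; this sketch only shows the mechanism):

```python
import math
import random

def van_der_corput(n, base=2):
    """n-th term of the base-b van der Corput low-discrepancy sequence:
    reflect the base-b digits of n about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, r = divmod(n, base)
        q += r / denom
    return q

def estimate(points, f):
    """Average f over the given sample points (equal-weight quadrature)."""
    return sum(f(x) for x in points) / len(points)

# Smooth toy integrand standing in for a payoff; its exact integral on [0,1] is 1.
f = lambda x: 2 * x
n = 4096
rng = random.Random(8)
mc = estimate([rng.random() for _ in range(n)], f)
qmc = estimate([van_der_corput(i + 1) for i in range(n)], f)
print(f"MC error {abs(mc - 1.0):.5f}, QMC error {abs(qmc - 1.0):.5f}")
```

For smooth integrands the low-discrepancy error decays roughly like log(n)/n, versus the 1/sqrt(n) of pseudorandom sampling.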
Sampling from a polytope and hard-disk Monte Carlo
Sebastian C. Kapfer; Werner Krauth
2013-01-21
The hard-disk problem, the statics and the dynamics of equal two-dimensional hard spheres in a periodic box, has had a profound influence on statistical and computational physics. Markov-chain Monte Carlo and molecular dynamics were first discussed for this model. Here we reformulate hard-disk Monte Carlo algorithms in terms of another classic problem, namely the sampling from a polytope. Local Markov-chain Monte Carlo, as proposed by Metropolis et al. in 1953, appears as a sequence of random walks in high-dimensional polytopes, while the moves of the more powerful event-chain algorithm correspond to molecular dynamics evolution. We determine the convergence properties of Monte Carlo methods in a special invariant polytope associated with hard-disk configurations, and the implications for convergence of hard-disk sampling. Finally, we discuss parallelization strategies for event-chain Monte Carlo and present results for a multicore implementation.
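The local Markov-chain Monte Carlo moves of Metropolis et al. (1953) referred to above can be sketched directly; the parameters here are arbitrary, and the more powerful event-chain algorithm is not shown.

```python
import math
import random

def metropolis_hard_disks(n_disks=16, radius=0.08, box=1.0, steps=20_000, seed=5):
    """Local Metropolis moves for hard disks in a periodic box: displace one
    disk at random and reject the move if it creates an overlap. Starts from
    a square lattice, which is overlap-free for these parameters."""
    rng = random.Random(seed)
    side = int(math.sqrt(n_disks))
    disks = [((i + 0.5) * box / side, (j + 0.5) * box / side)
             for i in range(side) for j in range(side)]

    def overlaps(p, q):
        dx = abs(p[0] - q[0]); dx = min(dx, box - dx)   # minimum-image distance
        dy = abs(p[1] - q[1]); dy = min(dy, box - dy)
        return dx * dx + dy * dy < (2 * radius) ** 2

    accepted = 0
    for _ in range(steps):
        k = rng.randrange(n_disks)
        x, y = disks[k]
        new = ((x + rng.uniform(-0.05, 0.05)) % box,
               (y + rng.uniform(-0.05, 0.05)) % box)
        if not any(overlaps(new, disks[j]) for j in range(n_disks) if j != k):
            disks[k] = new
            accepted += 1
    return disks, accepted / steps

disks, acc = metropolis_hard_disks()
print(f"acceptance rate {acc:.2f}")
```

In the paper's polytope language, each accepted move is one step of a random walk inside the high-dimensional region of overlap-free configurations.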
Entanglement spectroscopy using quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Chung, Chia-Min; Bonnes, Lars; Chen, Pochung; Läuchli, Andreas M.
2014-05-01
We present a numerical scheme to reconstruct a subset of the entanglement spectrum of quantum many-body systems using quantum Monte Carlo. The approach builds on the replica trick to evaluate particle number resolved traces of the first n powers of a reduced density matrix. From this information we reconstruct n entanglement spectrum levels using a polynomial root solver. We illustrate the power and limitations of the method by an application to the extended Bose-Hubbard model in one dimension where we are able to resolve the quasidegeneracy of the entanglement spectrum in the Haldane-insulator phase. In general, the method is able to reconstruct the largest few eigenvalues in each symmetry sector and typically performs better when the eigenvalues are not too different.
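For two entanglement levels the reconstruction from power-sum traces can be done in closed form rather than with a general polynomial root solver; a sketch via Newton's identities (the paper's method generalizes this to n levels):

```python
import math

def two_level_spectrum(p1, p2):
    """Recover two eigenvalues from the power sums p1 = tr(rho) and
    p2 = tr(rho^2). Newton's identities give the elementary symmetric
    polynomials e1 = p1 and e2 = (p1^2 - p2)/2, so the eigenvalues are
    the roots of x^2 - e1*x + e2 = 0."""
    e1, e2 = p1, (p1 * p1 - p2) / 2.0
    disc = math.sqrt(e1 * e1 - 4.0 * e2)
    return (e1 + disc) / 2.0, (e1 - disc) / 2.0

# Check against a known spectrum {0.8, 0.2}: feed in its exact power sums.
lam = two_level_spectrum(0.8 + 0.2, 0.8 ** 2 + 0.2 ** 2)
print(round(lam[0], 6), round(lam[1], 6))
```

In the Monte Carlo setting the power sums carry statistical noise, which is why closely spaced eigenvalues are harder to resolve.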
Entanglement Spectroscopy using Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Chung, Chia-Min; Bonnes, Lars; Chen, Pochung; Läuchli, Andreas
2014-03-01
We present a numerical scheme to reconstruct a subset of the entanglement spectrum of quantum many-body systems using quantum Monte Carlo. The approach builds on the replica trick to evaluate particle number resolved traces of the first n powers of a reduced density matrix. From this information we reconstruct n entanglement spectrum levels using a polynomial root solver. We illustrate the power and limitations of the method by an application to the extended Bose-Hubbard model in one dimension where we are able to resolve the quasi-degeneracy of the entanglement spectrum in the Haldane-Insulator phase. In general, the method is able to reconstruct the largest few eigenvalues in each symmetry sector and typically performs better when the eigenvalues are not too different.
Quantum Monte Carlo Calculations of Light Nuclei
Steven C. Pieper; R. B. Wiringa
2001-03-06
Accurate quantum Monte Carlo calculations of ground and low-lying excited states of light p-shell nuclei are now possible for realistic nuclear Hamiltonians that fit nucleon-nucleon scattering data. At present, results for more than 30 different (J^π; T) states, plus isobaric analogs, in A ≤ 8 nuclei have been obtained with an excellent reproduction of the experimental energy spectrum. These microscopic calculations show that nuclear structure, including both single-particle and clustering aspects, can be explained starting from elementary two- and three-nucleon interactions. Various density and momentum distributions, electromagnetic form factors, and spectroscopic factors have also been computed, as well as electroweak capture reactions of astrophysical interest.
Monte Carlo applications to acoustical field solutions
NASA Technical Reports Server (NTRS)
Haviland, J. K.; Thanedar, B. D.
1973-01-01
The Monte Carlo technique is proposed for the determination of the acoustical pressure-time history at chosen points in a partial enclosure, the central idea of this technique being the tracing of acoustical rays. A statistical model is formulated and an algorithm for pressure is developed, the conformity of which is examined by two approaches and is shown to give the known results. The concepts that are developed are applied to the determination of the transient field due to a sound source in a homogeneous medium in a rectangular enclosure with perfect reflecting walls, and the results are compared with those presented by Mintzer based on the Laplace transform approach, as well as with a normal mode solution.
Monte Carlo Sampling in Fractal Landscapes
Jorge C. Leitão; João M. Viana Parente Lopes; Eduardo G. Altmann
2013-05-30
We propose a flat-histogram Monte Carlo method to efficiently sample fractal landscapes such as escape time functions of open chaotic systems. This is achieved by using a random-walk step which depends on the height of the landscape via the largest Lyapunov exponent of the associated chaotic system. By generalizing the Wang-Landau algorithm, we obtain a method which simultaneously constructs the density of states (escape time distribution) and the correct step-length distribution. As a result, averages are obtained in polynomial computational time, a dramatic improvement over the exponential scaling of traditional uniform sampling. Our results are not limited by the dimensionality of the phase space and are confirmed numerically for dimensions as large as 30.
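The generalized Wang-Landau logic can be illustrated on a toy discrete landscape (the sum of two dice, whose exact density of states is known) rather than a chaotic escape-time function; the flat-histogram update rule is the same.

```python
import math
import random

def wang_landau_dos(seed=6):
    """Wang-Landau estimate of the density of states g(E) for the toy
    'landscape' E = d1 + d2 of two dice (exact g: 1,2,3,4,5,6,5,4,3,2,1)."""
    rng = random.Random(seed)
    ln_g = {E: 0.0 for E in range(2, 13)}
    f = 1.0                                    # modification factor for ln g
    state = [1, 1]
    while f > 1e-3:
        hist = {E: 0 for E in range(2, 13)}
        for _ in range(50_000):
            i = rng.randrange(2)
            old_die, e_old = state[i], sum(state)
            state[i] = rng.randint(1, 6)       # propose re-rolling one die
            e_new = sum(state)
            # accept with probability min(1, g(E_old)/g(E_new))
            if rng.random() >= math.exp(min(0.0, ln_g[e_old] - ln_g[e_new])):
                state[i], e_new = old_die, e_old   # reject: revert the move
            ln_g[e_new] += f                   # always update the visited level
            hist[e_new] += 1
        if min(hist.values()) > 0.8 * sum(hist.values()) / len(hist):
            f /= 2                             # histogram flat enough: refine
    return ln_g

ln_g = wang_landau_dos()
ratio = math.exp(ln_g[7] - ln_g[2])            # exact value of g(7)/g(2) is 6
print(f"g(7)/g(2) ~ {ratio:.1f}")
```

In the paper this same biasing makes rarely visited regions of the fractal landscape (long escape times) as accessible as common ones, turning exponential sampling cost into polynomial cost.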
Monte Carlo Simulation of Endlinking Oligomers
NASA Technical Reports Server (NTRS)
Hinkley, Jeffrey A.; Young, Jennifer A.
1998-01-01
This report describes initial efforts to model the endlinking reaction of phenylethynyl-terminated oligomers. Several different molecular weights were simulated using the Bond Fluctuation Monte Carlo technique on a 20 x 20 x 20 unit lattice with periodic boundary conditions. After a monodisperse "melt" was equilibrated, chain ends were linked whenever they came within the allowed bond distance. Ends remained reactive throughout, so that multiple links were permitted. Even under these very liberal crosslinking assumptions, geometrical factors limited the degree of crosslinking. Average crosslink functionalities were 2.3 to 2.6; surprisingly, they did not depend strongly on the chain length. These results agreed well with the degrees of crosslinking inferred from experiment in a cured phenylethynyl-terminated polyimide oligomer.
Hybrid algorithms in quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kim, Jeongnim; Esler, Kenneth P.; McMinis, Jeremy; Morales, Miguel A.; Clark, Bryan K.; Shulenburger, Luke; Ceperley, David M.
2012-12-01
With advances in algorithms and growing computing power, quantum Monte Carlo (QMC) methods have become a leading contender for high accuracy calculations for the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of an SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element has not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on current HPC platforms characterized by various memory and communication hierarchies.
Resist develop prediction by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Sohn, Dong-Soo; Jeon, Kyoung-Ah; Sohn, Young-Soo; Oh, Hye-Keun
2002-07-01
Various resist develop models have been suggested to express the phenomena, from the pioneering work of Dill's model in 1975 to the recent Shipley enhanced notch model. The statistical Monte Carlo method can be applied to processes such as development and post-exposure bake. The motions of the developer during the development process were traced by using this method. We have considered that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with other published work. This study can be helpful for the development of new photoresists and developers that can be used to pattern device features smaller than 100 nm.
Exploring Theory Space with Monte Carlo Reweighting
James S. Gainer; Joseph Lykken; Konstantin T. Matchev; Stephen Mrenna; Myeonghun Park
2014-12-25
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. In particular, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
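The core reweighting step is ordinary importance reweighting: each event generated under the benchmark model receives a weight proportional to the ratio of the new and old model densities. A self-contained toy sketch with one-dimensional "events" (real applications reweight by ratios of matrix elements):

```python
import math
import random
import statistics

def reweight(samples, logp_old, logp_new):
    """Reuse events generated under an 'old' benchmark model to estimate
    expectations under a 'new' model: weight each event by
    p_new(x)/p_old(x), then self-normalize."""
    w = [math.exp(logp_new(x) - logp_old(x)) for x in samples]
    total = sum(w)
    return lambda f: sum(wi * f(x) for wi, x in zip(w, samples)) / total

# Toy stand-in for a fully simulated benchmark sample: x ~ N(0, 1).
rng = random.Random(7)
samples = [rng.gauss(0.0, 1.0) for _ in range(200_000)]

logp_old = lambda x: -0.5 * x * x              # benchmark: N(0,1), up to a constant
logp_new = lambda x: -0.5 * (x - 0.5) ** 2     # hypothetical new model point: N(0.5,1)
expect = reweight(samples, logp_old, logp_new)
print(f"mean under new model ~ {expect(lambda x: x):.3f}")
```

The technique works well as long as the new model's distribution is well covered by the benchmark sample; large weight variance signals that the benchmark point is too far away.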
Exploring theory space with Monte Carlo reweighting
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. In particular, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
Quantum Ice : a quantum Monte Carlo study
Nic Shannon; Olga Sikora; Frank Pollmann; Karlo Penc; Peter Fulde
2011-12-13
Ice states, in which frustrated interactions lead to a macroscopic ground-state degeneracy, occur in water ice, in problems of frustrated charge order on the pyrochlore lattice, and in the family of rare-earth magnets collectively known as spin ice. Of particular interest at the moment are "quantum spin ice" materials, where large quantum fluctuations may permit tunnelling between a macroscopic number of different classical ground states. Here we use zero-temperature quantum Monte Carlo simulations to show how such tunnelling can lift the degeneracy of a spin or charge ice, stabilising a unique "quantum ice" ground state --- a quantum liquid with excitations described by the Maxwell action of 3+1-dimensional quantum electrodynamics. We further identify a competing ordered "squiggle" state, and show how both squiggle and quantum ice states might be distinguished in neutron scattering experiments on a spin ice material.
Radiation Modeling with Direct Simulation Monte Carlo
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Hassan, H. A.
1991-01-01
Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.
Parametric Learning and Monte Carlo Optimization
Wolpert, David H
2007-01-01
This paper uncovers and explores the close relationship between Monte Carlo Optimization of a parametrized integral (MCO), Parametric machine-Learning (PL), and `blackbox' or `oracle'-based optimization (BO). We make four contributions. First, we prove that MCO is mathematically identical to a broad class of PL problems. This identity potentially provides a new application domain for all broadly applicable PL techniques: MCO. Second, we introduce immediate sampling, a new version of the Probability Collectives (PC) algorithm for blackbox optimization. Immediate sampling transforms the original BO problem into an MCO problem. Accordingly, by combining these first two contributions, we can apply all PL techniques to BO. In our third contribution we validate this way of improving BO by demonstrating that cross-validation and bagging improve immediate sampling. Finally, conventional MC and MCO procedures ignore the relationship between the sample point locations and the associated values of the integrand; only th...
Quantum ice: a quantum Monte Carlo study.
Shannon, Nic; Sikora, Olga; Pollmann, Frank; Penc, Karlo; Fulde, Peter
2012-02-10
Ice states, in which frustrated interactions lead to a macroscopic ground-state degeneracy, occur in water ice, in problems of frustrated charge order on the pyrochlore lattice, and in the family of rare-earth magnets collectively known as spin ice. Of particular interest at the moment are "quantum spin-ice" materials, where large quantum fluctuations may permit tunnelling between a macroscopic number of different classical ground states. Here we use zero-temperature quantum Monte Carlo simulations to show how such tunnelling can lift the degeneracy of a spin or charge ice, stabilizing a unique "quantum-ice" ground state-a quantum liquid with excitations described by the Maxwell action of (3+1)-dimensional quantum electrodynamics. We further identify a competing ordered squiggle state, and show how both squiggle and quantum-ice states might be distinguished in neutron scattering experiments on a spin-ice material. PMID:22401117
Methods for Monte Carlo simulations of biomacromolecules
Vitalis, Andreas; Pappu, Rohit V.
2010-01-01
The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse graining strategies. PMID:20428473
Hybrid algorithms in quantum Monte Carlo
Esler, Kenneth P; Mcminis, Jeremy; Morales, Miguel A; Clark, Bryan K.; Shulenburger, Luke; Ceperley, David M
2012-01-01
With advances in algorithms and growing computing power, quantum Monte Carlo (QMC) methods have become a leading contender for high-accuracy calculations of the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores per SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per processing element have not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze their performance on current HPC platforms characterized by various memory and communication hierarchies.
Monte Carlo approaches to effective field theories
Carlson, J.; Schmidt, K.E.
1991-01-01
In this paper, we explore the application of continuum Monte Carlo methods to effective field theory models. Effective field theories, in this context, are those in which a Fock space decomposition of the state is useful. These problems arise both in nuclear and condensed matter physics. In nuclear physics, much work has been done on effective field theories of mesons and baryons. While the theories are not fundamental, they should be able to describe nuclear properties at low energy and momentum scales. After describing the methods, we solve two simple scalar field theory problems: the polaron and two nucleons interacting through scalar meson exchange. The methods presented here are rather straightforward extensions of methods used to solve quantum mechanics problems. Monte Carlo methods are used to avoid the truncation inherent in a Tamm-Dancoff approach and its associated difficulties. Nevertheless, the methods will be most valuable when the Fock space decomposition of the states is useful. Hence, while they are not intended for ab initio studies of QCD, they may prove valuable in studies of light nuclei, or for systems of interacting electrons and phonons. In these problems a Fock space decomposition can be used to reduce the number of degrees of freedom and to retain the rotational symmetries exactly. The problems we address here are comparatively simple, but offer useful initial tests of the method. We present results for the polaron and two non-relativistic nucleons interacting through scalar meson exchange. In each case, it is possible to integrate out the boson degrees of freedom exactly, and obtain a retarded form of the action that depends only upon the fermion paths. Here we keep the explicit bosons, though, since we would like to retain information about the boson components of the states and it will be necessary to keep these components in order to treat non-scalar or interacting bosonic fields.
Crossing the mesoscale no-mans land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina; Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander; Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
Recent advances and future prospects for Monte Carlo
Brown, Forrest B
2010-01-01
The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
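The fission matrix at the heart of the acceleration method is an operator, estimated from tallies, whose dominant eigenpair gives the multiplication factor and the fundamental source shape. A minimal sketch of that eigenvalue solve, with a made-up 2x2 matrix standing in for tallied fission-matrix elements, might be:

```python
import numpy as np

def power_iteration(F, n_iter=200, tol=1e-10):
    """Power iteration for the dominant eigenpair of a fission matrix F.
    The source vector s is kept normalized to sum to 1, so sum(F @ s)
    converges to the dominant eigenvalue (k-effective in this analogy)."""
    s = np.ones(F.shape[0]) / F.shape[0]
    k = 1.0
    for _ in range(n_iter):
        new = F @ s
        k_new = new.sum()          # Rayleigh-style eigenvalue estimate
        new /= new.sum()           # renormalize the source shape
        if abs(k_new - k) < tol and np.allclose(new, s, atol=tol):
            s, k = new, k_new
            break
        s, k = new, k_new
    return k, s
```

In an accelerated calculation the entries of F would be tallied during the Monte Carlo run; here a small fixed matrix suffices to show the solve itself.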
Spectral backward Monte Carlo method for surface infrared image simulation
NASA Astrophysics Data System (ADS)
Sun, Haifeng; Xia, Xinlin; Sun, Chuang; Chen, Xue
2014-11-01
The surface infrared radiation is an important contributor to the infrared image of an airplane. The Monte Carlo method for infrared image calculation is well suited to the complex geometry of targets like airplanes. The backward Monte Carlo method is preferable to the forward Monte Carlo method for the usually long distance between targets and the detector. In analogy to non-gray absorbing media, a random-number relation is developed for the radiation of a spectral surface. In the backward Monte Carlo method, one random number that determines the wavelength (or wave number) may yield different wave numbers for the target's surface elements on the track of a photon bundle. By manipulating the densities of photon bundles in arbitrarily small intervals near each wave number, all the wavelengths corresponding to one random number on the target's surface elements along the track of the photon bundle are kept the same, preserving the energy balance of the photon bundle. This model, together with the energy partition model, is incorporated into the backward Monte Carlo method to form the spectral backward Monte Carlo method. The developed method is first used to calculate the infrared images of a simple configuration with two gray spectral bands, and its efficiency is validated by comparing its results with those of the non-spectral backward Monte Carlo method. The validated spectral backward Monte Carlo method is then used to simulate the infrared image of the SDM airplane model with a spectral surface, and the distribution of received infrared radiation flux over the detector pixels is analyzed.
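The random-number relation mentioned above maps a single uniform deviate onto an emission wavelength through the cumulative spectral emissive power. A toy version for a surface with two gray bands could look like this; the band weights, edges, and function name are invented, and real weights would come from emissivity times the blackbody band fraction.

```python
import random

def sample_band(bands, rng):
    """Pick an emission wavelength via the cumulative random-number relation.
    bands: list of (weight, lo_um, hi_um) tuples, weight standing in for
    emissivity times the blackbody band fraction (precomputed).
    Returns a wavelength in micrometres, uniform within the chosen band."""
    total = sum(w for w, _, _ in bands)
    r = rng.random() * total
    acc = 0.0
    for w, lo, hi in bands:
        acc += w
        if r <= acc:
            return lo + rng.random() * (hi - lo)
    # Guard against floating-point roundoff: fall back to the last band.
    w, lo, hi = bands[-1]
    return lo + rng.random() * (hi - lo)
```

Sampling many bundles and binning the resulting wavelengths should reproduce the band weights, which is the balance property the abstract's manipulation of photon-bundle densities is designed to preserve.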
Continuous-time quantum Monte Carlo impurity solvers
NASA Astrophysics Data System (ADS)
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital, single site) dynamical mean-field problems with arbitrary densities of states. Program summary: Program title: dmft. Catalogue identifier: AEIL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: ALPS LIBRARY LICENSE version 1.1. No. of lines in distributed program, including test data, etc.: 899 806. No. of bytes in distributed program, including test data, etc.: 32 153 916. Distribution format: tar.gz. Programming language: C++. Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher). RAM: 10 MB-1 GB. Classification: 7.3. External routines: ALPS [1], BLAS/LAPACK, HDF5. Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons.
They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.
Discrete Diffusion Monte Carlo for Electron Thermal Transport
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory
2014-10-01
The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.
Quantum Monte Carlo Algorithms for Diagrammatic Vibrational Structure Calculations
NASA Astrophysics Data System (ADS)
Hermes, Matthew; Hirata, So
2015-06-01
Convergent hierarchies of theories for calculating many-body vibrational ground and excited-state wave functions, such as Møller-Plesset perturbation theory or coupled cluster theory, tend to rely on matrix-algebraic manipulations of large, high-dimensional arrays of anharmonic force constants, tasks which require large amounts of computer storage space and which are very difficult to implement in a parallel-scalable fashion. On the other hand, existing quantum Monte Carlo (QMC) methods for vibrational wave functions tend to lack robust techniques for obtaining excited-state energies, especially for large systems. By exploiting analytical identities for matrix elements of position operators in a harmonic oscillator basis, we have developed stochastic implementations of the size-extensive vibrational self-consistent field (MC-XVSCF) and size-extensive vibrational Møller-Plesset second-order perturbation (MC-XVMP2) theories which do not require storing the potential energy surface (PES). The programmable equations of MC-XVSCF and MC-XVMP2 take the form of a small number of high-dimensional integrals evaluated using Metropolis Monte Carlo techniques. The associated integrands require independent evaluations of only the value, not the derivatives, of the PES at many points, a task which is trivial to parallelize. However, unlike existing vibrational QMC methods, MC-XVSCF and MC-XVMP2 can calculate anharmonic frequencies directly, rather than as a small difference between two noisy total energies, and do not require user-selected coordinates or nodal surfaces. MC-XVSCF and MC-XVMP2 can also directly sample the PES in a given approximation without analytical or grid-based approximations, enabling us to quantify the errors induced by such approximations.
NASA Technical Reports Server (NTRS)
Queen, Eric M.; Omara, Thomas M.
1990-01-01
A realization of a stochastic atmosphere model for use in simulations is presented. The model provides pressure, density, temperature, and wind velocity as a function of latitude, longitude, and altitude, and is implemented in a three degree of freedom simulation package. This implementation is used in the Monte Carlo simulation of an aeroassisted orbital transfer maneuver and results are compared to those of a more traditional approach.
Combined Langevin dynamics/Monte-Carlo simulations of the non-equilibrium ferrofluid remagnetization
NASA Astrophysics Data System (ADS)
Berkov, D. V.; Gorn, N.; Stock, D.
2004-05-01
We present a powerful method for simulations of fast remagnetization processes in ferrofluids which combines the stochastic (Langevin) dynamics and Monte-Carlo method. Our Langevin equations for the description of ferrofluid dynamics include both the mechanical (translational and rotational particle motion) and magnetic (rotation of the magnetic moment with respect to the particle) degrees of freedom. As an application example we present new physical results concerning the dependence of the magnetization relaxation in ferrofluids after switching off the external field.
Accelerated Monte Carlo Methods for Coulomb Collisions
NASA Astrophysics Data System (ADS)
Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce
2014-03-01
We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution to collisional plasma problems from O(ε⁻³), for the standard state-of-the-art Langevin and binary collision algorithms, to a theoretically optimal O(ε⁻²) scaling, when used in conjunction with an underlying Milstein discretization to the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
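The key coupling trick of MLMC, reusing the same Brownian increments on the coarse and fine grids so that the level correction has small variance, can be sketched for a toy Ornstein-Uhlenbeck SDE. This is a two-level illustration only, with plain Euler-Maruyama standing in for the Milstein scheme; all names and parameters are invented.

```python
import numpy as np

def mlmc_two_level(n_coarse, n_pairs, T=1.0, x0=1.0, theta=1.0, sigma=0.3,
                   m0=16, seed=0):
    """Two-level MLMC estimate of E[X_T] for dX = -theta*X dt + sigma dW.

    E[P_fine] is split as E[P_coarse] + E[P_fine - P_coarse]; the correction
    pairs reuse the same Brownian increments on both grids, so only a few
    expensive fine-grid samples are needed."""
    rng = np.random.default_rng(seed)
    dt0, dt1 = T / m0, T / (2 * m0)

    # Level 0: many cheap coarse Euler-Maruyama paths (m0 steps).
    x = np.full(n_coarse, x0)
    for _ in range(m0):
        x += -theta * x * dt0 + sigma * np.sqrt(dt0) * rng.standard_normal(n_coarse)
    level0 = x.mean()

    # Level 1: few coupled coarse/fine pairs (fine grid: 2*m0 steps).
    xc = np.full(n_pairs, x0)
    xf = np.full(n_pairs, x0)
    for _ in range(m0):
        dw1 = np.sqrt(dt1) * rng.standard_normal(n_pairs)
        dw2 = np.sqrt(dt1) * rng.standard_normal(n_pairs)
        xf += -theta * xf * dt1 + sigma * dw1            # two fine steps ...
        xf += -theta * xf * dt1 + sigma * dw2
        xc += -theta * xc * dt0 + sigma * (dw1 + dw2)    # ... one coarse step, same noise
    return level0 + (xf - xc).mean()
```

The exact answer for this SDE is E[X_T] = x0 * exp(-theta * T), which the estimator should reproduce to within discretization bias plus statistical noise.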
Reverse Monte Carlo (RMC) modelling
NASA Astrophysics Data System (ADS)
McGreevy, R. L.
2003-09-01
The Reverse Monte Carlo (RMC) modelling technique is a general method for structural modelling based on a set of experimental data. Being very flexible, the method can be applied to many types of data. To date these applications include: neutron diffraction (including isotopic substitution), X-ray diffraction (including anomalous scattering), electron diffraction, NMR (magic-angle and second-moment techniques) and EXAFS. The systems studied are equally varied: liquids, glasses, polymers, crystals and magnetic materials, for example. This course presents the basics of the RMC method and points out some common misconceptions. Emphasis is placed on the fact that the structural models obtained by RMC are neither 'unique' nor 'exact'; nevertheless, they are often useful for understanding either the structure of the system or the relations between structure and other physical properties.
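The core RMC loop (propose a random atomic move, recompute the model data set, accept or reject by the change in chi-squared against experiment) can be sketched in a deliberately tiny 1-D toy: particles on a line, with a pair-distance histogram standing in for diffraction data. Every name and parameter below is invented for illustration.

```python
import numpy as np

def rmc_fit(target_hist, bins, n_particles=50, box=10.0, n_steps=5000,
            max_move=0.5, sigma=0.05, seed=1):
    """Toy 1-D Reverse Monte Carlo: move particles to match a target
    pair-distance histogram. chi^2 = sum((model - target)^2) / sigma^2;
    moves that raise chi^2 are accepted with probability exp(-d_chi2 / 2)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, box, n_particles)

    def hist(p):
        d = np.abs(p[:, None] - p[None, :])[np.triu_indices(n_particles, 1)]
        h, _ = np.histogram(d, bins=bins)
        return h / h.sum()

    chi2 = np.sum((hist(pos) - target_hist) ** 2) / sigma**2
    for _ in range(n_steps):
        i = rng.integers(n_particles)
        old = pos[i]
        pos[i] = (old + rng.uniform(-max_move, max_move)) % box
        new_chi2 = np.sum((hist(pos) - target_hist) ** 2) / sigma**2
        if new_chi2 < chi2 or rng.random() < np.exp(-(new_chi2 - chi2) / 2):
            chi2 = new_chi2          # accept
        else:
            pos[i] = old             # reject: restore the old position
    return pos, chi2
```

As the abstract stresses, the configuration this produces is neither unique nor exact; it is merely one arrangement consistent, within sigma, with the target data.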
Error modes in implicit Monte Carlo
Martin, William Russell,; Brown, F. B.
2001-01-01
The Implicit Monte Carlo (IMC) method of Fleck and Cummings [1] has been used for years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Larsen and Mercier [2] have shown that the IMC method violates a maximum principle that is satisfied by the exact solution to the radiative transfer equation. Except for [2] and related papers regarding the maximum principle, there have been no other published results regarding the analysis of errors or convergence properties for the IMC method. This work presents an exact error analysis for the IMC method by using the analytical solutions for infinite medium geometry (0-D) to determine closed form expressions for the errors. The goal is to gain insight regarding the errors inherent in the IMC method by relating the exact 0-D errors to multi-dimensional geometry. Additional work (not described herein) has shown that adding a leakage term (i.e., a 'buckling' term) to the 0-D equations has relatively little effect on the IMC errors analyzed in this paper, so that the 0-D errors should provide useful guidance for the errors observed in multi-dimensional simulations.
Improved method for implicit Monte Carlo
Brown, F. B.; Martin, W. R.
2001-01-01
The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed in [3], two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.
Realistic Monte Carlo Simulation of PEN Apparatus
NASA Astrophysics Data System (ADS)
Glaser, Charles; PEN Collaboration
2015-04-01
The PEN collaboration undertook to measure the π⁺ → e⁺ν_e(γ) branching ratio with a relative uncertainty of 5 × 10⁻⁴ or less at the Paul Scherrer Institute. This observable is highly susceptible to small non-(V-A) contributions, i.e., non-Standard-Model physics. The detector system included a beam counter, a mini TPC for beam tracking, an active degrader and stopping target, MWPCs and a plastic scintillator hodoscope for particle tracking and identification, and a spherical CsI EM calorimeter. GEANT4 Monte Carlo simulation is integral to the analysis, as it is used to generate fully realistic events for all pion and muon decay channels. The simulated events are constructed so as to match the pion beam profiles, divergence, and momentum distribution. Ensuring the placement of individual detector components at the sub-millimeter level and proper construction of active target waveforms and associated noise enables us to more fully understand temporal and geometrical acceptances as well as energy, time, and positional resolutions and calibrations in the detector system. This ultimately leads to reliable discrimination of background events, thereby improving cut-based or multivariate branching ratio extraction. Work supported by NSF Grants PHY-0970013, 1307328, and others.
Atomistic Monte Carlo Simulation of Lipid Membranes
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314
DETERMINING UNCERTAINTY IN PHYSICAL PARAMETER MEASUREMENTS BY MONTE CARLO SIMULATION
A statistical approach, often called Monte Carlo Simulation, has been used to examine propagation of error with measurement of several parameters important in predicting environmental transport of chemicals. These parameters are vapor pressure, water solubility, octanol-water par...
Low variance methods for Monte Carlo simulation of phonon transport
Péraud, Jean-Philippe M. (Jean-Philippe Michel)
2011-01-01
Computational studies in kinetic transport are of great use in micro and nanotechnologies. In this work, we focus on Monte Carlo methods for phonon transport, intended for studies in microscale heat transfer. After reviewing ...
Variance Reduction Techniques for Implicit Monte Carlo Simulations
Landman, Jacob Taylor
2013-09-19
The Implicit Monte Carlo (IMC) method is widely used for simulating thermal radiative transfer and solving the radiation transport equation. During an IMC run a grid network is constructed and particles are sourced into the problem to simulate...
Combinatorial nuclear level density by a Monte Carlo method
N. Cerf
1993-09-14
We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising to determine accurate level densities in a large energy range for nuclear reaction calculations.
An Analysis Tool for Flight Dynamics Monte Carlo Simulations
Restrepo, Carolina 1982-
2011-05-20
and analysis work to understand vehicle operating limits and identify circumstances that lead to mission failure. A Monte Carlo simulation approach that varies a wide range of physical parameters is typically used to generate thousands of test cases...
COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT
W. R. MARTIN; F. B. BROWN
2001-03-01
Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an ''exact'' method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.
Bayesian inverse problems with Monte Carlo forward models
Bal, Guillaume
The full application of Bayesian inference to inverse problems requires exploration of a posterior distribution that typically does not possess a standard form. In this context, Markov chain Monte Carlo (MCMC) methods are ...
Monte Carlo simulations in X-ray imaging
NASA Astrophysics Data System (ADS)
Giersch, Jürgen; Durst, Jürgen
2008-06-01
Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.
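At the core of such photon-transport simulations is the sampling of exponential free paths and collision outcomes. A deliberately minimal 1-D slab sketch follows; the toy interaction model (absorb or forward-scatter only, no angular deflection) and all names are invented for illustration.

```python
import math
import random

def slab_transmission(mu_total, p_absorb, thickness, n_photons=100_000, seed=3):
    """Pencil-beam photons through a 1-D slab. Free paths are sampled as
    -ln(1 - R)/mu_total from the exponential attenuation law; at each
    collision the photon is absorbed with probability p_absorb, otherwise
    it forward-scatters and keeps going (toy model, no angle change).
    Returns the fraction of photons that exit the far side."""
    rng = random.Random(seed)
    exited = 0
    for _ in range(n_photons):
        x = 0.0
        while True:
            x += -math.log(1.0 - rng.random()) / mu_total  # exponential free path
            if x >= thickness:
                exited += 1
                break
            if rng.random() < p_absorb:
                break  # photon absorbed inside the slab
    return exited / n_photons
```

With p_absorb = 1 every collision kills the photon, so the transmitted fraction should approach exp(-mu_total * thickness), the analytic uncollided transmission; with p_absorb = 0 every photon eventually exits.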
A modified Monte Carlo model for the ionospheric heating rates
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.
1972-01-01
A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced at distances apart corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
Monte Carlo methods for light propagation in biological tissues
2014-01-08
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
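A standard example of the unbiased weight-control machinery used with conventional per-particle tallies is Russian roulette: low-weight particles are either killed or kept with boosted weight, so the expected contribution to any tally is unchanged. A minimal sketch, with arbitrary cutoff and survival probability:

```python
import random

def roulette(weights, cutoff=0.1, survive_p=0.5, seed=5):
    """Russian roulette: particles below the weight cutoff are killed with
    probability 1 - survive_p; survivors have their weight divided by
    survive_p. The expected total weight is therefore preserved, which is
    the unbiasedness property variance reduction schemes must maintain."""
    rng = random.Random(seed)
    out = []
    for w in weights:
        if w >= cutoff:
            out.append(w)                 # heavy particles pass through
        elif rng.random() < survive_p:
            out.append(w / survive_p)     # survivor: weight boosted
        # else: particle killed, contributes nothing
    return out
```

For non-Boltzmann tallies, as the abstract notes, such per-particle reasoning is no longer sufficient on its own, which is precisely why the paper's dedicated approaches are needed.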
Parallel Fission Bank Algorithms in Monte Carlo Criticality Calculations
Romano, Paul Kollath
In this work we describe a new method for parallelizing the source iterations in a Monte Carlo criticality calculation. Instead of having one global fission bank that needs to be synchronized, as is traditionally done, our ...
OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.
2014-03-31
The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.
2013-07-01
Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
Shift: A Massively Parallel Monte Carlo Radiation Transport Package
Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P
2015-01-01
This paper discusses the massively parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.
Multiscale Monte Carlo equilibration: pure Yang-Mills theory
Michael G. Endres; Richard C. Brower; William Detmold; Kostas Orginos; Andrew V. Pochinsky
2015-10-15
We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Development of Monte Carlo Capability for Orion Parachute Simulations
NASA Technical Reports Server (NTRS)
Moore, James W.
2011-01-01
Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. 
The Monte Carlo tool described in this paper has proven useful in planning several Crew Exploration Vehicle parachute tests.
Monte Carlo methods and applications in nuclear physics
Carlson, J.
1990-01-01
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Pevey, Ronald E.
2005-09-15
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
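The trade-off described above can be illustrated with a short sketch of the USL comparison (the k-effective values, uncertainties, and two-sigma margin below are hypothetical, not taken from the paper):

```python
def acceptably_subcritical(k_calc, sigma_calc, usl, n_sigmas=2.0):
    """Pad the Monte Carlo k-effective estimate by n_sigmas statistical
    standard deviations before comparing it with the upper subcritical
    limit (USL); a larger sigma_calc eats into the usable margin."""
    return k_calc + n_sigmas * sigma_calc < usl

# Hypothetical benchmark outcome with a USL of 0.95:
print(acceptably_subcritical(k_calc=0.943, sigma_calc=0.002, usl=0.95))  # True
print(acceptably_subcritical(k_calc=0.943, sigma_calc=0.005, usl=0.95))  # False
```

The second call shows the effect the abstract analyzes: the same calculated k-effective can fail the acceptance test purely because its statistical standard deviation is larger.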
Extending Diffusion Monte Carlo to Internal Coordinates
NASA Astrophysics Data System (ADS)
Petit, Andrew S.; McCoy, Anne B.
2013-06-01
Diffusion Monte Carlo (DMC) is a powerful technique for studying the properties of molecules and clusters that undergo large-amplitude, zero-point vibrational motions. However, the overall applicability of the method is limited by the need to work in Cartesian coordinates and therefore have available a full-dimensional potential energy surface (PES). As a result, the development of a reduced-dimensional DMC methodology has the potential to significantly extend the range of problems that DMC can address by allowing the calculations to be performed in the subset of coordinates that is physically relevant to the questions being asked, thereby eliminating the need for a full-dimensional PES. As a first step towards this goal, we describe here an internal coordinate extension of DMC that places no constraints on the choice of internal coordinates other than requiring them all to be independent. Using H_3^+ and its isotopologues as model systems, we demonstrate that the methodology is capable of successfully describing the ground state properties of highly fluxional molecules as well as, in conjunction with the fixed-node approximation, the ν=1 vibrationally excited states. The calculations of the fundamentals of H_3^+ and its isotopologues provided general insights into the properties of the nodal surfaces of vibrationally excited states. Specifically, we will demonstrate that analysis of ground state probability distributions can point to the set of coordinates that are less strongly coupled and therefore more suitable for use as nodal coordinates in the fixed-node approximation. In particular, we show that nodal surfaces defined in terms of the curvilinear normal mode coordinates are reasonable for the fundamentals of H_2D^+ and D_2H^+ despite both molecules being highly fluxional.
Monte Carlo sampling of fission multiplicity.
Hendricks, J. S.
2004-01-01
Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly ³He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k_eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
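The contrast between the two samplers discussed above can be sketched in a few lines (a toy illustration, not the paper's implementation; the averages and Gaussian width are made up):

```python
import random

def sample_multiplicity_split(nu_bar, rng):
    """Traditional sampling: with nu_bar = 2.7, return 3 with probability
    0.7 and 2 with probability 0.3, preserving the mean exactly."""
    lo = int(nu_bar)
    frac = nu_bar - lo
    return lo + 1 if rng.random() < frac else lo

def sample_multiplicity_gaussian(nu_bar, width, rng):
    """Gaussian sampling with naive rejection of negative draws; discarding
    the negative tail biases the mean high, which is the problem the two
    correction methods in the abstract address."""
    while True:
        nu = rng.gauss(nu_bar, width)
        if nu >= 0.0:
            return nu

rng = random.Random(1)
n = 100_000
mean_split = sum(sample_multiplicity_split(2.7, rng) for _ in range(n)) / n
mean_gauss = sum(sample_multiplicity_gaussian(0.5, 1.0, rng) for _ in range(n)) / n
print(abs(mean_split - 2.7) < 0.01)  # True: the split sampler is unbiased
print(mean_gauss > 0.6)              # True: truncation inflates the low mean
```

The second sampler illustrates the spontaneous-fission case the abstract warns about: when the width is comparable to the (low) average, rejecting the negative tail noticeably inflates the sampled mean.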
Monte Carlo study of microdosimetric diamond detectors.
Solevi, Paola; Magrin, Giulio; Moro, Davide; Mayer, Ramona
2015-09-21
Ion-beam therapy provides a high dose conformity and increased radiobiological effectiveness with respect to conventional radiation-therapy. Strict constraints on the maximum uncertainty on the biological weighted dose and consequently on the biological weighting factor require the determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However the experimental determination of radiation quality, in particular for an internal target, is not simple and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently chemical vapor deposition (CVD) diamond detectors have been suggested as ion-beam therapy microdosimeters. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However the sensitive volume of solid state detectors significantly deviates from conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a deep study of the secondary to primary radiation, of the wall effects and of the impact of the orientation of the detector with respect to the radiation field. The present work intends to study through Monte Carlo simulations the impact of the detector geometry on the determination of radiation quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy L and the lineal energy y are investigated for different detector cross sections, by varying the particle type (carbon ions and protons) and its energy. PMID:26309235
Monte-Carlo simulation of Callisto's exosphere
NASA Astrophysics Data System (ADS)
Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.
2015-12-01
We model Callisto's exosphere based on its ice as well as non-ice surface via the use of a Monte Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula, and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implement the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sunlit side, their density profiles (with the exception of H and H2) decrease much more rapidly than those of the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.
Thermally driven atmospheric escape: Monte Carlo simulations for Titan's atmosphere
Zhigilei, Leonid V.
Recent models of Titan's upper atmosphere were used to reproduce ... the region of Titan's atmosphere where the gas changes from being dominated by collisions to being dominated ...
Kinetic Monte Carlo approach to modeling dislocation mobility
Cai, Wei
A kinetic Monte Carlo (kMC) approach to modeling dislocation motion, directly linking the energetics of dislocation kinks ... The case of planar glide of a screw dislocation in Si, an ideal test-bed for our method, is first discussed, followed ...
Particle Physics Phenomenology 1. Introduction and Monte Carlo techniques
Sjöstrand, Torbjörn
Lecture outline: 1. Introduction and Monte Carlo techniques; 2. Phase space and matrix elements; 3. Evolution equations and final-state showers. A tour to Monte Carlo ... because Einstein was wrong: God does throw dice! Quantum mechanics: trace evolution of event structure.
Monte Carlo role in radiobiological modelling of radiotherapy outcomes.
El Naqa, Issam; Pater, Piotr; Seuntjens, Jan
2012-06-01
Radiobiological models are essential components of modern radiotherapy. They are increasingly applied to optimize and evaluate the quality of different treatment planning modalities. They are frequently used in designing new radiotherapy clinical trials by estimating the expected therapeutic ratio of new protocols. In radiobiology, the therapeutic ratio is estimated from the expected gain in tumour control probability (TCP) to the risk of normal tissue complication probability (NTCP). However, estimates of TCP/NTCP are currently based on the deterministic and simplistic linear-quadratic formalism with limited prediction power when applied prospectively. Given the complex and stochastic nature of the physical, chemical and biological interactions associated with spatial and temporal radiation induced effects in living tissues, it is conjectured that methods based on Monte Carlo (MC) analysis may provide better estimates of TCP/NTCP for radiotherapy treatment planning and trial design. Indeed, over the past few decades, methods based on MC have demonstrated superior performance for accurate simulation of radiation transport, tumour growth and particle track structures; however, successful application of modelling radiobiological response and outcomes in radiotherapy is still hampered with several challenges. In this review, we provide an overview of some of the main techniques used in radiobiological modelling for radiotherapy, with focus on the MC role as a promising computational vehicle. We highlight the current challenges, issues and future potentials of the MC approach towards a comprehensive systems-based framework in radiobiological modelling for radiotherapy. PMID:22571871
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
Finding organic vapors - a Monte Carlo approach
NASA Astrophysics Data System (ADS)
Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku
2010-05-01
Aerosols have an important role in regulating the climate, both directly by absorbing and scattering solar radiation and indirectly by acting as cloud condensation nuclei. While it is known that their net effect on radiative forcing is negative, several key aspects remain mysterious. There exist plenty of known primary particle sources of both natural and man-made origin - for example desert dust, volcanic activity and tire debris. On the other hand, it has been shown that the formation of secondary particles, by nucleation from precursor vapors, is a frequent, global phenomenon. However, the very earliest steps in new particle formation - nucleation and early growth by condensation - still carry many big question marks. While several studies have indicated the importance of a sufficient concentration of sulphuric acid vapor for the process, it has also been noted that this is usually not enough. Heads have therefore turned to organic vapors, which in their multitude could explain various observed characteristics of new particle formation. But alas, the vast number of organic compounds, their complex chemistry, and properties that make them difficult to measure have complicated the quantification task. Nevertheless, evidence of an organic contribution in particles of all size classes has been found. In particular, a significant organic constituent in the very finest particles suggests the presence of a high concentration of very low-volatility organic vapors. In this study, new particle formation in the boreal forest environment of Hyytiälä, Finland, is investigated in a process model. Our goal is to quantify the concentration, to find the diurnal profile, and to get hints of the identity of some organic vapors taking part in new particle formation. Previous studies on the subject have relied on data analysis of the growth rate of the observed particles.
However, due to the coarse nature of the methods used to calculate growth rates, this approach has its drawbacks in accuracy, the inability to find diurnal variation and the lack of size resolution. Here, we aim to shed some light onto the problem by applying an ad hoc Monte Carlo algorithm to a well established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. But more importantly, by basing the comparison on particle number size distribution data - a quantity that can be quite reliably measured - the accuracy of the results is good.
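The search strategy described above can be caricatured in a few lines (purely illustrative: the real algorithm wraps the UHMA aerosol model and full size-distribution data, whereas here a toy linear growth-rate model and made-up numbers stand in):

```python
import random

def monte_carlo_fit(model, measured, bounds, iters=5000, seed=3):
    """Ad hoc Monte Carlo parameter search: draw candidate vapor
    concentrations uniformly within bounds and keep whichever candidate
    best matches the measured quantity."""
    rng = random.Random(seed)
    best_c, best_err = None, float("inf")
    for _ in range(iters):
        c = rng.uniform(*bounds)
        err = abs(model(c) - measured)
        if err < best_err:
            best_c, best_err = c, err
    return best_c

def growth_rate(c):
    # Toy stand-in for the aerosol model: observed particle growth rate
    # proportional to the condensing vapor concentration.
    return 2.0e-7 * c

c_hat = monte_carlo_fit(growth_rate, measured=1.4e-3, bounds=(0.0, 1.0e4))
print(f"best-fit concentration = {c_hat:.0f}")  # near 7000
```

Repeating such a fit for each time window of the measured size distribution would yield the diurnal concentration profile the abstract aims for.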
Probability Forecasting Using Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Duncan, M.; Frisbee, J.; Wysack, J.
2014-09-01
Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft. Thus new, more sophisticated collision threat characterization methods must be implemented. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high risk and potential high risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholder's course of action. So constructing a method for estimating how the collision probability will evolve improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as time to the event is reduced. Collision probability forecasting is a predictive process where the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set of possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trial. 
This yields a collision probability distribution given known, predicted uncertainty. This paper presents the details of the collision probability forecasting method. We examine various conjunction event scenarios and numerically demonstrate the utility of this approach in typical event scenarios. We explore the utility of a probability-based track scenario simulation that models expected tracking data frequency as the tasking levels are increased. The resulting orbital uncertainty is subsequently used in the forecasting algorithm.
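A minimal sketch of the Monte Carlo trial loop described above, reduced to a one-dimensional toy geometry with hypothetical Gaussian position uncertainties (the operational method uses full orbital states and covariances):

```python
import random

def collision_probability(miss_mean, sigma_a, sigma_b, threshold,
                          trials=100_000, seed=0):
    """Monte Carlo collision probability in a 1-D toy geometry: perturb
    each object's position by its own uncertainty and count trials whose
    relative miss distance falls below the combined hard-body threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        miss = miss_mean + rng.gauss(0.0, sigma_a) - rng.gauss(0.0, sigma_b)
        if abs(miss) < threshold:
            hits += 1
    return hits / trials

# Hypothetical conjunction: 200 m nominal miss, ~150 m combined position
# uncertainty, 20 m combined hard-body radius.
p = collision_probability(miss_mean=200.0, sigma_a=100.0, sigma_b=110.0,
                          threshold=20.0)
print(f"Pc = {p:.4f}")
```

Rerunning this estimate with the shrinking uncertainties expected from future tracking passes gives the "forecast" of how the risk evolves as the event approaches.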
An Unbiased Hessian Representation for Monte Carlo PDFs
Stefano Carrazza; Stefano Forte; Zahari Kassabov; Jose Ignacio Latorre; Juan Rojo
2015-08-05
We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then, that if applied to Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (CMC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available together with (through LHAPDF6) a Hessian representations of the NNPDF3.0 set, and the CMC-H PDF set.
A new method to assess Monte Carlo convergence
Forster, R.A.; Booth, T.E.; Pederson, S.P.
1993-05-01
The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{−∞}^{+∞} f(x) dx = 1. Since f(x) is seldom known explicitly, Monte Carlo particle random walks sample f(x) implicitly. Unless there is a largest possible history score, the empirical f(x) must eventually decrease more steeply than 1/x³ for the second moment (∫_{−∞}^{+∞} x² f(x) dx) to exist.
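The confidence-interval construction that the first sentences describe can be sketched as follows (a generic illustration with an exponential toy score distribution, not the paper's new PDF-shape diagnostic):

```python
import math
import random

def mc_mean_with_ci(sample, n, rng, z=1.96):
    """Monte Carlo mean with a CLT-based 95% confidence interval; only
    valid when the score distribution has a finite mean and variance."""
    scores = [sample(rng) for _ in range(n)]
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    half = z * math.sqrt(var / n)       # half-width shrinks like 1/sqrt(N)
    return mean, mean - half, mean + half

rng = random.Random(42)
# Toy history-score distribution: exponential scores (true mean 1.0) have a
# finite second moment, so the central limit theorem applies.
mean, lo, hi = mc_mean_with_ci(lambda r: r.expovariate(1.0), 100_000, rng)
print(f"mean = {mean:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

The paper's point is that such an interval can be misleading when the tail of f(x) has not yet been sampled; inspecting the empirical f(x) itself guards against that.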
A modified Monte Carlo model for the ionospheric heating rates.
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.
1973-01-01
A Monte Carlo method is adopted as a basis for the derivation of the photoelectron-heat input into the ionospheric plasma. Since a great number of Monte Carlo runs are required normally for the computation of the heating rates, this approach is modified in an attempt to minimize the computation time. The heat-input distributions are computed for arbitrarily small source elements that are spaced apart at distances corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure their individual heating-rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by as much as an order of magnitude, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
Quantum Monte-Carlo method applied to Non-Markovian barrier transmission
G. Hupin; D. Lacroix
2010-01-05
In nuclear fusion and fission, fluctuation and dissipation arise due to the coupling of collective degrees of freedom with internal excitations. Close to the barrier, quantum, statistical and non-Markovian effects are all expected to be important. In this work, a new approach based on quantum Monte-Carlo addressing this problem is presented. The exact dynamics of a system coupled to an environment is replaced by a set of stochastic evolutions of the system density. The quantum Monte-Carlo method is applied to systems with quadratic potentials. Over the whole range of temperatures and couplings, the stochastic method matches the exact evolution, showing that non-Markovian effects can be simulated accurately. A comparison with other theories, such as the Nakajima-Zwanzig and Time-Convolutionless approaches, shows that only the latter is competitive, and only if the expansion in the coupling constant is carried out at least to fourth order. A systematic study of the inverted parabola case is made at different temperatures and coupling constants. The asymptotic passing probability is estimated in different approaches, including the Markovian limit. Large differences from the exact result are seen in the latter case, or when only second order in the coupling strength is retained, as is generally assumed in nuclear transport models. In contrast, if fourth order in the coupling or the quantum Monte-Carlo method is used, perfect agreement is obtained.
Monte Carlo dose calculations in advanced radiotherapy
NASA Astrophysics Data System (ADS)
Bush, Karl Kenneth
The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations requires a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. Performing such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity.
In a second component of this dissertation the design, implementation and evaluation of a technique for reducing a latent variance inherent from the recycling of phase space particle tracks in a simulation is presented. In the technique a random azimuthal rotation about the beam's central axis is applied to each recycled particle, achieving a significant reduction of the latent variance. In a third component, the dissertation presents the first MC modeling of Varian's new RapidArc delivery system and a comparison of dose calculations with the Eclipse treatment planning system. A total of four arc plans are compared including an oropharynx patient phantom containing tissue inhomogeneities. Finally, in a step toward introducing MC dose calculation into the planning of treatments such as RapidArc, a technique is presented to feasibly generate and store a large set of MC calculated dose distributions. A novel 3-D dyadic multi-resolution (MR) decomposition algorithm is presented and the compressibility of the dose data using this algorithm is investigated. The presented MC beamlet generation method, in conjunction with the presented 3-D data MR decomposition, represents a viable means to introduce MC dose calculation in the planning and optimization stages of advanced radiotherapy.
SPQR: a Monte Carlo reactor kinetics code. [LMFBR
Cramer, S.N.; Dodds, H.L.
1980-02-01
The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from those of deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
Quantum Monte Carlo Simulations of Tunneling in Quantum Adiabatic Optimization
Lucas T. Brady; Wim van Dam
2015-09-17
We explore to what extent path-integral quantum Monte Carlo methods can efficiently simulate the tunneling behavior of quantum adiabatic optimization algorithms. Specifically we look at symmetric cost functions defined over n bits with a single potential barrier that a successful optimization algorithm will have to tunnel through. The height and width of this barrier depend on n, and by tuning these dependencies, we can make the optimization algorithm succeed or fail in polynomial time. In this article we compare the strength of quantum adiabatic tunneling with that of path-integral quantum Monte Carlo methods. We find numerical evidence that quantum Monte Carlo algorithms will succeed in the same regimes where quantum adiabatic optimization succeeds.
The Monte Carlo method in quantum field theory
Colin Morningstar
2007-02-20
This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
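The local Metropolis-Hastings update these lectures describe can be sketched for a free real scalar field on a one-dimensional periodic lattice; the lattice size, mass and proposal width below are illustrative choices, not values from the lectures.

```python
import math, random

random.seed(1)

N, m2 = 32, 1.0          # lattice sites (periodic) and mass^2 -- toy values
phi = [0.0] * N          # field configuration
delta = 1.0              # half-width of the uniform proposal

def local_action(i, value):
    """Terms of S = sum_i [0.5*(phi[i+1]-phi[i])^2 + 0.5*m2*phi[i]^2] touching site i."""
    left, right = phi[(i - 1) % N], phi[(i + 1) % N]
    return 0.5 * ((value - left) ** 2 + (right - value) ** 2) + 0.5 * m2 * value * value

accepted, sweeps = 0, 2000
for sweep in range(sweeps):
    for i in range(N):
        new = phi[i] + random.uniform(-delta, delta)
        dS = local_action(i, new) - local_action(i, phi[i])
        if dS <= 0 or random.random() < math.exp(-dS):   # Metropolis accept/reject
            phi[i] = new
            accepted += 1

print(accepted / (sweeps * N))  # acceptance rate of the local updates
```

Tuning `delta` trades acceptance rate against step size, which is exactly the autocorrelation issue the lectures' microcanonical updating addresses.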
Monte Carlo Methods for Tempo Tracking and Rhythm Quantization
Cemgil, A. T.; doi:10.1613/jair.1121
2011-01-01
We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization), as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulations suggest that the sequential methods perform better. The methods can be applied in both online and batch scenarios such as tempo tracking and transcription.
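A bootstrap particle filter of the kind compared in this work can be sketched on a toy version of the tempo-tracking problem: a slowly drifting beat period observed through noisy inter-onset intervals. All parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T, P = 50, 500
true = 0.5 + 0.002 * np.arange(T)            # slowly increasing beat period (s)
obs = true + rng.normal(0, 0.02, T)          # noisy observed inter-onset intervals

particles = rng.normal(0.5, 0.1, P)          # prior samples of the tempo state
est = []
for y in obs:
    particles += rng.normal(0, 0.005, P)     # diffusion prior: tempo drifts slowly
    w = np.exp(-0.5 * ((y - particles) / 0.02) ** 2)   # Gaussian likelihood
    w /= w.sum()
    est.append(np.sum(w * particles))        # posterior-mean tempo estimate
    idx = rng.choice(P, P, p=w)              # multinomial resampling
    particles = particles[idx]

print(abs(est[-1] - true[-1]))               # final tracking error
```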
Quantum Monte Carlo calculation of entanglement Renyi entropies for generic quantum systems
Stephan Humeniuk; Tommaso Roscilde
2012-03-26
We present a general scheme for the calculation of the Renyi entropy of a subsystem in quantum many-body models that can be efficiently simulated via quantum Monte Carlo. When the simulation is performed at very low temperature, the above approach delivers the entanglement Renyi entropy of the subsystem, and it allows one to explore the crossover to the thermal Renyi entropy as the temperature is increased. We implement this scheme explicitly within the Stochastic Series Expansion as well as within path-integral Monte Carlo, and apply it to quantum spin and quantum rotor models. In the case of quantum spins, we show that relevant models in two dimensions with reduced symmetry (XX model or hardcore bosons, transverse-field Ising model at the quantum critical point) exhibit an area law for the scaling of the entanglement entropy.
Willert, Jeffrey; Park, H.
2014-11-01
In this article we explore the possibility of replacing Standard Monte Carlo (SMC) transport sweeps within a Moment-Based Accelerated Thermal Radiative Transfer (TRT) algorithm with a Residual Monte Carlo (RMC) formulation. Previous Moment-Based Accelerated TRT implementations have encountered trouble when stochastic noise from SMC transport sweeps accumulates over several iterations and pollutes the low-order system. With RMC we hope to significantly lower the build-up of statistical error at a much lower cost. First, we display encouraging results for a zero-dimensional test problem. Then, we demonstrate that we can achieve a lower degree of error in two one-dimensional test problems by employing an RMC transport sweep with multiple orders of magnitude fewer particles per sweep. We find that by reformulating the high-order problem, we can compute more accurate solutions at a fraction of the cost.
Study of nuclear pairing with Configuration-Space Monte-Carlo approach
Mark Lingle; Alexander Volya
2015-03-20
Pairing correlations in nuclei play a decisive role in determining nuclear drip-lines, binding energies, and many collective properties. In this work a new Configuration-Space Monte-Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte-Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, probabilistic interpretation of quantum-mechanical amplitudes, and ability to handle truly large-scale problems with defined precision and error control, are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with non-constant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and problems when the relevant configuration space is large.
A non-Monte Carlo approach to analyzing 1D Anderson localization in dispersive metamaterials
NASA Astrophysics Data System (ADS)
Kissel, Glen J.
2015-09-01
Monte Carlo simulations have long been used to study Anderson localization in models of one-dimensional random stacks. Because such simulations use substantial computational resources and because the randomness of random number generators for such simulations has been called into question, a non-Monte Carlo approach is of interest. This paper uses a non-Monte Carlo methodology, limited to discrete random variables, to determine the Lyapunov exponent, or its reciprocal, known as the localization length, for a one-dimensional random stack model, proposed by Asatryan, et al., consisting of various combinations of negative, imaginary and positive index materials that include the effects of dispersion and absorption, as well as off-axis incidence and polarization effects. Dielectric permittivity and magnetic permeability are the two variables randomized in the models. In the paper, Furstenberg's integral formula is used to calculate the Lyapunov exponent of an infinite product of random matrices modeling the one-dimensional stack. The integral formula requires integration with respect to the probability distribution of the randomized layer parameters, as well as integration with respect to the so-called invariant probability measure of the direction of the vector propagated by the long chain of random matrices. The non-Monte Carlo approach uses a numerical procedure of Froyland and Aihara which calculates the invariant measure as the left eigenvector of a certain sparse row-stochastic matrix, thus avoiding the use of any random number generator. The results show excellent agreement with the Monte Carlo generated simulations which make use of continuous random variables, while frequently providing reductions in computation time.
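The Lyapunov exponent that this paper obtains from Furstenberg's integral formula can also be estimated by the Monte Carlo route it compares against: propagate a vector through a long product of random transfer matrices, renormalizing at each step. The 2x2 matrix below is a toy stand-in for a randomized layer, not the Asatryan et al. model.

```python
import numpy as np

rng = np.random.default_rng(0)

def lyapunov_mc(n_steps=50_000):
    """Monte Carlo estimate of the top Lyapunov exponent of a random matrix product.

    Each layer draws a parameter a from {0.5, 2.0} (an illustrative stand-in for
    a randomized permittivity); renormalizing the propagated vector at every step
    keeps the product numerically stable while accumulating the log growth.
    """
    v = np.array([1.0, 0.0])
    log_growth = 0.0
    for _ in range(n_steps):
        a = rng.choice([0.5, 2.0])
        M = np.array([[a, 1.0], [1.0, 0.0]])   # toy transfer matrix
        v = M @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm
    return log_growth / n_steps

lam = lyapunov_mc()
print(lam)   # positive => exponential growth; localization length = 1/lam
```

The non-Monte Carlo approach in the paper replaces exactly this sampling with integration against the invariant measure of the propagated direction.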
Spin-Orbit Interactions in Electronic Structure Quantum Monte Carlo
Melton, Cody; Guo, Shi; Ambrosetti, Alberto; Pederiva, Francesco; Mitas, Lubos
2015-01-01
We develop a generalization of the fixed-phase diffusion Monte Carlo method for Hamiltonians that depend explicitly on particle spins, such as spin-orbit interactions. The method is formulated in a zero-variance manner and is similar to the treatment of nonlocal operators in commonly used static-spin calculations. Tests on atomic and molecular systems show that it is very accurate, on par with the fixed-node method. This opens electronic structure quantum Monte Carlo methods to a vast research area of quantum phenomena in which spin-related interactions play an important role.
Monte-Carlo Reaction Rate Evaluation for Astrophysics
Coc, A.; Fitzgerald, R.
2010-06-01
We present a new evaluation of thermonuclear reaction rates for astrophysics involving proton and alpha-particle induced reactions, in the target mass range between A = 14 and 40, including many radioactive targets. A method based on Monte Carlo techniques is used to evaluate thermonuclear reaction rates and their uncertainties. At variance with previous evaluations, the low, median and high rates are statistically defined and a lognormal approximation to the rate distribution is given. This provides improved input for astrophysical model calculations using also the Monte Carlo method to estimate uncertainties on isotopic abundances.
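The lognormal summary the evaluation reports can be illustrated with a toy rate model (hypothetical factors and uncertainties, not the evaluated reaction data): a product of lognormally distributed inputs is itself lognormal, so statistically defined low, median and high rates drop out as sample percentiles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rate = product of two lognormal input factors (values are illustrative).
S_factor = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=100_000)
width    = rng.lognormal(mean=np.log(1.5), sigma=0.2, size=100_000)
rate = S_factor * width

# Low / median / high rates as the 16th / 50th / 84th percentiles of the samples.
low, median, high = np.percentile(rate, [15.87, 50.0, 84.13])

# Lognormal approximation to the sampled rate distribution: fit mu, sigma in log space.
mu, sigma = np.mean(np.log(rate)), np.std(np.log(rate))
print(median, np.exp(mu))   # for a lognormal these two nearly coincide
```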
Bold Diagrammatic Monte Carlo for Fermionic and Fermionized Systems
NASA Astrophysics Data System (ADS)
Svistunov, Boris
2013-03-01
In three different fermionic cases--repulsive Hubbard model, resonant fermions, and fermionized spins-1/2 (on triangular lattice)--we observe the phenomenon of sign blessing: Feynman diagrammatic series features finite convergence radius despite factorial growth of the number of diagrams with diagram order. Bold diagrammatic Monte Carlo technique allows us to sample millions of skeleton Feynman diagrams. With the universal fermionization trick we can fermionize essentially any (bosonic, spin, mixed, etc.) lattice system. The combination of fermionization and Bold diagrammatic Monte Carlo yields a universal first-principle approach to strongly correlated lattice systems, provided the sign blessing is a generic fermionic phenomenon. Supported by NSF and DARPA
Event-driven Monte Carlo algorithm for general potentials
Etienne P. Bernard; Werner Krauth
2011-11-29
We extend the event-chain Monte Carlo algorithm from hard-sphere interactions to the micro-canonical ensemble (constant potential energy) for general potentials. This event-driven Monte Carlo algorithm is non-local, rejection-free, and allows for the breaking of detailed balance. The algorithm uses a discretized potential, but its running speed is asymptotically independent of the discretization. We implement the algorithm for the cut-off linear potential, and discuss its possible implementation directly in the continuum limit.
A review of best practices for Monte Carlo criticality calculations
Brown, Forrest B
2009-01-01
Monte Carlo methods have been used to compute k_eff and the fundamental mode eigenfunction of critical systems since the 1950s. While such calculations have become routine using standard codes such as MCNP and SCALE/KENO, three concerns must still be addressed to perform calculations correctly: convergence of k_eff and the fission distribution, bias in k_eff and tally results, and bias in statistics on tally results. This paper provides a review of the fundamental problems inherent in Monte Carlo criticality calculations. To provide guidance to practitioners, suggested best practices for avoiding these problems are discussed and illustrated by examples.
ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods
Ibrahim, A.; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Sawan, M.; Wilson, P.; Wagner, John C; Heltemes, Thad
2011-01-01
The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~ 275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~ 9 compared with the traditional methods of variance reduction. By providing a factor of ~ 21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.
Error estimations and their biases in Monte Carlo eigenvalue calculations
Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki
1997-01-01
In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of multiplication factors from cycles, which are called the cycle k_eff's. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where real refers to a true value that is calculated from independently repeated Monte Carlo runs and apparent refers to the expected value of estimates from a single Monte Carlo run. Next, iterative methods based on the foregoing relations are proposed to estimate the standard deviation of the eigenvalue. The methods work well for the cases in which the ratios of the real to apparent values of variances are between 1.4 and 3.1. Even in the case where the foregoing ratio is >5, >70% of the standard deviation estimates fall within 40% of the true value.
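The gap between real and apparent variance can be reproduced in a few lines: make the cycle estimates positively correlated (here an AR(1) sequence stands in for intercycle correlation, with illustrative parameters) and compare the naive single-run variance of the cycle mean against the spread across independent runs.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_run(cycles=200, rho=0.8):
    """One 'Monte Carlo run': AR(1)-correlated cycle k_eff estimates around 1.0."""
    z = np.empty(cycles)
    z[0] = rng.normal()
    for c in range(1, cycles):
        z[c] = rho * z[c - 1] + np.sqrt(1.0 - rho**2) * rng.normal()
    return 1.0 + 0.001 * z

means, apparent = [], []
for _ in range(500):
    k = one_run()
    means.append(k.mean())
    apparent.append(k.var(ddof=1) / k.size)   # naive (apparent) variance of the mean

real = np.var(means, ddof=1)                  # real variance: spread across runs
print(real / np.mean(apparent))               # > 1: correlation biases the naive estimate low
```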
A Force-Balance Monte Carlo Simulation of the Surface Tension of a Hard-Sphere Fluid
Attard, Phil
Attard and Moule, Mol. Phys. 78, 943-959 (1993). A force-balance simulation method is introduced. Equilibrium corresponds to the balance of internal and external forces, which allows mechanical ...
Monte Carlo event generators for hadron-hadron collisions
Knowles, I.G.; Protopopescu, S.D.
1993-06-01
A brief review of Monte Carlo event generators for simulating hadron-hadron collisions is presented. Particular emphasis is placed on comparisons of the approaches used to describe physics elements and identifying their relative merits and weaknesses. This review summarizes a more detailed report.
Monte Carlo Capabilities of the SCALE Code System
NASA Astrophysics Data System (ADS)
Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.
2014-06-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
Multi-microcomputer system for Monte-Carlo calculations
Berg, B; Krasemann, H
1981-01-01
The authors propose a microcomputer system that allows parallel processing for Monte Carlo calculations in lattice gauge theories, simulations of high energy physics experiments and many other fields of current interest. The master-n-slave multiprocessor system is based on the Motorola MC 68000 microprocessor. One attraction of this processor is that it can address up to 16 MByte of random access memory.
MONTE CARLO SIMULATION OF ELECTRON DYNAMICS IN QUANTUM CASCADE LASERS
Knezevic, Irena
A dissertation by Xujiao Gao covering state-of-the-art quantum cascade lasers, theoretical approaches for QCLs, and X-valley leakage in GaAs/AlGaAs quantum cascade lasers.
Monte-Carlo Tree Search (MCTS) for Computer Go
Bouzy, Bruno
Lecture slides by Bruno Bouzy (bruno.bouzy@parisdescartes.fr), Université Paris Descartes, AOA class, on Monte-Carlo Tree Search (MCTS) for Computer Go. The outline covers the game of Go (played here as a 9x9 game), a game that originated some 4000 years ago, and the « old » and MCTS approaches to it.
Criticality: a Monte-Carlo Heuristic for Go Remi Coulom
Coulom, Rémi - Groupe de Recherche sur l'Apprentissage Automatique, Université Charles de Gaulle
Criticality: a Monte-Carlo Heuristic for Go Programs R´emi Coulom Universit´e Charles de Gaulle for Go Programs 2 / 9 #12;Introduction Criticality Heuristic Experiments with Crazy Stone Conclusion´emi Coulom Criticality: a MC Heuristic for Go Programs 3 / 9 #12;Introduction Criticality Heuristic
Monte Carlo simulation of the shape space model of immunology
NASA Astrophysics Data System (ADS)
Dasgupta, Subinay
1992-11-01
The shape space model of de Boer, Segel and Perelson for the immune system is studied with a probabilistic updating rule by Monte Carlo simulation. A suitable mathematical form is chosen for the probability of increase of B-cell concentration depending on the concentration around the mirror image site. The results obtained agree reasonably with the results obtained by deterministic cellular automata.
Observations on variational and projector Monte Carlo methods
NASA Astrophysics Data System (ADS)
Umrigar, C. J.
2015-10-01
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
A new method to assess Monte Carlo convergence
Forster, R.A.; Booth, T.E.; Pederson, S.P.
1993-01-01
The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{−∞}^{+∞} f(x) dx = 1.
Parallel Monte Carlo simulation of multilattice thin film growth
NASA Astrophysics Data System (ADS)
Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen
2001-07-01
This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message-passing library. The algorithm is based on domain decomposition with overlapping sub-domains and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, so the computational demands are comparable to those of the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient for large numbers of processors. It was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth on either a distributed-memory parallel computer or a shared-memory machine with message-passing libraries. The communication overhead grows only slowly as processors are added, computing speed increases nearly linearly with the number of processors, and there is no theoretical limit on the number of processors that can be used. The techniques developed in this work are also suitable for implementing the Monte Carlo code on other parallel systems.
Don't Trust Parallel Monte Carlo!
Li, Yaohang
P. Hellekalek, Dept. of Mathematics, University of Salzburg, A-5020. The generation of random numbers is one of the fundamental tasks of numerical practice. ... a finite sequence of random numbers as long as we want. If we want to do parallelization, we just take ...
Exploring Mass Perception with Markov Chain Monte Carlo
ERIC Educational Resources Information Center
Cohen, Andrew L.; Ross, Michael G.
2009-01-01
Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…
Monte Carlo Simulations of Light Propagation in Apples
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper reports on the investigation of light propagation in fresh apples in the visible and short-wave near-infrared region using Monte Carlo simulations. Optical properties of ‘Golden Delicious’ apples were determined over the spectral range of 500-1100 nm using a hyperspectral imaging method, ...
Self-Regenerative Markov Chain Monte Carlo with Adaptation
Sahu, Sujit K
Sujit K. Sahu, Faculty of Mathematical ... We propose and study a new MCMC algorithm which we call self-regenerative (SR). This algorithm belongs ... This regenerative property gives the name to the algorithm and permits a simple analysis of it. ...
A Variational Monte Carlo Approach to Atomic Structure
ERIC Educational Resources Information Center
Davis, Stephen L.
2007-01-01
The practicality and usefulness of variational Monte Carlo calculations of atomic structure are demonstrated. The method is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, the l-dependence of the orbital energies, singlet-triplet energy splitting, and ionization energy trends in atomic structure theory.
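As an illustrative aside (the code below is a generic sketch of mine, not from the article), the variational Monte Carlo procedure amounts to sampling |ψ|² with a Metropolis walk and averaging the local energy. For the hydrogen atom with trial wavefunction exp(-αr), the local energy is -α²/2 + (α-1)/r in atomic units, so at α = 1 the walk recovers the exact ground-state energy of -1/2 hartree with zero variance:

```python
import math
import random

def local_energy(r, alpha):
    # local energy of exp(-alpha*r) for the hydrogen Hamiltonian (atomic units)
    return -0.5 * alpha ** 2 + (alpha - 1.0) / r

def vmc_energy(alpha, n_steps=20000, step=0.5, seed=1):
    """Metropolis sampling of |psi|^2 = exp(-2*alpha*r); returns <E_L>."""
    rng = random.Random(seed)
    pos = [0.5, 0.5, 0.5]
    energies = []
    for i in range(n_steps):
        trial = [x + step * (rng.random() - 0.5) for x in pos]
        r_old = math.sqrt(sum(x * x for x in pos))
        r_new = math.sqrt(sum(x * x for x in trial))
        # accept with probability |psi(trial)/psi(pos)|^2
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r_old)):
            pos = trial
        if i > 1000:  # discard burn-in
            energies.append(local_energy(math.sqrt(sum(x * x for x in pos)), alpha))
    return sum(energies) / len(energies)
```

Scanning α and minimizing the estimated energy is the "variational" part; a non-optimal α such as 0.8 yields an energy above -1/2.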
Imaginary time correlations within Quantum Monte Carlo methods Markus Holzmann
van Tiggelen, Bart
Imaginary time correlations within Quantum Monte Carlo methods. Markus Holzmann, LPMMC, CNRS-UJF ... integral calculations also give access to imaginary time correlations which contain important information about the real time evolution of the quantum system in the linear response regime, experimentally ...
Monte Carlo sampling from the quantum state space. II
Yi-Lin Seah; Jiangwei Shang; Hui Khoon Ng; David John Nott; Berthold-Georg Englert
2015-04-27
High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the Markov-chain Monte Carlo method known as Hamiltonian Monte Carlo, or hybrid Monte Carlo, can be adapted to this context. It is applicable when an efficient parameterization of the state space is available. The resulting random walk is entirely inside the physical parameter space, and the Hamiltonian dynamics enable us to take big steps, thereby avoiding strong correlations between successive sample points while enjoying a high acceptance rate. We use examples of single and double qubit measurements for illustration.
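For readers new to the method, the essence of Hamiltonian (hybrid) Monte Carlo can be shown on a toy target; the sketch below is my own, with a standard normal target rather than the quantum state space of the paper. A momentum is resampled, Hamiltonian dynamics are integrated with a leapfrog scheme, and a Metropolis test corrects the discretization error:

```python
import math
import random

def hmc_sample(grad_logp, logp, x0, n_samples=2000, eps=0.2, L=10, seed=2):
    """Hamiltonian Monte Carlo for a 1D target with log-density logp."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                  # resample momentum
        x_new, p_new = x, p
        p_new += 0.5 * eps * grad_logp(x_new)    # leapfrog: half momentum step
        for _ in range(L - 1):
            x_new += eps * p_new
            p_new += eps * grad_logp(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_logp(x_new)    # final half momentum step
        # Metropolis correction for the leapfrog discretization error
        h_old = -logp(x) + 0.5 * p * p
        h_new = -logp(x_new) + 0.5 * p_new * p_new
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x)
    return samples

# toy target: standard normal, logp(x) = -x^2/2, grad_logp(x) = -x
samples = hmc_sample(lambda x: -x, lambda x: -0.5 * x * x, 0.0)
```

The long leapfrog trajectories are the "big steps" mentioned in the abstract: successive samples are weakly correlated while the Metropolis test keeps the acceptance rate high.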
Markov Chain Monte Carlo (MCMC) By Steven F. Arnold
Babu, G. Jogesh
Markov Chain Monte Carlo (MCMC). By Steven F. Arnold, Professor of Statistics, Penn State University ... chain are called the states of the Markov chain. A stationary distribution for a Markov chain is a distribution over the states such that if we start the Markov chain in that distribution, we stay in it. A limiting distribution ...
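The stationary-distribution notion in that excerpt can be made concrete in a few lines (a sketch of mine, not part of the notes): multiply a distribution repeatedly by a row-stochastic transition matrix until it stops changing; the fixed point π satisfies πP = π.

```python
def stationary(P, tol=1e-12, max_iter=10000):
    """Power iteration for the stationary distribution of a row-stochastic P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

# two-state chain: the stationary distribution works out to [5/6, 1/6]
P = [[0.9, 0.1], [0.5, 0.5]]
pi = stationary(P)
```

Starting the chain in π and taking one step returns π, which is exactly the "we stay in it" property from the notes.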
A Monte Carlo Approach for Football Play Generation Kennard Laviers
Sukthankar, Gita Reese
A Monte Carlo Approach for Football Play Generation. Kennard Laviers, School of EECS, U. of Central ... adversarial games and demonstrate its utility at generating American football plays for Rush Football 2008. In football, like in many other multi-agent games, the actions of all of the agents are not equally crucial ...
An Improved Monte Carlo Algorithm for Elastic Electron Backscattering
Dimov, Ivan
An Improved Monte Carlo Algorithm for Elastic Electron Backscattering from Surfaces. Ivan T. Dimov ... the backscattering of electrons from metal targets is the subject of extensive theoretical and experimental work in sur... "for elastic electron backscattering from surfaces" (1999), is based upon direct simulation of the physical ...
MONTE CARLO EXPLORATIONS OF POLYGONAL KNOT SPACES KENNETH C. MILLETT
California at Santa Barbara, University of
MONTE CARLO EXPLORATIONS OF POLYGONAL KNOT SPACES. KENNETH C. MILLETT, Department of Mathematics ... Polygonal knots are embeddings of polygons in three space. For each n, the collection of embedded n-gons determines a subset of Euclidean space whose structure is the subject of this paper. Which knots can ...
Microbial contamination in poultry chillers estimated by Monte Carlo simulations
Technology Transfer Automated Retrieval System (TEKTRAN)
The risk of microbial contamination during poultry processing may be reduced by the operating characteristics of the chiller. The performance of air chillers and immersion chillers were compared in terms of pre-chill and post-chill contamination using Monte Carlo simulations. Three parameters were u...
Pluto: A Monte Carlo Simulation Tool for Hadronic Physics
I. Froehlich; L. Cazon Boado; T. Galatyuk; V. Hejny; R. Holzmann; M. Kagarlis; W. Kuehn; J. G. Messchendorp; V. Metag; M. -A. Pleier; W. Przygoda; B. Ramstein; J. Ritman; P. Salabura; J. Stroth; M. Sudol
2007-11-06
Pluto is a Monte-Carlo event generator designed for hadronic interactions from Pion production threshold to intermediate energies of a few GeV per nucleon, as well as for studies of heavy ion reactions. This report gives an overview of the design of the package, the included models and the user interface.
Improved geometry representations for Monte Carlo radiation transport.
Martin, Matthew Ryan
2004-08-01
ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.
A Review of Monte Carlo Tests of Cluster Analysis.
ERIC Educational Resources Information Center
Milligan, Glenn W.
1981-01-01
Monte Carlo validation studies of clustering algorithms, including Ward's minimum variance hierarchical method, are reviewed. Caution concerning the uncritical selection of Ward's method for recovering cluster structure is advised. Alternative explanations for differential recovery performance are explored and recommendations are made for future…
Probability Distributions of School Enrollment Predictions Using Monte Carlo Simulation
ERIC Educational Resources Information Center
Denham, Carolyn H.
1973-01-01
A major problem in most predictions of school enrollment is the forecaster's failure to express adequately his certainty or uncertainty in his estimates. Describes a method whereby a forecaster can prepare probability distributions of enrollment predictions. The Monte Carlo computer simulation calculates enrollments by the multivariable method,…
Optimally Combining Sampling Techniques for Monte Carlo Rendering
Kazhdan, Michael
Computer Science Department, Stanford University. Abstract: Monte Carlo integration is a powerful technique ... often need more than one sampling technique to estimate an integral with low variance. Normally this is accomplished by combining samples from several techniques. We do not construct new sampling methods--all the samples we use come from one ...
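The combination strategy described in this record is, in its standard form, the balance heuristic of multiple importance sampling. In a toy sketch of mine (integrand and pdfs chosen purely for illustration), with weights w_i = n_i p_i / Σ_j n_j p_j the per-sample contribution w_i f/(n_i p_i) collapses to f/Σ_j n_j p_j, which keeps the variance low wherever at least one technique samples well:

```python
import math
import random

def balance_heuristic_estimate(f, pdfs, samplers, counts, seed=3):
    """Multiple importance sampling with balance-heuristic weights."""
    rng = random.Random(seed)
    total = 0.0
    for sampler, n_i in zip(samplers, counts):
        for _ in range(n_i):
            x = sampler(rng)
            # w_i * f / (n_i * p_i) simplifies to f / sum_j n_j p_j(x)
            total += f(x) / sum(n_j * p(x) for n_j, p in zip(counts, pdfs))
    return total

# integrate f(x) = 3x^2 over [0, 1] (exact value 1) with two techniques:
est = balance_heuristic_estimate(
    lambda x: 3.0 * x * x,                  # integrand
    [lambda x: 1.0, lambda x: 2.0 * x],     # pdfs of the two techniques
    [lambda r: r.random(),                  # uniform sampling
     lambda r: math.sqrt(r.random())],      # linear pdf via inverse CDF
    [5000, 5000],
)
```

The estimator is unbiased for any weights that sum to one at every x; the balance heuristic is the particular choice with provably near-optimal variance.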
Markov Chain Monte Carlo Algorithms: Theory and Practice
Rosenthal, Jeffrey S.
... research papers, and the phrase "Markov chain Monte Carlo" elicits over three hundred thousand hits ... knowledge of the theory of Markov chains or probability, and there has been some divorce between ... and continues to have, important implications for the practical use of MCMC. In this paper, we ...
ENVIRONMENTAL MODELING: APPLICATIONS: MONTE CARLO SENSITIVITY SIMULATIONS
Dimov, Ivan
APPLICATIONS: MONTE CARLO SENSITIVITY SIMULATIONS TO THE PROBLEM OF AIR POLLUTION TRANSPORT. 1.1 The Danish Eulerian Model ... of pollutants in a real-life scenario of air-pollution transport over Europe. First, the developed technique ...
The Use of Monte Carlo Techniques to Teach Probability.
ERIC Educational Resources Information Center
Newell, G. J.; MacFarlane, J. D.
1985-01-01
Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…
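In the same classroom spirit (with dice substituted for cricket; the example is mine, not from the article), a Monte Carlo probability estimate needs nothing beyond the language's random number generator. This sketch estimates the chance of rolling at least one six in four throws, whose exact value is 1 - (5/6)^4 ≈ 0.518:

```python
import random

def prob_at_least_one_six(n_trials=200000, seed=4):
    """Estimate P(at least one six in four die rolls) by simulation."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_trials)
        if any(rng.randint(1, 6) == 6 for _ in range(4))
    )
    return hits / n_trials

estimate = prob_at_least_one_six()
```

Comparing the simulated frequency against the closed-form answer is precisely the pedagogical move the article describes: students see the law of large numbers at work before deriving the formula.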
Systemic risk in banking networks without Monte Carlo simulation
Hurd, Thomas R.
Systemic risk in banking networks without Monte Carlo simulation. James P. Gleeson, T. R. Hurd. An analytical approach to calculating the expected size of contagion events in models of banking networks ... to be initiated by the default of one or more banks, and includes liquidity risk effects. Theoretical results ...
Applications of the Monte Carlo radiation transport toolkit at LLNL
NASA Astrophysics Data System (ADS)
Sale, Kenneth E.; Bergstrom, Paul M., Jr.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine
1999-09-01
Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful more problems can be accurately modeled. Second, as computing power becomes cheaper Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.
A Monte Carlo Approach for Adaptive Testing with Content Constraints
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander
2008-01-01
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Titrating Polyelectrolytes --Variational Calculations and Monte Carlo Simulations
Söderberg, Bo
LU TP 951, May 1995. Titrating Polyelectrolytes -- Variational Calculations and Monte Carlo ... properties of a titrating polyelectrolyte in a discrete representation. In the variational treatment ... i.e. titratable groups in a polymer will exchange protons with the solution and the polymer net charge will vary.
Thermal Properties of Supercritical Carbon Dioxide by Monte Carlo Simulations
Lisal, Martin
Thermal Properties of Supercritical Carbon Dioxide by Monte Carlo Simulations. C.M. Colina ... and speed of sound for carbon dioxide (CO2) in the supercritical region, using the fluctuation method ... Keywords: Fluctuations; Carbon dioxide; 2CLJQ; Joule-Thomson coefficient; Speed of sound. INTRODUCTION: Simulation methods ...
MCMs: Early History and The Basics Monte Carlo Methods
Mascagni, Michael
MCMs: Early History and The Basics. Monte Carlo Methods: Early History and The Basics. Prof. Michael Mascagni, Department of Computer Science, Department of Mathematics, Department of Scientific Computing: http://www.cs.fsu.edu/mascagni ... Outline of the Talk: Early History ...
Difficulties in vector-parallel processing of Monte Carlo codes
Higuchi, Kenji; Asai, Kiyoshi; Hasegawa, Yukihiro
1997-09-01
Experiences with vectorization of production-level Monte Carlo codes such as KENO-IV, MCNP, VIM, and MORSE have shown that it is difficult to attain high speedup ratios on vector processors because of indirect addressing, nests of conditional branches, short vector length, cache misses, and operations for realization of robustness and generality. A previous work has already shown that the first, second, and third difficulties can be resolved by using special computer hardware for vector processing of Monte Carlo codes. Here, the fourth and fifth difficulties are discussed in detail using the results for a vectorized version of the MORSE code. As for the fourth difficulty, it is shown that the cache miss-hit ratio affects execution times of the vectorized Monte Carlo codes and the ratio strongly depends on the number of the particles simultaneously tracked. As for the fifth difficulty, it is shown that remarkable speedup ratios are obtained by removing operations that are not essential to the specific problem being solved. These experiences have shown that if a production-level Monte Carlo code system had a capability to selectively construct source coding that complements the input data, then the resulting code could achieve much higher performance.
Monte Carlo simulation by computer for life-cycle costing
NASA Technical Reports Server (NTRS)
Gralow, F. H.; Larson, W. J.
1969-01-01
Prediction of behavior and support requirements during the entire life cycle of a system enables accurate cost estimates by using the Monte Carlo simulation by computer. The system reduces the ultimate cost to the procuring agency because it takes into consideration the costs of initial procurement, operation, and maintenance.
Heavy-tailed random error in quantum Monte Carlo
J. R. Trail
2009-09-30
The combination of continuum Many-Body Quantum physics and Monte Carlo methods provide a powerful and well established approach to first principles calculations for large systems. Replacing the exact solution of the problem with a statistical estimate requires a measure of the random error in the estimate for it to be useful. Such a measure of confidence is usually provided by assuming the Central Limit Theorem to hold true. In what follows it is demonstrated that, for the most popular implementation of the Variational Monte Carlo method, the Central Limit Theorem has limited validity, or is invalid and must be replaced by a Generalised Central Limit Theorem. Estimates of the total energy and the variance of the local energy are examined in detail, and shown to exhibit uncontrolled statistical errors through an explicit derivation of the distribution of the random error. Several examples are given of estimated quantities for which the Central Limit Theorem is not valid. The approach used is generally applicable to characterising the random error of estimates, and to Quantum Monte Carlo methods beyond Variational Monte Carlo.
On the Gap-Tooth direct simulation Monte Carlo method
Armour, Jessica D
2012-01-01
This thesis develops and evaluates Gap-tooth DSMC (GT-DSMC), a direct Monte Carlo simulation procedure for dilute gases combined with the Gap-tooth method of Gear, Li, and Kevrekidis. The latter was proposed as a means of ...
Tempo Tracking and Rhythm Quantization by Sequential Monte Carlo
Cemgil, A. Taylan
Tempo Tracking and Rhythm Quantization by Sequential Monte Carlo. Ali Taylan Cemgil and Bert Kappen ... known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization) ... variables is integrated out. The resulting model is suitable for realtime tempo tracking and transcription ...
Romano, Paul K. (Paul Kollath)
2013-01-01
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there ...
Guan, Fada 1982-
2012-04-27
Monte Carlo method has been successfully applied in simulating the particles transport problems. Most of the Monte Carlo simulation tools are static and they can only be used to perform the static simulations for the ...
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output.
Hunter, William C J; Barrett, Harrison H; Lewellen, Thomas K; Miyaoka, Robert S; Muzi, John P; Li, Xiaoli; McDougald, Wendy; Macdonald, Lawrence R
2010-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
Si(100)2×1 Epitaxy: A Kinetic Monte Carlo Simulation of the Surface Growth
NASA Astrophysics Data System (ADS)
Günther, Vivien; Mauß, Fabian
We have carried out Kinetic Monte Carlo (KMC) simulations on the epitaxial growth of silicon (100)2 × 1 as a function of surface temperature (570-770 °C). The KMC algorithm including almost 130 reactions such as silane adsorption, SiHx decomposition and diffusion of adsorbed species supplies an exhaustive stochastic model reproducing the surface growth of silicon (100)2 × 1 during silane gas phase epitaxy. The model provides a good representation of experimental observations and theoretical knowledge. Model predictions of hydrogen coverage are in good agreement with experimental data.
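The bookkeeping behind such KMC simulations is the residence-time (BKL/Gillespie) algorithm: draw an exponential waiting time from the total rate, execute one event, repeat. A single-site adsorption/desorption caricature (rates and names are mine, far simpler than the 130-reaction silane model above) has known mean coverage k_ads/(k_ads + k_des), which makes it a convenient correctness check:

```python
import math
import random

def kmc_coverage(k_ads=2.0, k_des=1.0, t_end=20000.0, seed=10):
    """Residence-time KMC for one lattice site; returns time-averaged coverage,
    which should approach k_ads / (k_ads + k_des) = 2/3 for these rates."""
    rng = random.Random(seed)
    t, occupied, occupied_time = 0.0, False, 0.0
    while t < t_end:
        rate = k_des if occupied else k_ads          # only one event is possible
        dt = -math.log(1.0 - rng.random()) / rate    # exponential waiting time
        if occupied:
            occupied_time += min(dt, t_end - t)
        t += dt
        occupied = not occupied                      # execute the event
    return occupied_time / t_end

coverage = kmc_coverage()
```

With many sites and many event types, the same loop survives; only the event selection step grows into a rate-weighted catalog lookup.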
Monte Carlo Tests of SLE Predictions for the 2D Self-Avoiding Walk
Tom Kennedy
2001-12-21
The conjecture that the scaling limit of the two-dimensional self-avoiding walk (SAW) in a half plane is given by the stochastic Loewner evolution (SLE) with $\\kappa=8/3$ leads to explicit predictions about the SAW. A remarkable feature of these predictions is that they yield not just critical exponents, but probability distributions for certain random variables associated with the self-avoiding walk. We test two of these predictions with Monte Carlo simulations and find excellent agreement, thus providing numerical support to the conjecture that the scaling limit of the SAW is SLE$_{8/3}$.
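For context, the crudest way to draw the SAW samples behind such tests is rejection: grow a walk step by step and discard it at the first self-intersection (serviceable for short walks only; production studies use pivot-type moves). This sketch of mine estimates the mean-square end-to-end distance of a 10-step SAW, which exceeds the simple-random-walk value of 10 because self-avoidance swells the walk:

```python
import random

def sample_saw(n, rng):
    """Grow an n-step walk on Z^2; return None if it intersects itself."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    walk = [(0, 0)]
    visited = {(0, 0)}
    for _ in range(n):
        dx, dy = rng.choice(steps)
        nxt = (walk[-1][0] + dx, walk[-1][1] + dy)
        if nxt in visited:
            return None          # reject: not self-avoiding
        walk.append(nxt)
        visited.add(nxt)
    return walk

def mean_square_end_to_end(n, attempts=50000, seed=5):
    rng = random.Random(seed)
    total, kept = 0.0, 0
    for _ in range(attempts):
        w = sample_saw(n, rng)
        if w is not None:
            x, y = w[-1]
            total += x * x + y * y
            kept += 1
    return total / kept

r2 = mean_square_end_to_end(10)
```

The acceptance rate decays exponentially in n, which is exactly why serious tests of the SLE predictions rely on pivot-type algorithms rather than rejection.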
Global Monte Carlo Simulation with High Order Polynomial Expansions
William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
2007-12-13
The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence.
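The core FET mechanics can be sketched outside any transport code (the sampled density below is a toy choice of mine): tally the sample means a_k = E[P_k(x)] of Legendre polynomials during the random walk, then reconstruct the density as Σ_k (2k+1)/2 · a_k · P_k(x) on [-1, 1], with a_0 reducing to the conventional normalization:

```python
import math
import random

def legendre(k, x):
    """Evaluate P_k(x) via Bonnet's recursion."""
    p0, p1 = 1.0, x
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def fet_coefficients(samples, order):
    """a_k = E[P_k(x)]; density estimate is sum_k (2k+1)/2 * a_k * P_k(x)."""
    n = len(samples)
    return [sum(legendre(k, x) for x in samples) / n for k in range(order + 1)]

# draw from p(x) = (1+x)/2 on [-1, 1] via inverse CDF: x = 2*sqrt(u) - 1
rng = random.Random(6)
samples = [2.0 * math.sqrt(rng.random()) - 1.0 for _ in range(50000)]
a = fet_coefficients(samples, 3)
```

For this density the exact coefficients are a_0 = 1, a_1 = 1/3, and a_k = 0 for k ≥ 2, so the truncated expansion reproduces (1+x)/2 exactly; in a transport code the same tallies would be accumulated along particle histories.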
Lattice Kinetic Monte Carlo modeling of germanium solid phase epitaxial growth
Florida, University of
Lattice Kinetic Monte Carlo modeling of germanium solid phase epitaxial growth ... Keywords: solid phase epitaxial growth, semiconductors, Lattice Kinetic Monte Carlo, germanium. Wiley-VCH, physica status solidi.
Monte Carlo ray tracing in optical canopy reflectance modelling. M. I. Disney
Jones, Peter JS
Monte Carlo ray tracing in optical canopy reflectance modelling. M. I. Disney, P. Lewis, P. R ... modelling. Their utility, and, more specifically, Monte Carlo ray tracing for the numerical simulation ... by fundamental properties such as radiation interception are discussed. Keywords: Monte Carlo ray tracing, canopy ...
A Markov-Chain Monte Carlo Approach to Simultaneous Localization and Mapping
Szepesvari, Csaba
A Markov-Chain Monte Carlo Approach to Simultaneous Localization and Mapping. Péter Torma, Andr... Dept. of Computing Sciences, University of Alberta, Canada. Abstract: A Markov-chain Monte Carlo based algorithm ... a solution to this problem based on a Markov-chain Monte Carlo (see, e.g., Andrieu et al., 2003) ...
Population Monte Carlo algorithms Yukito Iba The Institute of Statistical Mathematics
Iba, Yukito
Population Monte Carlo algorithms. Yukito Iba, The Institute of Statistical Mathematics ... Summary: We give a cross-disciplinary survey on "population" Monte Carlo algorithms. These algorithms, which are developed in various fields ...
NASA Astrophysics Data System (ADS)
Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P.; de Pablo, Juan J.
2015-07-01
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
Armas-Pérez, Julio C; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P; de Pablo, Juan J
2015-07-28
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate. PMID:26233107
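The claim that stochastic sampling finds low free energy states from random or badly chosen initial configurations can be illustrated on a one-dimensional caricature (a tilted double well; the annealing schedule and all names are my own, vastly simpler than a Landau-de Gennes functional): Metropolis acceptance of uphill moves lets the walk escape the metastable well near x = +1 and settle near the global minimum at x ≈ -1:

```python
import math
import random

def mc_minimize(f, x0, n_steps=20000, step=0.3, t0=1.0, seed=7):
    """Metropolis walk with a linearly decreasing temperature; uphill moves
    are accepted with probability exp(-df/T), so metastable minima can be
    escaped early on while the walk settles as T -> 0."""
    rng = random.Random(seed)
    x, best = x0, x0
    for i in range(n_steps):
        temp = t0 * (1.0 - i / n_steps) + 1e-3
        trial = x + step * (rng.random() - 0.5)
        df = f(trial) - f(x)
        if df < 0.0 or rng.random() < math.exp(-df / temp):
            x = trial
        if f(x) < f(best):
            best = x
    return best

def double_well(x):
    # global minimum near x = -1 (f about -0.3); metastable minimum near x = +1
    return (x * x - 1.0) ** 2 + 0.3 * x

x_min = mc_minimize(double_well, x0=1.0)   # start in the metastable well
```

A gradient-based minimizer started at x = 1 would stay in the metastable well; the thermal moves are what buy the global search the abstract describes.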
A Monte Carlo EM algorithm for de novo motif discovery in biomolecular sequences.
Bi, Chengpeng
2009-01-01
Motif discovery methods play pivotal roles in deciphering the genetic regulatory codes (i.e., motifs) in genomes as well as in locating conserved domains in protein sequences. The Expectation Maximization (EM) algorithm is one of the most popular methods used in de novo motif discovery. Based on the position weight matrix (PWM) updating technique, this paper presents a Monte Carlo version of the EM motif-finding algorithm that carries out stochastic sampling in local alignment space to overcome the conventional EM's main drawback of being trapped in a local optimum. The newly implemented algorithm is named as Monte Carlo EM Motif Discovery Algorithm (MCEMDA). MCEMDA starts from an initial model, and then it iteratively performs Monte Carlo simulation and parameter update until convergence. A log-likelihood profiling technique together with the top-k strategy is introduced to cope with the phase shifts and multiple modal issues in motif discovery problem. A novel grouping motif alignment (GMA) algorithm is designed to select motifs by clustering a population of candidate local alignments and successfully applied to subtle motif discovery. MCEMDA compares favorably to other popular PWM-based and word enumerative motif algorithms tested using simulated (l, d)-motif cases, documented prokaryotic, and eukaryotic DNA motif sequences. Finally, MCEMDA is applied to detect large blocks of conserved domains using protein benchmarks and exhibits its excellent capacity while compared with other multiple sequence alignment methods. PMID:19644166
A Fast Monte Carlo Simulation for the International Linear Collider Detector
Furse, D.; /Georgia Tech
2005-12-15
The following paper contains details concerning the motivation for, implementation and performance of a Java-based fast Monte Carlo simulation for a detector designed to be used in the International Linear Collider. This simulation, presently included in the SLAC ILC group's org.lcsim package, reads in standard model or SUSY events in STDHEP file format, stochastically simulates the blurring in physics measurements caused by intrinsic detector error, and writes out an LCIO format file containing a set of final particles statistically similar to those that would have been found by a full Monte Carlo simulation. In addition to the reconstructed particles themselves, descriptions of the calorimeter hit clusters and tracks that these particles would have produced are also included in the LCIO output. These output files can then be put through various analysis codes in order to characterize the effectiveness of a hypothetical detector at extracting relevant physical information about an event. Such a tool is extremely useful in preliminary detector research and development, as full simulations are extremely cumbersome and taxing on processor resources; a fast, efficient Monte Carlo can facilitate, and even make possible, detector physics studies that would be very impractical with the full simulation, trading what is in many cases inappropriate attention to detail for valuable gains in the time required for results.
Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki
2009-10-01
Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.
Monte Carlo simulation of particle acceleration at astrophysical shocks
NASA Technical Reports Server (NTRS)
Campbell, Roy K.
1989-01-01
A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence and the particle and shock parameters. High energy cosmic ray spectra derived from Monte Carlo simulations exhibit power law behavior, just as the spectra derived from analytic calculations based on a diffusion equation do. This high energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment. Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool. This was the triumph of the Monte Carlo approach. The question of acceleration efficiency is an important one in shock acceleration. Whether shock waves are efficient enough to account for the observed flux of high energy galactic cosmic rays was examined. The efficiency of the acceleration process depends on the thermal particle pick-up and hence on the details of the low energy scattering. One of the goals is the self-consistent derivation of the accelerated particle spectra and the MHD turbulence spectra. Presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles, and the needed downstream turbulence is convected from the upstream region.
The present code is to be modified to include a better description of particle scattering (pitch-angle instead of hard-sphere) and an iterative procedure for treating the self-excitation of the MHD turbulence.
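The repeated-crossing picture described in this abstract can be captured in a toy test-particle model: each shock-crossing cycle multiplies a particle's energy by a fixed gain factor, and after each cycle the particle escapes downstream with a fixed probability, which yields a power-law spectrum. The sketch below is a minimal illustration with hypothetical gain and escape values; it is not derived from the paper's code.

```python
import numpy as np

def shock_spectrum(gain=1.1, p_escape=0.1, n_particles=100000, seed=7):
    """Toy first-order Fermi acceleration: each completed shock-crossing
    cycle multiplies the particle energy (E0 = 1) by `gain`; after each
    cycle the particle escapes downstream with probability `p_escape`.
    The number of completed cycles is therefore geometrically distributed."""
    rng = np.random.default_rng(seed)
    n_cycles = rng.geometric(p_escape, size=n_particles) - 1  # 0, 1, 2, ...
    return gain ** n_cycles                                    # final energies

def power_law_index(energies):
    """Hill-type estimate of the integral spectral index s, N(>E) ~ E^(-s),
    for energies with minimum 1."""
    return 1.0 / np.mean(np.log(energies))
```

For these illustrative parameters the recovered index is close to the textbook prediction s = -ln(1 - p_escape) / ln(gain) ≈ 1.1, showing how the balance of energy gain and escape sets the power-law slope.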
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA
NASA Astrophysics Data System (ADS)
Spiechowicz, J.; Kostur, M.; Machura, L.
2015-06-01
This work presents an updated and extended guide to properly accelerating the Monte Carlo integration of stochastic differential equations on commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well known phenomenon of noise induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: Gaussian white noise, white Poissonian noise and the dichotomous process, also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can reach an astonishing factor of about 3000 compared to a typical CPU. This significantly expands the range of problems solvable by stochastic simulations, allowing even interactive research in some cases.
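The paper's simulations run on GPUs with CUDA; as a minimal CPU-side sketch of the kind of Langevin dynamics being integrated, the following NumPy code applies the Euler-Maruyama scheme to an overdamped Brownian particle in a tilted washboard potential driven by Gaussian white noise. The potential and all parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

def washboard_velocity(F=0.2, D=0.5, T=100.0, dt=1e-2, n_paths=5000, seed=1):
    """Ensemble-averaged drift velocity of an overdamped Brownian particle
    in the tilted washboard potential V(x) = -cos(x) - F*x, integrated with
    Euler-Maruyama under Gaussian white noise of strength D."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        drift = -np.sin(x) + F            # force = -V'(x)
        x += drift * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_paths)
    return x.mean() / T                   # mean displacement per unit time
```

On a GPU each path maps naturally onto one thread, which is what makes the large speedups reported in the abstract possible; the physics of the update rule is identical.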
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios (Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003); Katsoulakis, Markos A.
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined on a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization of the functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz–Kalos–Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach.
We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
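The Bortz–Kalos–Lebowitz philosophy mentioned in the abstract — grouping events into classes and selecting first a class, then a member, then an exponential waiting time — can be sketched generically. This is only a minimal illustration of class-based event selection under assumed per-class rates, not the goal-oriented coupling construction itself.

```python
import numpy as np

def kmc_step(rng, classes):
    """One BKL-style kinetic Monte Carlo step. `classes` is a list of
    (rate, n_events) pairs: all events in a class share one rate. Pick a
    class with probability proportional to rate * n_events, then advance
    time by an exponential with the total propensity. Returns (class
    index, time increment); picking the member within the class would be
    a uniform draw."""
    weights = np.array([r * n for r, n in classes], dtype=float)
    total = weights.sum()
    i = rng.choice(len(classes), p=weights / total)
    dt = rng.exponential(1.0 / total)
    return i, dt
```

Grouping by class keeps the selection cost proportional to the number of classes rather than the number of events, which is why the abstract's level-set grouping of events by observable value is cheap to implement.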
Monte Carlo Integration Using Spatial Structure of Markov Random Field
NASA Astrophysics Data System (ADS)
Yasuda, Muneki
2015-03-01
Monte Carlo integration (MCI) techniques are important in various fields. In this study, a new MCI technique for Markov random fields (MRFs) is proposed. MCI consists of two successive parts: the first involves sampling using a technique such as the Markov chain Monte Carlo method, and the second involves an averaging operation using the obtained sample points. In the averaging operation, a simple sample averaging technique is often employed. The method proposed in this paper improves the averaging operation by exploiting the spatial structure of the MRF and is mathematically guaranteed to statistically outperform standard MCI using the simple sample averaging operation. Moreover, the proposed method can be improved in a systematic manner and is verified by numerical simulations using planar Ising models. In the latter part of this paper, the proposed method is applied to the inverse Ising problem, and we observe that it outperforms maximum pseudo-likelihood estimation.
Kinetic Monte Carlo Studies of Hydrogen Abstraction from Graphite
H. M. Cuppen; L. Hornekaer
2008-07-01
We present Monte Carlo simulations on Eley-Rideal abstraction reactions of atomic hydrogen chemisorbed on graphite. The results are obtained via a hybrid approach where energy barriers derived from density functional theory calculations are used as input to Monte Carlo simulations. By comparing with experimental data, we discriminate between contributions from different Eley-Rideal mechanisms. A combination of two different mechanisms yields good quantitative and qualitative agreement between the experimentally derived and the simulated Eley-Rideal abstraction cross sections and surface configurations. These two mechanisms include a direct Eley-Rideal reaction with fast diffusing H atoms and a dimer mediated Eley-Rideal mechanism with increased cross section at low coverage. Such a dimer mediated Eley-Rideal mechanism has not previously been proposed and serves as an alternative explanation to the steering behavior often given as the cause of the coverage dependence observed in Eley-Rideal reaction cross sections.
Minimising biases in full configuration interaction quantum Monte Carlo.
Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W
2015-03-14
We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme commonly used in diffusion Monte Carlo to remove the bias caused by population control [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)] is effective and recommend its use as a post-processing step. PMID:25770522
Monte Carlo simulations of plutonium gamma-ray spectra
Koenig, Z.M.; Carlson, J.B.; Wang, Tzu-Fang; Ruhter, W.D.
1993-07-16
Monte Carlo calculations were investigated as a means of simulating the gamma-ray spectra of Pu. These simulated spectra will be used to develop and evaluate gamma-ray analysis techniques for various nondestructive measurements. Simulated spectra of calculational standards can be used for code intercomparisons, to understand systematic biases and to estimate minimum detection levels of existing and proposed nondestructive analysis instruments. The capability to simulate gamma-ray spectra from HPGe detectors could significantly reduce the costs of preparing large numbers of real reference materials. MCNP was used for the Monte Carlo transport of the photons. Results from the MCNP calculations were folded in with a detector response function to produce a realistic spectrum. Plutonium spectrum peaks were produced with Lorentzian shapes for the x-rays and Gaussian distributions for the remaining peaks. The MGA code determined the Pu isotopic composition and specific power of this calculated spectrum and compared it to a similar analysis on a measured spectrum.
Monte Carlo Simulation of the Milagro Gamma-ray Observatory
NASA Astrophysics Data System (ADS)
Vasileiou, V.
The Milagro gamma-ray observatory is a water-Cherenkov detector capable of observing air showers produced by very high energy gamma-rays. The sensitivity and performance of the detector is determined by a detailed Monte Carlo simulation and verified through the observation of gamma-ray sources and the isotropic cosmic-ray background. Corsika is used for simulating the extensive air showers produced by either hadrons (background) or gamma rays (signal). A GEANT4 based application is used for simulating the response of the Milagro detector to the air shower particles reaching the ground. The GEANT4 simulation includes a detailed description of the optical properties of the detector and the response of the photomultiplier tubes. Details and results from the Milagro Monte Carlo simulation will be presented.
Monte Carlo Simulations on a 9-node PC Cluster
NASA Astrophysics Data System (ADS)
Gouriou, J.
Monte Carlo simulation methods are frequently used in the fields of medical physics, dosimetry and metrology of ionising radiation. Nevertheless, the main drawback of this technique is that it is computationally slow, because the statistical uncertainty of the result improves only as the square root of the computational time. We present a method which reduces the effective running time by a factor of 10 to 20. In practice, the aim was to reduce the calculation time of the LNHB metrological applications from several weeks to a few days. This approach includes the use of a PC cluster, under the Linux operating system, and the PVM parallel library (version 3.4). The Monte Carlo codes EGS4, MCNP and PENELOPE have been implemented on this platform, the latter two adapted to run under the PVM environment. The maximum observed speedup ranges from a factor of 13 to 18, depending on the codes and the problems to be simulated.
Computer Monte Carlo simulation in quantitative resource estimation
Root, D.H.; Menzie, W.D.; Scott, W.A.
1992-01-01
The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
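Step (3) of the outlined procedure — combining a simulated number of deposits with historical grade and tonnage distributions to obtain a probability distribution of contained metal — can be sketched as follows. Every distribution and parameter value below is a hypothetical placeholder, not the survey's data, and the dependencies between grades and tonnages discussed in the abstract are omitted for simplicity.

```python
import numpy as np

def contained_metal_distribution(n_trials=20000, seed=2):
    """Monte Carlo combination of deposit count, tonnage and grade.
    Each trial draws a number of deposits (Poisson), then a tonnage and a
    grade for each deposit (lognormal), and sums the contained metal."""
    rng = np.random.default_rng(seed)
    totals = np.empty(n_trials)
    for i in range(n_trials):
        n_dep = rng.poisson(2.0)                                   # deposits
        tonnage = rng.lognormal(mean=13.0, sigma=1.5, size=n_dep)  # tonnes ore
        grade = rng.lognormal(mean=-6.0, sigma=0.8, size=n_dep)    # metal frac.
        totals[i] = np.sum(tonnage * grade)                        # tonnes metal
    return totals
```

Quantiles of the returned array then give the kind of probabilistic resource statement (e.g. a 10th/50th/90th percentile of contained metal) that the assessment method is designed to produce.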
Hydrologic Data Assimilation Using Particle Markov Chain Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2012-12-01
In this presentation, I will introduce theory, concepts and simulation results of a novel data assimilation scheme for joint inference of model parameters and state variables. This Particle-DREAM method combines the strengths of sequential Monte Carlo (SMC) sampling and Markov chain Monte Carlo (MCMC) simulation and is especially designed for treatment of forcing, parameter, model structural and calibration data error. Two different variants of Particle-DREAM are presented to satisfy assumptions regarding the temporal behavior of the model parameters. A few different case studies are used to illustrate Particle-DREAM and demonstrate that this method requires far fewer particles than current state-of-the-art (hydrologic) particle filters to closely track the evolving target distribution of interest. The method provides important insights into the information content of the calibration data and non-stationarity of model parameters.
grmonty: A MONTE CARLO CODE FOR RELATIVISTIC RADIATIVE TRANSPORT
Dolence, Joshua C.; Gammie, Charles F.; Leung, Po Kin; Moscibrodzka, Monika
2009-10-01
We describe a Monte Carlo radiative transport code intended for calculating spectra of hot, optically thin plasmas in full general relativity. The version we describe here is designed to model hot accretion flows in the Kerr metric and therefore incorporates synchrotron emission and absorption, and Compton scattering. The code can be readily generalized, however, to account for other radiative processes and an arbitrary spacetime. We describe a suite of test problems, and demonstrate the expected N^(-1/2) convergence rate, where N is the number of Monte Carlo samples. Finally, we illustrate the capabilities of the code with a model calculation, a spectrum of the slowly accreting black hole Sgr A* based on data provided by a numerical general relativistic MHD model of the accreting plasma.
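The N^(-1/2) convergence rate quoted in the abstract is generic to Monte Carlo estimators and is easy to verify empirically on a toy integrand; the sketch below has nothing to do with the radiative transport code itself, it just demonstrates the scaling.

```python
import numpy as np

def rms_error(n_samples, n_rep=400, seed=6):
    """RMS error of an n_samples-point Monte Carlo estimate of
    E[U^2] = 1/3 for U ~ Uniform(0,1), over n_rep independent repetitions."""
    rng = np.random.default_rng(seed)
    estimates = (rng.random((n_rep, n_samples)) ** 2).mean(axis=1)
    return float(np.sqrt(np.mean((estimates - 1.0 / 3.0) ** 2)))
```

Increasing the sample count by a factor of 16 should shrink the RMS error by about a factor of 4, the signature of N^(-1/2) convergence.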
Efficient, automated Monte Carlo methods for radiation transport
Kong Rong; Ambrose, Martin; Spanier, Jerome
2008-11-20
Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms, based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. If still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.
Engineering local optimality in quantum Monte Carlo algorithms
Pollet, Lode (E-mail: pollet@itp.phys.ethz.ch); Van Houcke, Kris; Rombouts, Stefan M.A.
2007-08-10
Quantum Monte Carlo algorithms based on a world-line representation, such as the worm algorithm and the directed loop algorithm, are among the most powerful numerical techniques for the simulation of non-frustrated spin models and of bosonic models. Both algorithms work in the grand-canonical ensemble and can have a winding number larger than zero. However, they retain a lot of intrinsic degrees of freedom which can be used to optimize the algorithm. Guided by rigorous statements on the globally optimal form of Markov chain Monte Carlo simulations, we devise a locally optimal formulation of the worm algorithm while incorporating ideas from the directed loop algorithm. We provide numerical examples for the soft-core Bose-Hubbard model and various spin-S models.
The MCLIB library: Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-09-01
Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, a neutron is first selected from the source distribution and projected through the instrument, using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something; if it hits the detector, it is tallied in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
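The integration recipe in the abstract's first sentences — use random numbers to select a value for each variable, evaluate the integrand, repeat and average — can be written in a few lines. This is a generic Python sketch of that recipe, unrelated to the Fortran MCLIB routines themselves.

```python
import numpy as np

def mc_integrate(f, lo, hi, n=100000, seed=3):
    """Plain Monte Carlo integration over the box [lo, hi]: draw uniform
    points, evaluate the integrand at each, average, and scale by the
    box volume."""
    rng = np.random.default_rng(seed)
    lo = np.atleast_1d(np.asarray(lo, dtype=float))
    hi = np.atleast_1d(np.asarray(hi, dtype=float))
    pts = rng.uniform(lo, hi, size=(n, lo.size))   # one row per sample
    volume = np.prod(hi - lo)
    return float(volume * np.mean(f(pts)))
```

For example, integrating x^2 + y^2 over the unit square returns a value close to the exact 2/3; the same averaging structure underlies the neutron tally described in the abstract, with the instrument simulation playing the role of the integrand.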
Radiotherapy Monte Carlo simulation using cloud computing technology.
Poole, C M; Cornelius, I; Trapp, J V; Langton, C M
2012-12-01
Cloud computing allows vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate as a proof of principle the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation, without the need for dedicated local computer hardware. PMID:23188699
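The cost observation in this abstract can be reproduced with a one-line billing model: under per-machine-hour billing, each of n machines is billed for ceil(T/n) whole hours, so the total bill is minimal exactly when n divides the total simulation time. This is an illustrative model assuming hourly billing granularity, not the paper's actual cost analysis.

```python
import math

def billed_machine_hours(total_work_hours, n_machines):
    """Per-machine-hour billing: splitting `total_work_hours` of compute
    across n machines bills each machine for ceil(T/n) whole hours, so
    any remainder is paid for as idle time on every machine."""
    return n_machines * math.ceil(total_work_hours / n_machines)
```

For a 12 machine-hour simulation, 3 or 4 machines bill exactly 12 hours, while 5 machines bill 15, which is why relative cost is optimal when n is a factor of the total simulation time.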
Estimation of beryllium ground state energy by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Kabir, K. M. Ariful; Halder, Amal
2015-05-01
Quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids and a variety of model systems. Using the variational Monte Carlo method we have calculated the ground state energy of the beryllium atom. Our calculations are based on a modified four-parameter trial wave function, which leads to good results compared with the few-parameter trial wave functions presented before. Based on random numbers, we can generate a large sample of electron locations to estimate the ground state energy of beryllium. Our calculation gives a good estimate of the ground state energy of the beryllium atom compared with the corresponding exact data.
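A minimal variational Monte Carlo loop of the kind described — Metropolis sampling of |psi|^2 and averaging the local energy — can be shown on the 1D harmonic oscillator as a toy stand-in for the beryllium calculation. The trial function and parameters are illustrative, not those of the paper.

```python
import numpy as np

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=4):
    """Variational MC for the 1D harmonic oscillator (hbar = m = omega = 1)
    with trial wave function psi_alpha(x) = exp(-alpha * x^2): Metropolis-
    sample |psi|^2 and average the local energy
    E_L(x) = alpha + x^2 * (1/2 - 2*alpha^2)."""
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for i in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < np.exp(-2 * alpha * (x_new**2 - x**2)):
            x = x_new
        if i > n_steps // 10:                  # discard burn-in
            energies.append(alpha + x**2 * (0.5 - 2 * alpha**2))
    return float(np.mean(energies))
```

At the optimal alpha = 1/2 the local energy is constant and the estimate equals the exact ground state energy 1/2; any other alpha gives a higher energy, which is the variational principle the beryllium calculation exploits with its four-parameter trial function.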
Ab initio Monte Carlo investigation of small lithium clusters.
Srinivas, S.
1999-06-16
Structural and thermal properties of small lithium clusters are studied using ab initio-based Monte Carlo simulations. The ab initio scheme uses a Hartree-Fock/density functional treatment of the electronic structure combined with a jump-walking Monte Carlo sampling of nuclear configurations. Structural forms of Li_8 and Li_9^+ clusters are obtained and their thermal properties analyzed in terms of probability distributions of the cluster potential energy, average potential energy and configurational heat capacity, all considered as a function of the cluster temperature. Details of the gradual evolution with temperature of the structural forms sampled are examined. Temperatures characterizing the onset of structural changes and isomer coexistence are identified for both clusters.
Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.
Leigh, Jessica W; Bryant, David
2015-09-01
Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology. PMID:26012871
Large-cell Monte Carlo renormalization of irreversible growth processes
NASA Technical Reports Server (NTRS)
Nakanishi, H.; Family, F.
1985-01-01
Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers and, in any case, demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.
Kinetic Monte Carlo simulations of radiation induced segregation and precipitation
NASA Astrophysics Data System (ADS)
Soisson, Frédéric
2006-03-01
Kinetics of radiation induced segregation and precipitation in binary alloys are studied by Monte Carlo simulations. The simulations are based on a simple atomic model of diffusion under electron irradiation, which takes into account the creation of point defects, the recombination of close vacancy-interstitial pairs and the point defect annihilation at sinks. They can reproduce the coupling between point defect fluxes towards sinks and atomic fluxes, which controls the segregation tendency. In pure metals and ideal solid solutions, the Monte Carlo results are found to be in very good agreement with classical models based on rate equations. In alloys with an unmixing tendency, we show how the interaction between the point defect distribution, the solute segregation and the precipitation driving force can generate complex microstructural evolutions, which depend on the very details of atomic-scale diffusion properties.
Sign problem and Monte Carlo calculations beyond Lefschetz thimbles
Andrei Alexandru; Gokce Basar; Paulo F. Bedaque; Gregory W. Ridgway; Neill C. Warrington
2015-12-29
We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action. We describe a family of such manifolds that interpolate between the tangent space at one critical point, where the sign problem is milder compared to the real plane but in some cases still severe, and the union of relevant thimbles, where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling. We exemplify this approach using a simple 0 + 1 dimensional fermion model previously used in sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.
Monte Carlo methods for light propagation in biological tissues.
Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine
2015-11-01
Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and implement algorithms based on Monte Carlo methods in order to estimate the quantity of light received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis-Hastings algorithm. The resulting estimation methods are then compared to the so-called Wang-Prahl (or Wang) method. Finally, the formal representation allows us to derive a non-linear optimization algorithm, close to Levenberg-Marquardt, that is used for the estimation of the scattering and absorption coefficients of the tissue from measurements. PMID:26362232
Variance reduction for Fokker-Planck based particle Monte Carlo schemes
NASA Astrophysics Data System (ADS)
Gorji, M. Hossein; Andric, Nemanja; Jenny, Patrick
2015-08-01
Recently, Fokker-Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1-3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker-Planck model is considered. First, deviational schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker-Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is solved simultaneously together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
Paul P.H. Wilson
2005-07-30
The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. 
The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods which bias the probability distributions while adjusting atom weights to preserve a fair game (Section 3), and efficiency measures to provide local and global measures of the effectiveness of the non-analog methods (Section 4). Following this development, the MCise (Monte Carlo isotope simulation engine) software was used to explore the efficiency of different modeling techniques (Section 5).
Fast Monte Carlo, slow protein kinetics and perfect loop closure
NASA Astrophysics Data System (ADS)
Wedemeyer, William Joseph
This thesis presents experimental studies of proteins and computational methods which may help in simulations of proteins. The experimental chapters focus on the folding and unfolding of bovine pancreatic ribonuclease A. Methods are developed for tracking the cis-trans isomerization of individual prolines under folding and unfolding conditions, and for identifying critical folding structures by assessing the effects of individual incorrect X-Pro isomers on the conformational folding. The major beta-hairpin region is identified as more critical than the C-terminal hydrophobic core. Site-directed mutagenesis of three nearby tyrosines to phenylalanine indicates that tyrosyl hydrogen bonds are essential to rapid conformational folding. Another experimental chapter presents an analytic solution of the kinetics of competitive binding, which is applied to estimating the association and dissociation rate constants of hirudin and thrombin. An extension of this method is proposed to obtain kinetic rate constants for the conformational folding and unfolding of individual parts of a protein. The analytic solution is found to be roughly one-hundred-fold more efficient than the best numerical integrators. The theoretical chapters present methods potentially useful in protein simulations. The loop closure problem is solved geometrically, allowing the protein to be broken into segments which move quasi-independently. Two bootstrap Monte Carlo methods are developed for sampling functions that are characterized by high anisotropy, e.g. long, narrow valleys. Two chapters are devoted to smoothing methods; the first develops a method for exploiting smoothing to evaluate the energy in order N (not N^2) time, while the second examines the limitations of one smoothing method, the Diffusion Equation Method, and suggests improvements to its smoothing transformation and reversing procedure.
One chapter develops a highly optimized simulation package for lattice heteropolymers by careful choice of data structures and by treating the Metropolis acceptance criterion itself as a stochastic process. Lastly, an integrated software package, PROSE, is developed to perform molecular simulations. The routines are written in C for high performance but embedded in scripting languages for convenience. The package is modular and object-oriented to allow new algorithms to be tested rapidly. A graphical user interface is provided for visualization and to assist non-programmers.
Three-dimensional Monte Carlo model for BWR fluence calculations
Sitaraman, S.; Rogers, D.R.; Kruger, R.M.
1997-12-01
A three-dimensional Monte Carlo model has been developed for accurate boiling water reactor (BWR) neutron and gamma fluence calculations using continuous-energy MCNP. Unlike earlier light water reactor models being run in the fixed-source mode with simplified geometries, this model is run in the criticality mode with actual geometry and material composition. The MCNP fast flux shapes in the downcomer region are qualified against measurements and are compared with results from two-dimensional deterministic DORT calculations.
Quantum Monte Carlo study of porphyrin transition metal complexes
NASA Astrophysics Data System (ADS)
Koseki, Jun; Maezono, Ryo; Tachikawa, Masanori; Towler, M. D.; Needs, R. J.
2008-08-01
Diffusion quantum Monte Carlo (DMC) calculations for transition metal (M) porphyrin complexes (MPo, M=Ni,Cu,Zn) are reported. We calculate the binding energies of the transition metal atoms to the porphin molecule. Our DMC results are in reasonable agreement with those obtained from density functional theory calculations using the B3LYP hybrid exchange-correlation functional. Our study shows that such calculations are feasible with the DMC method.
Direct Monte Carlo Simulations of Hypersonic Viscous Interactions Including Separation
NASA Technical Reports Server (NTRS)
Moss, James N.; Rault, Didier F. G.; Price, Joseph M.
1993-01-01
Results of calculations obtained using the direct simulation Monte Carlo method for Mach 25 flow over a control surface are presented. The numerical simulations are for a 35-deg compression ramp at a low-density wind-tunnel test condition. Calculations obtained using both two- and three-dimensional solutions are reviewed, and a qualitative comparison is made with oil-flow pictures that highlight separation and three-dimensional flow structure.
Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model
D.P. Stotler
2005-06-09
The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.
Monte Carlo simulation experiments on box-type radon dosimeter
NASA Astrophysics Data System (ADS)
Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid
2014-11-01
Epidemiological studies show that inhalation of radon gas (222Rn) may be carcinogenic, especially to mine workers, people living in closed, energy-conserving indoor environments, and underground dwellers. It is, therefore, of paramount importance to measure 222Rn concentrations (Bq/m3) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector such as CR-39 are widely used. The fraction of radon alphas emitted in the volume of the box-type dosimeter that result in latent track formation on the CR-39 is the latent track registration efficiency, which is ultimately required to evaluate the radon concentration and, consequently, the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency of a box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two self-developed Monte Carlo simulation techniques were employed: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The simulations revealed that there are two types of efficiencies, namely the intrinsic efficiency (ηint) and the alpha hit efficiency (ηhit). The ηint depends only on the dimensions of the dosimeter, whereas ηhit depends on both the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of the intrinsic and hit efficiencies. It is concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particles, the hit efficiency reaches 100%; the intrinsic efficiency, however, still plays its role. The Monte Carlo simulation results were found helpful for understanding the intricate track registration mechanisms in the box-type dosimeter.
The paper also explains how the radon concentration can be obtained from the experimentally measured etched-track density; a program based on the RAHI method is also given.
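The geometric picture described above can be illustrated with a small ray-sampling sketch in the spirit of the RAHI method: alphas are emitted uniformly in the box volume with isotropic directions, and a hit is scored when the straight-line path reaches the detector face within the alpha range in air. The dimensions and detector placement below are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def hit_efficiency(L, W, H, alpha_range, n=20000, seed=1):
    # Fraction of alphas, emitted uniformly in an L x W x H box with isotropic
    # directions, whose straight-line path reaches the detector face (z = 0)
    # before the alpha range in air is exhausted.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y, z = rng.uniform(0, L), rng.uniform(0, W), rng.uniform(0, H)
        # isotropic direction: uniform cos(theta) and azimuth
        cos_t = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        if dz >= 0.0:
            continue                  # moving away from the detector face
        t = -z / dz                   # path length to the z = 0 plane
        if t > alpha_range:
            continue                  # alpha stops in air before arriving
        xh, yh = x + t * dx, y + t * dy
        if 0.0 <= xh <= L and 0.0 <= yh <= W:
            hits += 1
    return hits / n
```

Once the range exceeds the box diagonal, the fraction computed here saturates at its purely geometric maximum, consistent with the conclusion above that only the intrinsic (geometric) efficiency then remains in play.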
Regenerative Markov Chain Monte Carlo for any distribution.
Minh, D.
2012-01-01
While Markov chain Monte Carlo (MCMC) methods are frequently used for difficult calculations in a wide range of scientific disciplines, they suffer from a serious limitation: their samples are not independent and identically distributed. Consequently, estimates of expectations are biased if the initial value of the chain is not drawn from the target distribution. Regenerative simulation provides an elegant solution to this problem. In this article, we propose a simple regenerative MCMC algorithm to generate variates for any distribution.
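The limitation being addressed, that MCMC draws are correlated rather than i.i.d., is easy to demonstrate with a plain random-walk Metropolis sampler (a generic textbook sketch, not the regenerative algorithm proposed in the article):

```python
import math
import random

def metropolis_normal(n, step=1.0, seed=42):
    # Random-walk Metropolis chain targeting the standard normal density.
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n):
        prop = x + rng.uniform(-step, step)
        # accept with probability min(1, pi(prop)/pi(x)), pi = N(0, 1)
        if math.log(rng.random()) < 0.5 * (x * x - prop * prop):
            x = prop
        chain.append(x)
    return chain

def lag1_autocorr(xs):
    # Sample lag-1 autocorrelation of the chain.
    n = len(xs)
    m = sum(xs) / n
    var = sum((v - m) ** 2 for v in xs) / n
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1)) / n
    return cov / var
```

Successive draws show a lag-1 autocorrelation well above zero, which is exactly why naive i.i.d. error bars are invalid and why regeneration times, which split the chain into independent tours, are attractive.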
Surface tension of lipid-water interfaces: Monte Carlo simulations
Scott, H.L.; Lee, C.Y.
1980-11-15
The results of several computer simulations are presented using a model for the head group water interface of a lipid bilayer. Simulations were performed using the Monte Carlo method, enhanced by a ''half-umbrella sampling'' algorithm developed in an earlier paper. The results can be used in a comparative way, along with the earlier results for pure water, to estimate the effective dipole moment of a typical phospholipid in a bilayer in excess water.
Monte Carlo calculation of patient organ doses from computed tomography.
Oono, Takeshi; Araki, Fujio; Tsuduki, Shoya; Kawasaki, Keiichi
2014-01-01
In this study, we aimed to evaluate quantitatively the patient organ dose from computed tomography (CT) using Monte Carlo calculations. A multidetector CT unit (Aquilion 16, TOSHIBA Medical Systems) was modeled with the GMctdospp (IMPS, Germany) software based on the EGSnrc Monte Carlo code. The X-ray spectrum and the configuration of the bowtie filter for the Monte Carlo modeling were determined from the chamber measurements for the half-value layer (HVL) of aluminum and the dose profile (off-center ratio, OCR) in air. The calculated HVL and OCR were compared with measured values for body irradiation with 120 kVp. The Monte Carlo-calculated patient dose distribution was converted to the absorbed dose measured by a Farmer chamber with a (60)Co calibration factor at the center of a CT water phantom. The patient dose was evaluated from dose-volume histograms for the internal organs in the pelvis. The calculated Al HVL was in agreement within 0.3% with the measured value of 5.2 mm. The calculated dose profile in air matched the measured value within 5% in a range of 15 cm from the central axis. The mean doses for soft tissues were 23.5, 23.8, and 27.9 mGy for the prostate, rectum, and bladder, respectively, under exposure conditions of 120 kVp, 200 mA, a beam pitch of 0.938, and beam collimation of 32 mm. For bones of the femur and pelvis, the mean doses were 56.1 and 63.6 mGy, respectively. The doses for bone were up to 2-3 times those for soft tissue, corresponding to the ratio of their mass-energy absorption coefficients. PMID:24293361
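The HVL comparison above rests on the standard narrow-beam relation μ = ln 2 / HVL; a one-line sketch using the 5.2 mm aluminum HVL quoted in the abstract:

```python
import math

def transmission(thickness_mm, hvl_mm=5.2):
    # Narrow-beam attenuation: I/I0 = exp(-mu * x), with mu = ln(2) / HVL.
    # hvl_mm defaults to the measured aluminum HVL quoted in the abstract.
    mu = math.log(2.0) / hvl_mm
    return math.exp(-mu * thickness_mm)
```

One HVL of material halves the beam and two HVLs quarter it, which is the consistency check implicit in comparing calculated and measured HVLs.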
Optimization of Ballistic Deflection Transistors by Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Millithaler, J.-F.; Iñiguez-de-la-Torre, I.; Mateos, J.; González, T.; Margala, M.
2015-10-01
This paper presents an optimization of the current-voltage characteristic of Ballistic Deflection Transistors. The implementation of an adequate surface charge model in a Monte Carlo tool shows a very good agreement with the available experimental data and allows us to predict the influence of different parameters, like temperature, channel and trench dimensions on the device output. These results are of importance for further use of this device in logical circuit applications.
Recent advances in the Mercury Monte Carlo particle transport code
Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.
2013-07-01
We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)
Monte Carlo Study on Anomalous Carrier Diffusion in Inhomogeneous Semiconductors
NASA Astrophysics Data System (ADS)
Mori, N.; Hill, R. J. A.; Patanè, A.; Eaves, L.
2015-10-01
We perform ensemble Monte Carlo simulations of electron diffusion in high mobility inhomogeneous InAs layers. Electrons move ballistically for short times while moving diffusively for sufficiently long times. We find that electrons show anomalous diffusion in the intermediate time domain. Our study suggests that electrons in inhomogeneous InAs could be used to experimentally explore generalized random walk phenomena, which, some studies assert, also occur naturally in the motion of animal foraging paths.
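The ballistic-to-diffusive crossover described here can be reproduced with a minimal persistent-random-walk ensemble (a 1D toy model, not the authors' InAs simulation; the flip probability and observation times are arbitrary assumptions):

```python
import random

def msd(n_walkers, n_steps, flip_prob, record, seed=9):
    # Ensemble mean-square displacement of walkers whose velocity (+1 or -1)
    # reverses with probability flip_prob per step. For times much shorter
    # than the persistence time ~1/flip_prob the motion is ballistic
    # (MSD ~ t^2); for much longer times it is diffusive (MSD ~ t).
    rng = random.Random(seed)
    out = {t: 0.0 for t in record}
    for _ in range(n_walkers):
        x = 0.0
        v = 1 if rng.random() < 0.5 else -1
        for t in range(1, n_steps + 1):
            if rng.random() < flip_prob:
                v = -v
            x += v
            if t in out:
                out[t] += x * x
    return {t: s / n_walkers for t, s in out.items()}
```

Doubling the time roughly quadruples the MSD in the ballistic regime but only doubles it in the diffusive regime; the intermediate regime interpolates between the two, which is where the anomalous behavior reported above lives.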
OBJECT KINETIC MONTE CARLO SIMULATIONS OF MICROSTRUCTURE EVOLUTION
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.
2013-09-30
The objective is to report the development of the flexible object kinetic Monte Carlo (OKMC) simulation code KSOME (kinetic simulation of microstructure evolution) which can be used to simulate microstructure evolution of complex systems under irradiation. In this report we briefly describe the capabilities of KSOME and present preliminary results for short term annealing of single cascades in tungsten at various primary-knock-on atom (PKA) energies and temperatures.
Autocorrelation and Dominance Ratio in Monte Carlo Criticality Calculations
Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Kornreich, Drew E.
2003-11-15
The cycle-to-cycle correlation (autocorrelation) in Monte Carlo criticality calculations is analyzed concerning the dominance ratio of fission kernels. The mathematical analysis focuses on how the eigenfunctions of a fission kernel decay if operated on by the cycle-to-cycle error propagation operator of the Monte Carlo stationary source distribution. The analytical results obtained can be summarized as follows: When the dominance ratio of a fission kernel is close to unity, autocorrelation of the k-effective tallies is weak and may be negligible, while the autocorrelation of the source distribution is strong and decays slowly. The practical implication is that when one analyzes a critical reactor with a large dominance ratio by Monte Carlo methods, the confidence interval estimation of the fission rate and other quantities at individual locations must account for the strong autocorrelation. Numerical results are presented for sample problems with a dominance ratio of 0.85-0.99, where Shannon and relative entropies are utilized to exclude the influence of initial nonstationarity.
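The practical point, that strong cycle-to-cycle correlation inflates the variance of a tally mean far beyond the naive i.i.d. estimate, can be illustrated with an AR(1) surrogate whose lag-1 correlation plays the role of the dominance ratio (a deliberately simplified analogy, not the fission-kernel analysis of the paper):

```python
import math
import random

def ar1_series(rho, n, seed=0):
    # Surrogate for cycle-wise tallies with lag-1 autocorrelation rho.
    rng = random.Random(seed)
    x, out = 0.0, []
    scale = math.sqrt(1.0 - rho * rho)   # keeps unit stationary variance
    for _ in range(n):
        x = rho * x + scale * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def naive_and_corrected_sem(xs, rho):
    # Naive i.i.d. standard error of the mean, and the large-n AR(1)
    # correction with variance inflation factor (1 + rho) / (1 - rho).
    n = len(xs)
    mean = sum(xs) / n
    var = sum((v - mean) ** 2 for v in xs) / (n - 1)
    naive = math.sqrt(var / n)
    return naive, naive * math.sqrt((1.0 + rho) / (1.0 - rho))
```

For rho = 0.95, roughly a large dominance ratio, the corrected standard error is more than six times the naive one, which is why confidence intervals on local tallies must account for the autocorrelation.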
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculations, deterministic methods have some advantages and some disadvantages relative to other kinds of codes, such as Monte Carlo codes. The main advantage is the short computation time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions and can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison of the capability of both Monte Carlo and deterministic methods in day-by-day shielding calculation using sensitivity analysis of significant parameters, such as energy and geometrical conditions. PMID:16381723
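The build-up factor discussed above enters the deterministic estimate as a multiplicative correction to narrow-beam attenuation. A sketch using a Taylor-form build-up factor follows; the coefficients A, a1, a2 are placeholders chosen only to behave qualitatively correctly (B = 1 at zero thickness, growing with depth), not values from MicroShield or the paper.

```python
import math

def slab_transmission(mu, x, A=8.0, a1=-0.10, a2=0.03):
    # Returns (uncollided, build-up-corrected) transmission through a slab
    # of thickness x with attenuation coefficient mu.
    mfp = mu * x                       # shield thickness in mean free paths
    uncollided = math.exp(-mfp)
    # Taylor-form build-up factor: B = A*exp(-a1*mfp) + (1 - A)*exp(-a2*mfp);
    # with a1 < 0 the first term grows with depth, so B(0) = 1 and B > 1 beyond.
    buildup = A * math.exp(-a1 * mfp) + (1.0 - A) * math.exp(-a2 * mfp)
    return uncollided, buildup * uncollided
```

The gap between the two numbers is exactly the scattered-photon contribution the build-up factor is meant to restore, and mis-extrapolating B at low energies is the error source the abstract warns about.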
A Wigner Monte Carlo approach to density functional theory
Sellier, J.M. Dimov, I.
2014-08-01
In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems, a lithium atom, a boron atom and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (even though no restriction is imposed on the choice of initial conditions). We also show a good agreement with the standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.
Valence-bond quantum Monte Carlo algorithms defined on trees.
Deschner, Andreas; Sørensen, Erik S
2014-09-01
We present a class of algorithms for performing valence-bond quantum Monte Carlo of quantum spin models. Valence-bond quantum Monte Carlo is a projective T=0 Monte Carlo method based on sampling of a set of operator strings that can be viewed as forming a treelike structure. The algorithms presented here utilize the notion of a worm that moves up and down this tree and changes the associated operator string. In quite general terms, we derive a set of equations whose solutions correspond to a whole class of algorithms. As specific examples of this class of algorithms, we focus on two cases: the bouncing worm algorithm, in which updates are always accepted by allowing the worm to bounce up and down the tree, and the driven worm algorithm, in which a single parameter controls how far up the tree the worm reaches before turning around. The latter algorithm involves only a single bounce, where the worm turns from going up the tree to going down; the presence of the control parameter necessitates the introduction of an acceptance probability for the update. PMID:25314561
Monte Carlo simulations of particle acceleration at oblique shocks
NASA Technical Reports Server (NTRS)
Baring, Matthew G.; Ellison, Donald C.; Jones, Frank C.
1994-01-01
The Fermi shock acceleration mechanism may be responsible for the production of high-energy cosmic rays in a wide variety of environments. Modeling of this phenomenon has largely focused on plane-parallel shocks, and one of the most promising techniques for its study is the Monte Carlo simulation of particle transport in shocked fluid flows. One of the principal problems in shock acceleration theory is the mechanism and efficiency of injection of particles from the thermal gas into the accelerated population. The Monte Carlo technique is ideally suited to addressing the injection problem directly, and previous applications of it to the quasi-parallel Earth bow shock led to very successful modeling of proton and heavy ion spectra, as well as other observed quantities. Recently this technique has been extended to oblique shock geometries, in which the upstream magnetic field makes a significant angle Θ_B1 to the shock normal. Spectral results from test-particle Monte Carlo simulations of cosmic-ray acceleration at oblique, nonrelativistic shocks are presented. The results show that low Mach number shocks have injection efficiencies that are relatively insensitive to (though not independent of) the shock obliquity, but that there is a dramatic drop in efficiency for shocks of Mach number 30 or more as the obliquity increases above 15 deg. Cosmic-ray distributions just upstream of the shock reveal prominent bumps at energies below the thermal peak; these disappear far upstream but might be observable features close to astrophysical shocks.
Chemical accuracy from quantum Monte Carlo for the benzene dimer
NASA Astrophysics Data System (ADS)
Azadi, Sam; Cohen, R. E.
2015-09-01
We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.
Using hierarchical octrees in Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Saftly, W.; Camps, P.; Baes, M.; Gordon, K. D.; Vandewoude, S.; Rahimi, A.; Stalevski, M.
2013-06-01
A crucial aspect of 3D Monte Carlo radiative transfer is the choice of the spatial grid used to partition the dusty medium. We critically investigate the use of octree grids in Monte Carlo dust radiative transfer, with two different octree construction algorithms (regular and barycentric subdivision) and three different octree traversal algorithms (top-down, neighbour list, and the bookkeeping method). In general, regular octree grids need higher levels of subdivision compared to the barycentric grids for a fixed maximum cell mass threshold criterion. The total number of grid cells, however, depends on the geometry of the model. Surprisingly, regular octree grid simulations turn out to be 10 to 20% more efficient in run time than the barycentric grid simulations, even for those cases where the latter contain fewer grid cells than the former. Furthermore, we find that storing neighbour lists for each cell in an octree, ordered according to decreasing overlap area, is worth the additional memory and implementation overhead: using neighbour lists can cut down the grid traversal by 20% compared to the traditional top-down method. In conclusion, the combination of a regular node subdivision and the neighbour list method results in the most efficient octree structure for Monte Carlo radiative transfer simulations.
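The regular-subdivision criterion compared above (split any cell whose mass exceeds a fixed threshold) is easy to sketch; here unit-mass particles stand in for the dusty medium. This is a structural illustration only, not the grid code used in the paper.

```python
def build_octree(points, center, half, max_mass, depth=0, max_depth=10):
    # Regular octree subdivision: split any cell whose mass (particle count)
    # exceeds max_mass, until the threshold or the depth limit is met.
    node = {"mass": len(points), "children": []}
    if len(points) <= max_mass or depth == max_depth:
        return node
    h = half / 2.0
    for ox in (-h, h):
        for oy in (-h, h):
            for oz in (-h, h):
                c = (center[0] + ox, center[1] + oy, center[2] + oz)
                # half-open bounds so each point lands in exactly one child
                sub = [p for p in points
                       if all(c[i] - h <= p[i] < c[i] + h for i in range(3))]
                node["children"].append(
                    build_octree(sub, c, h, max_mass, depth + 1, max_depth))
    return node

def leaves(node):
    # Collect the leaf cells, over which a Monte Carlo photon would be traced.
    if not node["children"]:
        return [node]
    return [leaf for child in node["children"] for leaf in leaves(child)]
```

Every leaf then satisfies the maximum-cell-mass criterion, and the depth of subdivision needed for a fixed threshold is exactly the quantity the paper compares between regular and barycentric construction.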
Propagation of light: Coherent or Monte-Carlo computation ?
Jacques Moret-Bailly
2011-10-03
Wrong Monte-Carlo computations are used to study the propagation of light in the low-pressure gas of nebulae. We recall that the incoherent interactions required for Monte Carlo calculations, which hinder coherent interactions, are due to collisions that disappear at low pressure. Incoherent interactions blur the images while coherent ones do not. We introduce coherent optical effects or substitute them for Monte Carlo calculations in published papers, improving the results and avoiding the introduction of "new physics". The spectral radiance of novae has the magnitude of the radiance of lasers, and large column densities are available in the nebulae. Several types of coherent interactions (superradiance, multiphoton effects, etc.), well studied using lasers, work in nebulae as in laboratories. The relatively thin shell of plasma containing atoms around a Strömgren sphere is superradiant, so that the limb of the sphere is seen as a circle which may be dotted into an even number of "pearls". The superradiant beams induce a multiphotonic scattering of the light rays emitted by the star, improving the brightness of the limb and darkening the star. Impulsive Stimulated Raman Scattering (ISRS) in excited atomic hydrogen shifts the frequencies of electromagnetic waves: UV-X lines of the Sun are red- or blue-shifted, the microwaves exchanged with the Pioneer 10 and 11 probes are blueshifted (no anomalous acceleration needed), and the far stars are redshifted. Without any "new physics", coherent spectroscopy works as a magic stick to explain many observations.
Dynamic Monte Carlo description of thermal desorption processes
NASA Astrophysics Data System (ADS)
Weinketz, Sieghard
1994-07-01
The applicability of the dynamic Monte Carlo method of Fichthorn and Weinberg, in which the time evolution of a system is described in terms of the absolute number of different microscopic possible events and their associated transition rates, is discussed for the case of thermal desorption simulations. It is shown that the definition of the time increment at each successful event leads naturally to the macroscopic differential equation of desorption, in the case of simple first- and second-order processes in which the only possible events are desorption and diffusion. This equivalence is numerically demonstrated for a second-order case. In the sequel, the equivalence of this method with the Monte Carlo method of Sales and Zgrablich for more complex desorption processes, allowing for lateral interactions between adsorbates, is shown, even though the dynamic Monte Carlo method does not carry their limitation of a rapid surface diffusion condition. It can thus describe a more complex "kinetics" of surface reactive processes, and can therefore be applied to a wider class of phenomena, such as surface catalysis.
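For a simple first-order process, the time increment mentioned above reduces to Δt = -ln(u)/R_total with R_total = N·k, and the simulated coverage reproduces the macroscopic exponential decay. A minimal sketch, with desorption as the only possible event as in the first-order case discussed:

```python
import math
import random

def kmc_first_order_desorption(n0, k, seed=3):
    # Fichthorn-Weinberg dynamic Monte Carlo for first-order desorption:
    # with n adsorbates of rate k each, the total rate is R = n*k and each
    # successful event advances time by dt = -ln(u)/R for u ~ U(0, 1).
    rng = random.Random(seed)
    n, t = n0, 0.0
    traj = [(0.0, n0)]
    while n > 0:
        r_total = n * k
        t += -math.log(rng.random()) / r_total
        n -= 1                        # the only possible event is desorption
        traj.append((t, n))
    return traj
```

The trajectory tracks the macroscopic solution N(t) = N0·exp(-kt), which is the equivalence with the differential equation of desorption that the paper establishes.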
New Quantum Monte Carlo Approach to the Holstein Model
NASA Astrophysics Data System (ADS)
Hohenadler, Martin; Evertz, Hans Gerd; von der Linden, Wolfgang
2004-03-01
Based on the canonical Lang-Firsov transformation of the Hamiltonian, we develop a novel quantum Monte Carlo algorithm for the Holstein model. Separation of the fermionic degrees of freedom by a reweighting of the probability distribution leads to a dramatic reduction in computational effort, and a principal component representation of the phonon degrees of freedom allows us to sample completely uncorrelated phonon configurations. Tests for the case of a single electron reveal that, despite a minus-sign problem resulting from the Lang-Firsov transformation, the combination of these elements enables us to perform efficient simulations for a wide range of temperature, phonon frequency and electron-phonon coupling in one to three dimensions, and on clusters large enough to avoid significant finite-size effects. In contrast to existing world-line methods, which have been used successfully to study the Holstein model with one or two electrons, the present method can be generalized to the many-electron case. The algorithm closely resembles the determinant quantum Monte Carlo method, which is restricted to relatively large phonon frequencies by very strong autocorrelations, while the major advantage of our approach is the uncorrelated sampling of the phonons. This allows us to study the regime of small but finite phonon frequencies, into which transition metal oxides, such as the perovskite manganites, fall. In particular, we focus on the effects of quantum phonons, which have been completely neglected in existing Monte Carlo simulations for these materials.
Improved diffusion coefficients generated from Monte Carlo codes
Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.
2013-07-01
Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first concerns homogenization: whether to weight fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices, with an L2 norm error of 3.6%. This error is reduced significantly to 0.27% when weighting fine-group diffusion coefficients by the flux and applying a correction to the diffusion approximation. Noticeable tilting in reconstructed fluxes and pin powers was reduced when applying these corrections. (authors)
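The two weighting choices discussed above differ in whether the transport cross section or the diffusion coefficient is flux-averaged before forming the few-group diffusion coefficient, and they generally give different answers. A two-group numerical sketch with made-up fluxes and cross sections (illustrative values, not MC21 output):

```python
def flux_weighted(values, flux):
    # Flux-weighted collapse of fine-group data to a single coarse group.
    return sum(v * f for v, f in zip(values, flux)) / sum(flux)

# hypothetical fine-group transport cross sections (1/cm) and group fluxes
sigma_tr = [0.30, 0.80]
flux = [0.7, 0.3]

# option 1: collapse the transport cross section, then D = 1 / (3 * sigma_tr)
d_from_sigma = 1.0 / (3.0 * flux_weighted(sigma_tr, flux))

# option 2: form fine-group D's first, then flux-weight the D's directly
d_fine = [1.0 / (3.0 * s) for s in sigma_tr]
d_from_d = flux_weighted(d_fine, flux)
```

Because 1/x is convex, collapsing the D's directly always gives a value at least as large as inverting the collapsed cross section; which choice better preserves reaction rates and leakage is exactly what the paper assesses.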
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivate the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
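A k-nearest-neighbor classifier of the kind mentioned can be sketched in a few lines; here hypothetical Monte Carlo dispersions are labeled pass/fail and new cases are classified by majority vote. This is illustrative only: the actual tool additionally uses kernel density estimation and sequential feature selection, which are not shown.

```python
import math

def knn_predict(train, labels, query, k=5):
    # Rank training points by Euclidean distance, then majority-vote their
    # labels (1 = failure, 0 = nominal) among the k nearest neighbors.
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], query))[:k]
    votes = sum(labels[i] for i in nearest)
    return 1 if 2 * votes > k else 0

# hypothetical dispersed design parameters on a grid, with failures occurring
# whenever the first parameter exceeds 0.6 (a single dominant parameter)
train = [(i / 20.0, j / 20.0) for i in range(21) for j in range(21)]
labels = [1 if p[0] > 0.6 else 0 for p in train]
```

Querying points on either side of the hypothetical failure boundary recovers the dominant parameter, which is the kind of automated ranking the tool performs on real dispersion data.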
Markov Chain Monte Carlo Data Association for Multiple-Target Tracking
Oh, Songhwai; Russell, Stuart; Sastry, Shankar
Markov chain Monte Carlo data association is applied to data association problems arising in multiple-target tracking in a cluttered environment.
Monte Carlo simulations of TL and OSL in nanodosimetric materials and feldspars
Chen, Reuven
Available online 19 December 2014. Keywords: Thermoluminescence; LM-OSL; Monte Carlo simulations. The possibility of carrying out Monte Carlo simulations for thermoluminescence (TL) and optically stimulated luminescence (OSL) is discussed; Monte Carlo methods for the study of TL were presented in the papers by Mandowski (2001).
NASA Technical Reports Server (NTRS)
Ponomarev, Artem; Cucinotta, F.
2011-01-01
Purpose: To create a generalized mechanistic model of DNA damage in human cells that will generate analytical and image data corresponding to experimentally observed DNA damage foci and will help to improve the experimental foci yields by simulating spatial foci patterns and resolving problems with quantitative image analysis. Material and Methods: The analysis of patterns of RIFs (radiation-induced foci) produced by low- and high-LET (linear energy transfer) radiation was conducted by using a Monte Carlo model that combines the heavy ion track structure with characteristics of the human genome on the level of chromosomes. The foci patterns were also simulated in the maximum projection plane for flat nuclei. Some data analysis was done with the help of image segmentation software that identifies individual classes of RIFs and colocalized RIFs, which is of importance to some experimental assays that assign DNA damage a dual phosphorescent signal. Results: The model predicts the spatial and genomic distributions of DNA DSBs (double strand breaks) and associated RIFs in a human cell nucleus for a particular dose of either low- or high-LET radiation. We used the model to do analyses for different irradiation scenarios. In the beam-parallel-to-the-disk-of-a-flattened-nucleus scenario we found that the foci appeared to be merged due to their high density, while, in the perpendicular-beam scenario, the foci appeared as one bright spot per hit. The statistics and spatial distribution of regions of densely arranged foci, termed DNA foci chains, were predicted numerically using this model. Another analysis was done to evaluate the number of ion hits per nucleus, which were visible from streaks of closely located foci. In another analysis, our image segmentation software determined foci yields directly from images with single-class or colocalized foci.
Conclusions: We showed that DSB clustering needs to be taken into account to determine the true DNA damage foci yield, which helps to determine the DSB yield. Using the model analysis, a researcher can refine the DSB yield per nucleus per particle. We showed that purely geometric artifacts, present in the experimental images, can be analytically resolved with the model, and that the quantization of track hits and DSB yields can be provided to the experimentalists who use enumeration of radiation-induced foci in immunofluorescence experiments using proteins that detect DNA damage. Automated image segmentation software can prove useful for faster and more precise object counting in colocalized-foci images.
A Bayesian Monte Carlo Markov Chain Method for the Analysis of GPS Position Time Series
NASA Astrophysics Data System (ADS)
Olivares, German; Teferle, Norman
2013-04-01
Position time series from continuous GPS are an essential tool in many areas of the geosciences and are, for example, used to quantify long-term movements due to processes such as plate tectonics or glacial isostatic adjustment. It is now widely established that the stochastic properties of the time series do not follow a random behavior and this affects parameter estimates and associated uncertainties. Consequently, a comprehensive knowledge of the stochastic character of the position time series is crucial in order to obtain realistic error bounds and for this a number of methods have already been applied successfully. We present a new Bayesian Monte Carlo Markov Chain (MCMC) method to simultaneously estimate the model and the stochastic parameters of the noise in GPS position time series. This method provides a sample of the likelihood function and thereby, using Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. One advantage of the MCMC method is that the computational time increases linearly with the number of parameters, hence being very suitable for dealing with a high number of parameters. A second advantage is that the properties of the estimator used in this method do not depend on the stationarity of the time series. At least on a theoretical level, no other estimator has been shown to have this feature. Furthermore, the MCMC method provides a means to detect multi-modality of the parameter estimates. We present an evaluation of the new MCMC method through comparison with widely used optimization and empirical methods for the analysis of GPS position time series.
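A stripped-down version of the approach, Metropolis sampling of a linear trend plus a white-noise amplitude from a synthetic position series, can be sketched as follows (white noise only and flat priors, for brevity; the actual method additionally estimates colored-noise parameters):

```python
import math
import random

def log_likelihood(a, b, log_s, ts, ys):
    # Gaussian log-likelihood for y = a + b*t + noise of std exp(log_s).
    s = math.exp(log_s)
    return sum(-0.5 * ((y - (a + b * t)) / s) ** 2 - math.log(s)
               for t, y in zip(ts, ys))

def metropolis(ts, ys, n_iter=8000, step=0.03, seed=7):
    # Random-walk Metropolis over (intercept, rate, log noise-std).
    rng = random.Random(seed)
    cur = [0.0, 0.0, 0.0]
    cur_ll = log_likelihood(*cur, ts, ys)
    samples = []
    for _ in range(n_iter):
        prop = [c + rng.gauss(0.0, step) for c in cur]
        ll = log_likelihood(*prop, ts, ys)
        if math.log(rng.random()) < ll - cur_ll:   # flat priors assumed
            cur, cur_ll = prop, ll
        samples.append(tuple(cur))
    return samples
```

The post-burn-in samples approximate the joint posterior, so rates and noise amplitudes (and their uncertainties) come out of the same run, which is the simultaneous-estimation property highlighted in the abstract.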
Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems
NASA Astrophysics Data System (ADS)
Slattery, Stuart R.
This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine their feasibility on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA, with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique, a group of modern preconditioning strategies is investigated. Compared with conventional Krylov methods, MCSA demonstrated improved iterative performance over GMRES by converging in fewer iterations with the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, FANM showed better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain-decomposed algorithm to parallelize MCSA, aimed at leveraging leadership-class computing facilities, was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm.
It was found that for the current implementation of MCSA, both weak and strong scaling improved on that observed for production implementations of Krylov methods.
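The Monte Carlo kernel inside methods of this family is a random-walk estimate of the solution of a fixed-point problem. The sketch below is an illustrative Neumann-Ulam estimator, not code from this work; the matrix, kill probability, and walk counts are invented. It estimates each component of the solution of x = Hx + b by averaging weighted random walks, the building block an MCSA-style scheme uses to accelerate the underlying fixed-point iteration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Solve x = H x + b (spectral radius of H < 1) by the Neumann-Ulam
# method: each walk accumulates weighted contributions of b along a
# randomly killed path, and the walk average estimates x[i].
H = np.array([[0.1, 0.2, 0.0],
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])
b = np.array([1.0, 2.0, 3.0])
n = len(b)
p_stop = 0.5                       # kill probability per step

def walk_estimate(i):
    total, w, state = b[i], 1.0, i
    while rng.uniform() > p_stop:
        nxt = int(rng.integers(n))               # uniform transition
        w *= H[state, nxt] * n / (1.0 - p_stop)  # likelihood weight
        total += w * b[nxt]
        state = nxt
    return total

n_walks = 20000
x_mc = np.array([np.mean([walk_estimate(i) for _ in range(n_walks)])
                 for i in range(n)])
x_exact = np.linalg.solve(np.eye(n) - H, b)
print(x_mc, x_exact)
```

The estimator is unbiased for any kill probability; the preconditioning discussed in the abstract serves, in this picture, to make the iteration matrix H contractive enough for the walks to converge quickly.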
Properties of reactive oxygen species by quantum Monte Carlo
Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo
2014-07-07
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N{sup 3}–N{sup 4}, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles.
Monte Carlo Simulations of Model Molecular and Polymer Fluids
NASA Astrophysics Data System (ADS)
Hu, Wen-Ching
Three different rigid rod fluids (rigid rods on a cubic lattice, off-lattice spherocylinders with restricted orientational freedom, and off-lattice spherocylinders with total orientational freedom) have been investigated by Monte Carlo simulations. Comparisons have been made between simulation results and theoretical predictions, such as DiMarzio's theory for rigid rods on a cubic lattice and Scaled Particle Theory (SPT) for spherocylinders with restricted orientational freedom. The study concludes that rigid rods on a cubic lattice show an isotropic-nematic transition only when a modified potential (no close contact allowed for a non-parallel pair of rods) is applied. The spherocylinders with both restricted and total orientational freedom exhibit an isotropic-nematic transition. The imposition of restricted orientational freedom reduces the transition density. The SPT shows good agreement with the simulations for spherocylinders with restricted orientational freedom. Simulations of spherocylinders with restricted orientational freedom in a slitlike pore have been performed, and various slit-width-dependent properties have been investigated. Polymer liquid crystals modeled as jointed spherocylinders have also been studied using Monte Carlo simulations. From the simulations for binary mixtures of hard spheres and spherocylinders, it has been concluded that these mixtures show positive constant-pressure excess free energy and negative constant-packing-fraction excess free energy due to size effects. The SPT predictions are consistent with the simulations for binary mixtures of hard spheres. The mixing properties of two binary mixtures, ethylbenzene-anisol and ethylbenzene-(2,6-dimethyl)anisol, have been investigated using molecular mechanics in Monte Carlo simulations. Comparisons between simulations and experiments have also been made. The experimental data indicate a positive heat of mixing for the ethylbenzene-anisol system and a negative one for the ethylbenzene-(2,6-dimethyl)anisol system. The simulations do not predict the heat of mixing very well, owing to errors from several factors, such as system-size effects and nonoptimized force field parameters.
Monte Carlo simulation of light propagation in the adult brain
NASA Astrophysics Data System (ADS)
Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter
2004-06-01
When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) exhibiting a voxel resolution of 0.8 x 0.8 x 0.8 mm3, for three different pairs of T1/T2 values each. The MRI data were used to define the material light absorption and dispersion coefficient for each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was further increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extracerebral contamination are included.
Anisotropic seismic inversion using a multigrid Monte Carlo approach
NASA Astrophysics Data System (ADS)
Mewes, Armin; Kulessa, Bernd; McKinley, John D.; Binley, Andrew M.
2010-10-01
We propose a new approach for the inversion of anisotropic P-wave data based on Monte Carlo methods combined with a multigrid approach. Simulated annealing facilitates objective minimization of the functional characterizing the misfit between observed and predicted traveltimes, as controlled by the Thomsen anisotropy parameters (ε, δ). Cycling between finer and coarser grids enhances the computational efficiency of the inversion process, thus accelerating the convergence of the solution while acting as a regularization technique for the inverse problem. Multigrid perturbation samples the probability density function without requiring the user to adjust tuning parameters. This increases the probability that the preferred global, rather than a poor local, minimum is attained. Undertaking multigrid refinement and Monte Carlo search in parallel produces more robust convergence than does the initially more intuitive approach of completing them sequentially. We demonstrate the usefulness of the new multigrid Monte Carlo (MGMC) scheme by applying it to (a) synthetic, noise-contaminated data reflecting an isotropic subsurface of constant slowness, horizontally layered geologic media and discrete subsurface anomalies; and (b) a crosshole seismic data set acquired by previous authors at the Reskajeage test site in Cornwall, UK. Inverted distributions of slowness (s) and the Thomsen anisotropy parameters (ε, δ) compare favourably with those obtained previously using a popular matrix-based method. Reconstruction of the Thomsen ε parameter is particularly robust compared to that of slowness and the Thomsen δ parameter, even in the face of complex subsurface anomalies. The Thomsen ε and δ parameters have enhanced sensitivities to bulk-fabric and fracture-based anisotropies in the TI medium at Reskajeage. Because reconstruction of slowness (s) is intimately linked to that of ε and δ
in the MGMC scheme, inverted images of phase velocity reflect the integrated effects of these two modes of anisotropy. The new MGMC technique thus promises to facilitate rapid inversion of crosshole P-wave data for seismic slownesses and the Thomsen anisotropy parameters, with minimal user input in the inversion process.
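The coarse-to-fine idea can be illustrated with a toy 1-D inversion. The sketch below is illustrative only: the profile, noise level, and annealing schedule are invented, and the real scheme perturbs grids during traveltime inversion rather than fitting point data. It anneals a model on successively refined grids, prolonging each coarse solution to initialize the next finer level:

```python
import numpy as np

rng = np.random.default_rng(2)

# Recover a 1-D "slowness" profile from noisy observations by
# simulated annealing, cycling from a coarse grid to the fine grid.
# Coarse-grid moves update many fine cells at once, which speeds up
# convergence and regularizes the early search.
x = np.linspace(0.0, 1.0, 64)
true_model = 1.0 + 0.5 * np.sin(2 * np.pi * x)
data = true_model + rng.normal(0.0, 0.05, x.size)

def misfit(model):
    fine = np.repeat(model, 64 // model.size)  # prolong to fine grid
    return np.sum((fine - data) ** 2)

def anneal(model, temp, n_iter, step):
    current = misfit(model)
    for _ in range(n_iter):
        trial = model.copy()
        trial[rng.integers(model.size)] += rng.normal(0.0, step)
        new = misfit(trial)
        if new < current or rng.uniform() < np.exp((current - new) / temp):
            model, current = trial, new
        temp *= 0.997                          # cooling schedule
    return model

model = np.full(8, 1.0)                        # coarsest starting model
for size in (8, 16, 32, 64):                   # multigrid cycle
    model = np.repeat(model, size // model.size)
    model = anneal(model, temp=0.5, n_iter=3000, step=0.1)

rms = np.sqrt(np.mean((model - true_model) ** 2))
print(rms)
```

The prolongation by `np.repeat` is the simplest possible grid-transfer operator; smoother interpolation would serve the same role.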
CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc
Buckley, Lesley A.; Kawrakow, I.; Rogers, D.W.O.
2004-12-01
CSnrc, a new user-code for the EGSnrc Monte Carlo system, is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code CAVRZnrc and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor P{sub cel} in high-energy photon and electron beams. Current dosimetry protocols base the value of P{sub cel} on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on P{sub cel}, much lower than those previously published. The current values of P{sub cel} compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.
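The variance-reduction mechanism behind correlated sampling can be shown with a small numerical analogue. In the sketch below (a generic illustration, not EGSnrc code; the two response functions merely stand in for doses scored in two similar geometries) the same random samples are reused for both estimates, so their fluctuations largely cancel in the ratio:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two slightly different "responses", standing in for the dose scored
# in two similar geometries.
f1 = lambda u: np.exp(-u)
f2 = lambda u: np.exp(-1.02 * u)

n = 2000
ratios_corr, ratios_indep = [], []
for _ in range(200):                 # repeat to measure estimator spread
    u = rng.uniform(size=n)
    ratios_corr.append(f1(u).mean() / f2(u).mean())   # shared histories
    v = rng.uniform(size=n)
    ratios_indep.append(f1(u).mean() / f2(v).mean())  # independent ones

gain = np.var(ratios_indep) / np.var(ratios_corr)
print(gain)                          # efficiency gain from correlation
```

Both estimators target the same ratio; only the spread differs, which is why the abstract reports efficiency gains (here the gain is orders of magnitude because the two responses are nearly proportional).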
Monte Carlo Tools for charged Higgs boson production
K. Kovarik
2014-12-18
In this short review we discuss two implementations of the charged Higgs boson production process in association with a top quark in Monte Carlo event generators at next-to-leading order in QCD. We introduce the MC@NLO and POWHEG methods of matching next-to-leading order matrix elements with parton showers, and compare both methods by analyzing the charged Higgs boson production process in association with a top quark. We briefly discuss the case of a light charged Higgs boson, where associated charged Higgs production interferes with charged Higgs production via t tbar production and subsequent decay of the top quark.
Graphics Processing Unit Accelerated Hirsch-Fye Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Moore, Conrad; Abu Asal, Sameer; Rajagoplan, Kaushik; Poliakoff, David; Caprino, Joseph; Tomko, Karen; Thakur, Bhupender; Yang, Shuxiang; Moreno, Juana; Jarrell, Mark
2012-02-01
In Dynamical Mean Field Theory and its cluster extensions, such as the Dynamic Cluster Algorithm, the bottleneck of the algorithm is solving the self-consistency equations with an impurity solver. Hirsch-Fye Quantum Monte Carlo is one of the most commonly used impurity and cluster solvers. This work implements optimizations of the algorithm, such as enabling large data re-use, suitable for the Graphics Processing Unit (GPU) architecture. The GPU's sheer number of concurrent parallel computations and its large bandwidth to many shared memories exploit the inherent parallelism in the Green function update and measurement routines, and can substantially improve the efficiency of the Hirsch-Fye impurity solver.
Studying the information content of TMDs using Monte Carlo generators
Avakian, H.; Matevosyan, H.; Pasquini, B.; Schweitzer, P.
2015-02-05
Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this topical review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications.
Analysis of real-time networks with monte carlo methods
NASA Astrophysics Data System (ADS)
Mauclair, C.; Durrieu, G.
2013-12-01
Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to make the best use of them and to lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst-case end-to-end delay (WEED) scenarios are rarely observed in practice or through simulations, because the situations that lead to them are rare. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.
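A tiny numerical experiment shows why worst-case delays are rarely observed under asynchrony. The model below is invented for illustration (a packet crossing a few switches, each adding a uniformly distributed wait of up to one cycle plus a fixed service time): Monte Carlo sampling of the offsets produces an empirical delay distribution whose maximum stays well below the analytic worst case.

```python
import numpy as np

rng = np.random.default_rng(7)

# A packet crosses n_hops switches; at each hop it waits a uniformly
# random residual of a 10 ms cycle plus a fixed 1 ms service time.
# The analytic worst case assumes the maximal wait at every hop.
n_hops, cycle, service = 5, 10.0, 1.0
worst_case = n_hops * (cycle + service)          # 55 ms

n_samples = 100_000
waits = rng.uniform(0.0, cycle, size=(n_samples, n_hops))
delays = waits.sum(axis=1) + n_hops * service

print(delays.mean(), delays.max(), worst_case)
```

Even with 100,000 sampled schedules, the largest observed delay sits below the bound, because hitting it requires every hop to be maximally unlucky at once.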
Monte Carlo simulation of a noisy quantum channel with memory
NASA Astrophysics Data System (ADS)
Akhalwaya, Ismail; Moodley, Mervlyn; Petruccione, Francesco
2015-10-01
The classical capacity of quantum channels is well understood for channels with uncorrelated noise. For the case of correlated noise, however, there are still open questions. We calculate the classical capacity of a forgetful channel constructed by Markov switching between two depolarizing channels. Techniques have previously been applied to approximate the output entropy of this channel and thus its capacity. In this paper, we use a Metropolis-Hastings Monte Carlo approach to numerically calculate the entropy. The algorithm is implemented in parallel and its performance is studied and optimized. The effects of memory on the capacity are explored and previous results are confirmed to higher precision.
Monte Carlo Simulations of the two-dimensional dipolar fluid
Caillol, Jean-Michel
2015-01-01
We study a two-dimensional fluid of dipolar hard disks by Monte Carlo simulations in a square with periodic boundary conditions and on the surface of a sphere. The theory of the dielectric constant and the asymptotic behaviour of the equilibrium pair correlation function in the fluid phase are derived for both geometries. After establishing the equivalence of the two methods, we study the stability of the liquid phase in the canonical ensemble. We give evidence of a phase made of living polymers at low temperatures and provide a tentative phase diagram.
MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT
J. S. HENDRICK; G. W. MCKINNEY; L. J. COX
2000-01-01
The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNP{trademark}, which is a general purpose, 3-dimensional, time-dependent, continuous-energy Monte Carlo fully-coupled N-Particle transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.
MCNP{trademark} Monte Carlo: A precis of MCNP
Adams, K.J.
1996-06-01
MCNP{trademark} is a general purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600 page manual which is augmented by numerous Los Alamos technical reports which detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.
Current status of the PSG Monte Carlo neutron transport code
Leppaenen, J.
2006-07-01
PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX fuelled VENUS-2 reactor dosimetry benchmark. (authors)
A Monte Carlo Simulation on Clustering Dynamics of Social Amoebae
Yipeng Yang; Y. Charles Li
2013-05-13
A discrete model for computer simulations of the clustering dynamics of social amoebae is presented. This model incorporates the wavelike propagation of the extracellular signaling molecule cAMP, the sporadic firing of cells at the early stage of aggregation, signal relaying as a response to stimulus, and the inertia and purposeful random walk of cell movement. A Monte Carlo simulation shows the existence of potential equilibria of the mean and variance of the aggregation time. The simulation results of this model reproduce many phenomena observed in actual experiments.
Application of Direct Simulation Monte Carlo to Satellite Contamination Studies
NASA Technical Reports Server (NTRS)
Rault, Didier F. G.; Woronwicz, Michael S.
1995-01-01
A novel method is presented to estimate contaminant levels around spacecraft and satellites of arbitrarily complex geometry. The method uses a three-dimensional direct simulation Monte Carlo algorithm to characterize the contaminant cloud surrounding the space platform, and a computer-assisted design preprocessor to define the space-platform geometry. The method is applied to the Upper Atmosphere Research Satellite to estimate the contaminant flux incident on the optics of the halogen occultation experiment (HALOE) telescope. Results are presented in terms of contaminant cloud structure, molecular velocity distribution at HALOE aperture, and code performance.
Monte Carlo stability analysis of strained layer superlattice interfaces
Dodson, B.W.
1985-01-01
We have developed a procedure, based on conventional Monte Carlo methods, to investigate the limits of stability of a strained layer superlattice (SLS) system as a function of lattice mismatch and layer thickness. The method is demonstrated by the analysis of two-dimensional Lennard-Jones SLS systems, for which the regime of absolute SLS stability is mapped out. Extension of the technique to three-dimensional silicon-like model systems is discussed, and appropriate model potentials for stability analysis of the Si/SiGe system are introduced.
Element Agglomeration Algebraic Multilevel Monte-Carlo Library
2015-02-19
ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables Multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.
Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
NASA Astrophysics Data System (ADS)
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
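A minimal version of such a lattice charge-transport simulation is a kinetic Monte Carlo walk with thermally activated hopping rates. The sketch below is illustrative only, not the authors' coarse-grained model: it uses a 1-D lattice with Gaussian energetic disorder and Miller-Abrahams-type rates in reduced units, and estimates a drift velocity from which a mobility would follow:

```python
import numpy as np

rng = np.random.default_rng(6)

# Carrier hopping on a disordered 1-D lattice under an applied field.
# Rates follow a Miller-Abrahams form: uphill hops are thermally
# suppressed, downhill hops occur at the attempt frequency (kT = 1).
n_sites = 200
energies = rng.normal(0.0, 1.0, n_sites)   # Gaussian site disorder
field = 0.5                                # energy drop per lattice step

def rate(i, j, direction):
    dE = energies[j] - energies[i] - field * direction
    return np.exp(-max(dE, 0.0))

pos, t = 0, 0.0
for _ in range(5000):                      # kinetic Monte Carlo loop
    i = pos % n_sites
    r_right = rate(i, (i + 1) % n_sites, +1)
    r_left = rate(i, (i - 1) % n_sites, -1)
    total = r_right + r_left
    t += rng.exponential(1.0 / total)      # residence time at site i
    pos += 1 if rng.uniform() < r_right / total else -1

velocity = pos / t                         # drift velocity (sites/time)
print(velocity)
```

Morphology enters such models through the site-energy landscape: ordered regions would get narrower energy distributions than disordered ones, changing the computed mobility.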
Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo
Booth, T.E.
1998-06-22
It is well known that a Monte Carlo estimate can be obtained with zero variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek an ever more exact importance function. This paper describes a method that obtains ever more exact importance functions and empirically produces an error that drops exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is provided by using the known form of the solution, usually referred to as the Case eigenfunction solution.
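The zero-variance statement in the first sentence has a one-dimensional illustration. In the sketch below (a generic textbook example, unrelated to the transport-equation construction in the paper) the importance density is taken proportional to the integrand, and every sample weight then equals the exact integral:

```python
import numpy as np

rng = np.random.default_rng(4)

# Estimate I = integral of f(x) = 3x^2 over [0, 1] (exactly 1).
f = lambda x: 3.0 * x**2

# Crude Monte Carlo: uniform samples give a noisy estimate.
x = rng.uniform(size=10000)
crude = f(x)                        # sample values vary -> variance > 0

# Zero-variance importance sampling: draw x from p(x) = f(x)/I = 3x^2
# (inverse CDF: x = u^(1/3)); each weight f(x)/p(x) is then exactly I.
u = rng.uniform(size=10000)
xz = u ** (1.0 / 3.0)
zero_var = f(xz) / (3.0 * xz**2)    # identically 1.0

print(crude.mean(), crude.var(), zero_var.var())
```

Of course, sampling from f/I requires knowing I, which is the quantity sought; this is why practical schemes, like the one in the paper, can only iterate toward the exact importance function.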
Communication: Water on hexagonal boron nitride from diffusion Monte Carlo
Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos; Alfè, Dario; Lilienfeld, O. Anatole von
2015-05-14
Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.
Directed-loop Monte Carlo simulations of vertex models.
Syljuåsen, Olav F; Zvonarev, M B
2004-01-01
We show how the directed-loop Monte Carlo algorithm can be applied to study vertex models. The algorithm is employed to calculate the arrow polarization in the six-vertex model with the domain wall boundary conditions. The model exhibits spatially separated ordered and "disordered" regions. We show how the boundary between these regions depends on parameters of the model. We give some predictions on the behavior of the polarization in the thermodynamic limit and discuss the relation to the Arctic Circle theorem.
Quantitative Monte Carlo-based holmium-166 SPECT reconstruction
Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de; Viergever, Max A.
2013-11-15
Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 ({sup 166}Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative {sup 166}Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum.Methods: A fast Monte Carlo (MC) simulator was developed for simulation of {sup 166}Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full {sup 166}Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A{sup est}) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six {sup 166}Ho RE patients.Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC).
ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80% (SPECT-ppMC+DSW) to 76%–103% (SPECT-fMC). Furthermore, SPECT-fMC recovered whole-body activities were most accurate (A{sup est}= 1.06 × A − 5.90 MBq, R{sup 2}= 0.97) and SPECT-fMC tumor absorbed doses were significantly higher than with SPECT-DSW (p = 0.031) and SPECT-ppMC+DSW (p = 0.031).Conclusions: The quantitative accuracy of {sup 166}Ho SPECT is improved by Monte Carlo-based modeling of the image degrading factors. Consequently, the proposed reconstruction method enables accurate estimation of the radiation absorbed dose in clinical practice.
Computed radiography simulation using the Monte Carlo code MCNPX.
Correa, S C A; Souza, E M; Silva, A X; Cassiano, D H; Lopes, R T
2010-09-01
Simulating X-ray images has been of great interest in recent years as it makes possible an analysis of how X-ray images are affected owing to relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector as well as the characteristic noise of a 16-bit computed radiography system were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it allows obtaining results comparable with experimental data.
3D Monte Carlo radiation transfer modelling of photodynamic therapy
NASA Astrophysics Data System (ADS)
Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry
2015-06-01
The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented where both daylight activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further supports the use of daylight as a potential light source for PDT, where effective treatment depths of about 2 mm can be achieved.
Neutronic calculations for CANDU thorium systems using Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Saldideh, M.; Shayesteh, M.; Eshghi, M.
2014-08-01
In this paper, we have investigated the prospects of exploiting the rich world thorium reserves using Canada Deuterium Uranium (CANDU) reactors. The analysis is performed using the Monte Carlo MCNP code in order to determine how long the reactor remains in a critical condition. Four different fuel compositions have been selected for analysis. We have obtained the infinite multiplication factor, k{sub ∞}, under full power operation of the reactor over 8 years. The neutronic flux distribution in the full core reactor has also been investigated.
Morphological evolution of growing crystals - A Monte Carlo simulation
NASA Technical Reports Server (NTRS)
Xiao, Rong-Fu; Alexander, J. Iwan D.; Rosenberger, Franz
1988-01-01
The combined effects of nutrient diffusion and surface kinetics on the crystal morphology were investigated using a Monte Carlo model to simulate the evolving morphology of a crystal growing from a two-component gaseous nutrient phase. The model combines nutrient diffusion, based on a modified diffusion-limited aggregation process, with anisotropic surface-attachment kinetics and surface diffusion. A variety of conditions, ranging from kinetic-controlled to diffusion-controlled growth, were examined. Successive transitions from compact faceted (dominant surface kinetics) to open dendritic morphologies (dominant volume diffusion) were obtained.
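The diffusion-controlled limit of such a growth model is classic diffusion-limited aggregation, which fits in a few lines. The sketch below is a minimal 2-D lattice DLA with a single seed (lattice size and particle count are arbitrary), omitting the surface-kinetics and surface-diffusion terms that produce the compact faceted regime; it grows only the open, dendritic morphology:

```python
import numpy as np

rng = np.random.default_rng(5)

# Diffusion-limited aggregation: walkers released at the lattice edge
# perform random walks and stick when they touch the growing cluster.
L = 61
grid = np.zeros((L, L), dtype=bool)
grid[L // 2, L // 2] = True                 # seed crystal
moves = ((1, 0), (-1, 0), (0, 1), (0, -1))

def touches_cluster(i, j):
    return any(grid[(i + di) % L, (j + dj) % L] for di, dj in moves)

n_particles = 100
for _ in range(n_particles):
    i, j = int(rng.integers(L)), 0          # launch from the edge
    while True:
        di, dj = moves[rng.integers(4)]
        i, j = (i + di) % L, (j + dj) % L   # periodic random walk
        if touches_cluster(i, j):
            grid[i, j] = True               # stick on first contact
            break

print(grid.sum())                           # seed + all particles
```

Adding a sticking probability below one, or letting attached particles hop along the surface, is the standard route from this dendritic limit toward the kinetics-dominated, faceted morphologies discussed above.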
Application of Monte Carlo methods in tomotherapy and radiation biophysics
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this dissertation, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University).
Single- and double-strand break yields predicted by the proposed methodology are in good agreement (within 1%) with the results of published experimental and theoretical studies for 60Co gamma-rays and low-energy x-rays. The reported studies provide new information about the potential biological consequences of diagnostic x-rays and selected gamma-emitting radioisotopes used in brachytherapy for the treatment of cancer. The proposed methodology is computationally efficient and may also be useful in proton therapy, space applications or internal dosimetry.
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.
Monte Carlo calculations of few-body and light nuclei
Wiringa, R.B.
1992-02-01
A major goal in nuclear physics is to understand how nuclear structure comes about from the underlying interactions between nucleons. This requires modelling nuclei as collections of strongly interacting particles. Using realistic nucleon-nucleon potentials, supplemented with consistent three-nucleon potentials and two-body electroweak current operators, variational Monte Carlo methods are used to calculate nuclear ground-state properties, such as the binding energy, electromagnetic form factors, and momentum distributions. Other properties such as excited states and low-energy reactions are also calculable with these methods.
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.
1997-05-01
AVATAR (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine, is a superset of MCNP that automatically invokes THREEDANT for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
Neutron streaming Monte Carlo radiation transport code MORSE-CG
Halley, A.M.; Miller, W.H.
1986-11-01
Calculations have been performed using the Monte Carlo code MORSE-CG to determine the neutron streaming through various straight and stepped gaps between radiation shield sectors in the conceptual tokamak fusion power plant design STARFIRE. This design calls for "pie-shaped" radiation shields with gaps between segments. It is apparent that some type of offset, or stepped-gap, configuration will be necessary to reduce neutron streaming through these gaps. To evaluate this streaming problem, a MORSE-to-MORSE coupling technique was used, consisting of two separate transport calculations, which together defined the entire transport problem. The results quantify the effectiveness of various gap configurations in reducing radiation streaming.
Electron scattering in helium for Monte Carlo simulations
Khrabrov, Alexander V.; Kaganovich, Igor D.
2012-09-15
An analytical approximation for differential cross-section of electron scattering on helium atoms is introduced. It is intended for Monte Carlo simulations, which, instead of angular distributions based on experimental data (or on first-principle calculations), usually rely on approximations that are accurate yet numerically efficient. The approximation is based on the screened-Coulomb differential cross-section with energy-dependent screening. For helium, a two-pole approximation of the screening parameter is found to be highly accurate over a wide range of energies.
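A screened-Coulomb (screened Rutherford) angular distribution can be sampled by direct inversion of its cumulative distribution. The sketch below uses the standard one-line inversion formula for d(sigma)/d(Omega) proportional to 1/(1 - mu + 2*eta)^2; the paper's two-pole, energy-dependent screening parameter is replaced here by a fixed illustrative value of eta.

```python
import random

def sample_mu(eta, rng):
    """Sample the scattering-angle cosine mu from a screened-Rutherford
    cross section ~ 1/(1 - mu + 2*eta)^2 by CDF inversion.
    eta is the (dimensionless) screening parameter."""
    xi = rng.random()
    return 1.0 - 2.0 * eta * xi / (1.0 - xi + eta)

rng = random.Random(0)
mus = [sample_mu(0.05, rng) for _ in range(10000)]
# Small eta gives strongly forward-peaked scattering (mean cosine near 1).
print(sum(mus) / len(mus))
```

For eta = 0.05 the analytic mean cosine is about 0.78, so the sample mean should land close to that value.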
Continuous-Estimator Representation for Monte Carlo Criticality Diagnostics
Kiedrowski, Brian C.; Brown, Forrest B.
2012-06-18
An alternate means of computing diagnostics for Monte Carlo criticality calculations is proposed. Overlapping spherical regions or estimators are placed covering the fissile material with a minimum center-to-center separation of the 'fission distance', which is defined herein, and a radius that is some multiple thereof. Fission neutron production is recorded based upon a weighted average of proximities to the centers of all the spherical estimators. These scores are used to compute the Shannon entropy, which is shown to reproduce, to within an additive constant, the value determined from a well-placed user-defined mesh. The spherical estimators are also used to assess statistical coverage.
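The Shannon-entropy score itself is straightforward once the fission-production tallies are binned; a minimal helper of our own (not the authors' implementation) looks like this:

```python
import math

def shannon_entropy(scores):
    """Shannon entropy (in bits) of a binned fission-source distribution.
    `scores` are non-negative tallies, e.g. fission neutron production
    accumulated in each spherical estimator or mesh cell."""
    total = sum(scores)
    ps = [s / total for s in scores if s > 0]
    return -sum(p * math.log2(p) for p in ps)

# A uniform source over 8 bins has entropy log2(8) = 3 bits;
# a source collapsed into one bin has entropy 0.
print(shannon_entropy([1] * 8))   # 3.0
```

Convergence of this quantity over successive cycles is the usual diagnostic for fission-source stationarity.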
Active neutron multiplicity analysis and Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Krick, M. S.; Ensslin, N.; Langner, D. G.; Miller, M. C.; Siebelist, R.; Stewart, J. E.; Ceo, R. N.; May, P. K.; Collins, L. L., Jr.
Active neutron multiplicity measurements of high-enrichment uranium metal and oxide samples have been made at Los Alamos and Y-12. The data from the measurements of standards at Los Alamos were analyzed to obtain values for neutron multiplication and source-sample coupling. These results are compared to equivalent results obtained from Monte Carlo calculations. An approximate relationship between coupling and multiplication is derived and used to correct doubles rates for multiplication and coupling. The utility of singles counting for uranium samples is also examined.
Parallel domain decomposition methods in fluid models with Monte Carlo transport
Alme, H.J.; Rodrigues, G.H.; Zimmerman, G.B.
1996-12-01
To examine a domain-decomposed, coupled Monte Carlo-finite element calculation, it is important to use a domain decomposition that is suitable for the individual models. We have developed a code that simulates a Monte Carlo calculation on a massively parallel processor. This code is used to examine the load balancing behavior of three domain decompositions for a Monte Carlo calculation. Results are presented.
Automated Monte Carlo biasing for photon-generated electrons near surfaces.
Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick
2009-09-01
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
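Once the window bounds are known (e.g. from an adjoint-flux calculation), weight-window biasing reduces to a simple per-particle rule. The sketch below shows the generic split/roulette logic; the bounds and survival weight are illustrative choices of ours, not the report's settings.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Weight-window check for one particle (minimal sketch):
    split particles above the window, roulette those below it.
    Returns the list of surviving particle weights."""
    if weight > w_high:                       # split into m copies
        m = int(weight / w_high) + 1
        return [weight / m] * m               # total weight is conserved
    if weight < w_low:                        # Russian roulette
        w_survive = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_survive:
            return [w_survive]                # survives with boosted weight
        return []                             # killed; weight conserved on average
    return [weight]                           # inside the window: unchanged

rng = random.Random(3)
print(apply_weight_window(5.0, 0.5, 2.0, rng))   # three copies of weight 5/3
```

The roulette branch preserves weight only in expectation, which is what keeps the biased game fair.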
Direct Monte Carlo simulation of chemical reaction systems: Dissociation and recombination
Anderson, James B.
Direct Monte Carlo simulations of a chemical reaction system with bimolecular and termolecular dissociation and recombination are found to be well suited for treating chemical reaction systems with nonequilibrium distributions and coupled gas-phase kinetics.
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
Arampatzis, Giorgos; Katsoulakis, Markos A.; Plechac, Petr; Taufer, Michela; Xu, Lifan
2012-10-01
We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physicochemical processes with complex chemistry and transport micro-mechanisms. Rather than constructing the stochastic trajectories exactly, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel fractional-step kinetic Monte Carlo algorithms by employing the Trotter theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional-step time window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communication schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems, and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs.
Finally, we discuss work load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
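The fractional-step idea can be illustrated on a toy lattice: run exact (Gillespie) KMC restricted to each sub-domain for a fixed time window, then alternate sub-domains, which is one Trotter step. In this sketch the spins do not interact, so the splitting happens to be exact; in the paper's setting it is the interactions across sub-domain boundaries that make the splitting approximate. All names and rates below are illustrative.

```python
import random

def kmc_window(state, sites, rate, tau, rng):
    """Exact (Gillespie) KMC restricted to `sites` for a time window tau.
    Events are independent spin flips, each with constant rate `rate`."""
    t = 0.0
    while True:
        total = rate * len(sites)
        t += rng.expovariate(total)           # waiting time to the next event
        if t > tau:
            return
        i = rng.choice(sites)                 # all rates equal: pick uniformly
        state[i] ^= 1                         # flip the spin

def fractional_step_kmc(n=16, tau=0.1, windows=50, rate=1.0, seed=1):
    """Trotter-style splitting: alternate exact KMC on the two lattice
    halves; in a parallel setting each half runs on its own processor."""
    rng = random.Random(seed)
    state = [0] * n
    left, right = list(range(n // 2)), list(range(n // 2, n))
    for _ in range(windows):
        kmc_window(state, left, rate, tau, rng)    # fractional step 1
        kmc_window(state, right, rate, tau, rng)   # fractional step 2
    return state

print(fractional_step_kmc())
```

Randomized variants permute the order of the fractional steps in each window to remove the systematic bias of a fixed sweep order.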
A Monte Carlo Dispersion Analysis of the X-33 Simulation Software
NASA Technical Reports Server (NTRS)
Williams, Peggy S.
2001-01-01
A Monte Carlo dispersion analysis has been completed on the X-33 software simulation. The simulation is based on a preliminary version of the software and is primarily used to define and refine how a Monte Carlo dispersion analysis would be done on the final flight-ready version of the software. This report gives an overview of the processes used in the implementation of the dispersions and describes the methods used to accomplish the Monte Carlo analysis. Selected results from 1000 Monte Carlo runs are presented with suggestions for improvements in future work.
Markov chain Monte Carlo based Approaches for Inverse Problems
NASA Astrophysics Data System (ADS)
Chen, J.; Hoverten, M.; Vasco, D.; Hou, Z.; Rubin, Y.
2005-12-01
Inverse modeling of heterogeneous subsurface systems is difficult. One of the main challenges is the lack of effective and flexible inversion methods that can handle complex practical situations, which may be characterized by non-Gaussian likelihood functions and prior distributions, multiple local optimal solutions, as well as nonlinearity and non-uniqueness of the relationships between parameters and measurements. This study presents a Markov chain Monte Carlo (MCMC) based approach for inverting complex data sets. This approach includes three major steps: (1) Build a stochastic model within the Bayesian framework; (2) Generate many samples from the joint posterior distribution using MCMC methods; (3) Make inferences about unknown parameters from the generated samples. The use of MCMC methods makes our approach very flexible for solving complex inversion problems. First, we can use virtually any type of likelihood function and prior distribution in the Bayesian model. This allows us to build inversion models primarily based on complex practical situations. Second, MCMC methods are well suited to parallel computing. This allows us to incorporate computationally intensive forward simulation models into the inversion procedures and allows us to avoid being trapped in multiple local modes of the joint posterior distribution. Finally, MCMC methods generate many samples of unknown parameters. This allows for quantification of uncertainty in estimation of each unknown parameter. To demonstrate our approach, we applied it to geophysical seismic and electromagnetic (EM) data for estimating porosity and natural gas saturation in a deepwater gas reservoir. Conventional techniques (such as seismic methods) for gas exploration often suffer from a large degree of uncertainty because seismic properties of a medium are not sensitive to gas saturation in the medium. In contrast, electrical properties of a medium are very sensitive to gas saturation.
Therefore, EM techniques have the potential of providing information for reducing the uncertainty. We explore in this study the combined use of seismic and EM data using MCMC methods based on layered reservoir models. We consider gas saturation and porosity in each layer of the reservoir, seismic velocities and density in the layers below and above the reservoir, and electrical conductivity in the overburden as random variables. We consider pre-stack seismic amplitude versus offset (AVO) measurements in a given time window and the amplitudes and phases of the recorded electrical field as data. Using Bayes' theorem, we get the joint posterior distribution function of all the unknowns. Using MCMC sampling methods, we obtain many samples for each of the unknowns. We demonstrate the efficiency of the developed model for joint inversion of seismic AVO and EM data, and the benefits of incorporating EM data into gas saturation estimation, using two case studies, one synthetic and one from the field. Results show that the incorporation of EM data reduces the uncertainty in the estimation of both gas saturation and porosity.
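Step (2) of such an approach, sampling the posterior with MCMC, reduces in its simplest form to random-walk Metropolis. The 1D toy sketch below is ours, not the authors' geophysical implementation; the Gaussian "posterior" stands in for a real likelihood-times-prior.

```python
import math
import random

def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1D posterior (minimal sketch)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)          # symmetric proposal
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy "inversion": Gaussian likelihood around an observed value of 2.0
# with unit noise and a flat prior; the posterior mean should be near 2.0.
log_post = lambda x: -0.5 * (x - 2.0) ** 2
chain = metropolis(log_post, x0=0.0)
print(sum(chain) / len(chain))
```

The spread of the retained samples directly quantifies the parameter uncertainty, which is the point made in the abstract.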
Digitally reconstructed radiograph generation by an adaptive Monte Carlo method
NASA Astrophysics Data System (ADS)
Li, Xiaoliang; Yang, Jie; Zhu, Yuemin
2006-06-01
Digitally reconstructed radiograph (DRR) generation is an important step in several medical imaging applications such as 2D-3D image registration, where the generation of DRRs is a rate-limiting step. We present a novel DRR generation technique, called the adaptive Monte Carlo volume rendering (AMCVR) algorithm. It is based on the conventional Monte Carlo volume rendering (MCVR) technique, which is very efficient for rendering large medical datasets. In contrast to the MCVR, the AMCVR does not produce sample points by sampling directly in the entire volume domain. Instead, it adaptively divides the entire volume domain into sub-domains using importance separation and then performs sampling in these sub-domains. As a result, the AMCVR produces almost the same image quality as that obtained with the MCVR while using only half the samples, and increases projection speed by a factor of 2. Moreover, the AMCVR is suitable for fast memory addressing, which further improves processing speed. Independent of the size of medical datasets, the AMCVR allows for achieving a frame rate of about 15 Hz on a 2.8 GHz Pentium 4 PC while generating DRRs of reasonably good quality.
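The "importance separation" idea of sampling within sub-domains rather than the whole domain can be illustrated with ordinary stratified Monte Carlo integration; this generic sketch is ours and is not the AMCVR renderer.

```python
import random

def plain_mc(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1)."""
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified_mc(f, n, strata, rng):
    """Split [0, 1) into equal-width sub-domains and sample each
    separately; the estimate averages the per-stratum averages."""
    per = n // strata
    est = 0.0
    for k in range(strata):
        lo, w = k / strata, 1.0 / strata
        est += sum(f(lo + w * rng.random()) for _ in range(per)) / per
    return est / strata

rng = random.Random(0)
f = lambda x: x * x                 # exact integral over [0, 1] is 1/3
print(plain_mc(f, 4000, rng), stratified_mc(f, 4000, 8, rng))
```

For the same sample budget the stratified estimate has lower variance, which is why adaptive sub-domain sampling can halve the sample count at equal image quality.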
Monte Carlo Simulation of Sudden Death Bearing Testing
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed independent of the test method used and not the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
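The sudden-death scheme is easy to emulate with virtual bearings: draw Weibull lives, split them into groups, and record only each group's first failure. The L10 life, Weibull slope, and group sizes below are illustrative values of ours, not the paper's.

```python
import math
import random

def weibull_life(rng, l10=100.0, slope=1.5):
    """Draw a random bearing life from a Weibull distribution specified
    by its L10 life (10% failure probability) and Weibull slope."""
    eta = l10 / (-math.log(0.9)) ** (1.0 / slope)   # characteristic life
    return rng.weibullvariate(eta, slope)

def sudden_death_test(rng, groups=6, per_group=6):
    """Each group runs until its first failure; only those `groups`
    first-failure times are recorded (the sudden-death scheme)."""
    return [min(weibull_life(rng) for _ in range(per_group))
            for _ in range(groups)]

rng = random.Random(42)
print(sorted(sudden_death_test(rng)))
```

Repeating this over many virtual populations and fitting a Weibull line to the recorded first failures reproduces the kind of life and slope scatter the study quantifies.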
Improved criticality convergence via a modified Monte Carlo iteration method
Booth, Thomas E; Gubernatis, James E
2009-01-01
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k3|/|k1| instead of |k2|/|k1|. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must sum somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc manner.
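The transport setting cannot be reproduced here, but the deterministic analogue of working with two eigenfunctions is ordinary power iteration plus deflation: find the dominant mode, subtract its rank-one projector, and iterate again to reach the second mode. This is a generic linear-algebra sketch of ours, not the authors' Monte Carlo scheme.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_iteration(A, x, n_iter=200):
    """Power iteration with max-norm scaling; for a positive dominant
    eigenvalue, returns (eigenvalue, eigenvector)."""
    for _ in range(n_iter):
        x = matvec(A, x)
        nrm = max(abs(v) for v in x)
        x = [v / nrm for v in x]
    return nrm, x

A = [[4.0, 1.0], [1.0, 3.0]]          # symmetric toy operator
k1, v1 = power_iteration(A, [1.0, 1.0])
s = sum(v * v for v in v1)
# Deflated operator B = A - k1 * v1 v1^T / (v1 . v1): its spectrum is
# {0, k2}, so power iteration on B converges to the second mode.
B = [[A[i][j] - k1 * v1[i] * v1[j] / s for j in range(2)] for i in range(2)]
k2, v2 = power_iteration(B, [1.0, -1.0])
print(round(k1, 4), round(abs(k2), 4))   # (7 ± sqrt(5))/2: 4.618 and 2.382
```

In the Monte Carlo version the second eigenfunction is represented by particles of mixed sign, which is what creates the cancellation difficulty the abstract describes.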
Monte Carlo simulation of quantum Zeno effect in the brain
NASA Astrophysics Data System (ADS)
Georgiev, Danko
2015-12-01
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize the quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of the quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
BEAM: a Monte Carlo code to simulate radiotherapy treatment units.
Rogers, D W; Faddegon, B A; Ding, G X; Ma, C M; We, J; Mackie, T R
1995-05-01
This paper describes BEAM, a general purpose Monte Carlo code to simulate the radiation beams from radiotherapy units including high-energy electron and photon beams, 60Co beams and orthovoltage units. The code handles a variety of elementary geometric entities which the user puts together as needed (jaws, applicators, stacked cones, mirrors, etc.), thus allowing simulation of a wide variety of accelerators. The code is not restricted to cylindrical symmetry. It incorporates a variety of powerful variance reduction techniques such as range rejection, bremsstrahlung splitting and forcing photon interactions. The code allows direct calculation of charge in the monitor ion chamber. It has the capability of keeping track of each particle's history and using this information to score separate dose components (e.g., to determine the dose from electrons scattering off the applicator). The paper presents a variety of calculated results to demonstrate the code's capabilities. The calculated dose distributions in a water phantom irradiated by electron beams from the NRC 35 MeV research accelerator, a Varian Clinac 2100C, a Philips SL75-20, an AECL Therac 20 and a Scanditronix MM50 are all shown to be in good agreement with measurements at the 2 to 3% level. Eighteen electron spectra from four different commercial accelerators are presented and various aspects of the electron beams from a Clinac 2100C are discussed. Timing requirements and selection of parameters for the Monte Carlo calculations are discussed. PMID:7643786
A Monte Carlo methodology for modelling ashfall hazards
NASA Astrophysics Data System (ADS)
Hurst, Tony; Smith, Warwick
2004-12-01
We have developed a methodology for quantifying the probability of particular thicknesses of tephra at any given site, using Monte Carlo methods. This is a part of the development of a probabilistic volcanic hazard model (PVHM) for New Zealand, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo procedure allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. This method can handle the effects of multiple volcanic sources, each source with its own characteristics. We accumulate the tephra thicknesses from all sources to estimate the combined ashfall hazard, expressed as the frequency with which any given depth of tephra is likely to be deposited at selected sites. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from sediment cores in Auckland give useful bounds for the likely total volumes erupted from Egmont Volcano (Mt. Taranaki), 280 km away, during the last 130,000 years.
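The repeat-eruption procedure amounts to accumulating simulated thicknesses from random draws and converting exceedance counts to probabilities. The thickness model below is a deliberately crude stand-in for an ASHFALL run at a fixed site; all distributions and magnitudes are invented for illustration.

```python
import random

def simulate_thickness(rng):
    """One synthetic eruption: hypothetical volume and wind factors."""
    volume = rng.lognormvariate(0.0, 1.0)   # eruptive volume (arb. units)
    wind = rng.uniform(0.1, 1.0)            # fraction carried toward the site
    return 0.5 * volume * wind              # toy ash thickness (cm)

def exceedance_prob(threshold_cm, n=20000, seed=0):
    """Probability that a random eruption deposits at least threshold_cm."""
    rng = random.Random(seed)
    hits = sum(simulate_thickness(rng) >= threshold_cm for _ in range(n))
    return hits / n

print(exceedance_prob(1.0))
```

Multiplying such per-eruption exceedance probabilities by eruption frequencies, and summing over sources, yields the annual probabilities and mean return periods quoted in the abstract.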
Ultracold atoms at unitarity within quantum Monte Carlo methods
Morris, Andrew J.; Lopez Rios, P.; Needs, R. J.
2010-03-15
Variational and diffusion quantum Monte Carlo (VMC and DMC) calculations of the properties of the zero-temperature fermionic gas at unitarity are reported. Our study differs from earlier ones mainly in that we have constructed more accurate trial wave functions and used a larger system size, we have studied the dependence of the energy on the particle density and well width, and we have achieved much smaller statistical error bars. The correct value of the universal ratio of the energy of the interacting to that of the noninteracting gas, ξ, is still a matter of debate. We find DMC values of ξ of 0.4244(1) with 66 particles and 0.4339(1) with 128 particles. The spherically averaged pair-correlation functions, momentum densities, and one-body density matrices are very similar in VMC and DMC, which suggests that our results for these quantities are very accurate. We find, however, some differences between the VMC and DMC results for the two-body density matrices and condensate fractions, which indicates that these quantities are more sensitive to the quality of the trial wave function. Our best estimate of the condensate fraction of 0.51 is smaller than the values from earlier quantum Monte Carlo calculations.
Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Dichmann, Donald J.; Alberding, Cassandra M.; Yu, Wayne H.
2014-01-01
The James Webb Space Telescope (JWST) is scheduled to launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 10.5 years after a six-month transfer to the mission orbit. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
James Webb Space Telescope (JWST) Stationkeeping Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Dichmann, Donald J.; Alberding, Cassandra; Yu, Wayne
2014-01-01
The James Webb Space Telescope (JWST) will launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 11 years. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
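A Monte Carlo stationkeeping budget of this kind boils down to summing randomized per-maneuver delta-Vs over the mission and reading off a high percentile across runs. The per-maneuver distribution below is invented for illustration; the magnitudes are not JWST values.

```python
import random

def sk_budget(n_runs=2000, maneuvers=40, seed=7):
    """Toy Monte Carlo delta-V budget: per-maneuver delta-V varies with
    unmodeled SRP/attitude uncertainty (all magnitudes hypothetical)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        dv = sum(abs(rng.gauss(0.12, 0.04)) for _ in range(maneuvers))  # m/s
        totals.append(dv)
    totals.sort()
    return totals[int(0.99 * n_runs)]       # 99th-percentile total delta-V

print(round(sk_budget(), 2))
```

In a real analysis each draw would come from the maneuver optimizer run against a randomly drawn observation schedule, rather than from a fixed distribution.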
Quantum Monte Carlo finite temperature electronic structure of quantum dots
NASA Astrophysics Data System (ADS)
Leino, Markku; Rantala, Tapio T.
2002-08-01
Quantum Monte Carlo methods allow a straightforward procedure for evaluation of electronic structures with a proper treatment of electronic correlations. This can be done even at finite temperatures [1]. We test the Path Integral Monte Carlo (PIMC) simulation method [2] for one and two electrons in one and three dimensional harmonic oscillator potentials and apply it in evaluation of finite temperature effects of single and coupled quantum dots. Our simulations show the correct finite temperature excited state populations including degeneracy in cases of one and three dimensional harmonic oscillators. The simulated one and two electron distributions of a single and coupled quantum dots are compared to those from experiments and other theoretical (0 K) methods [3]. Distributions are shown to agree and the finite temperature effects are discussed. Computational capacity is found to become the limiting factor in simulations with increasing accuracy. Other essential aspects of PIMC and its capability in this type of calculations are also discussed. [1] R.P. Feynman: Statistical Mechanics, Addison Wesley, 1972. [2] D.M. Ceperley, Rev.Mod.Phys. 67, 279 (1995). [3] M. Pi, A. Emperador and M. Barranco, Phys.Rev.B 63, 115316 (2001).
Monte Carlo track structure for radiation biology and space applications
NASA Technical Reports Server (NTRS)
Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.
2001-01-01
Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped to shed light on many aspects of microdosimetry and the mechanisms of damage by ionising radiation in the cell. These codes have continuously been modified to include new, improved cross sections and computational techniques. This paper provides a summary of input data for ionizations, excitations and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations for electrons and ions in the form of parametric equations, which makes it easy to reproduce the data. Stopping power and radial distribution of dose are presented for ions and compared with experimental data. A model is described for simulation of full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha-particles. Distributions are presented for the wall and wall-less counters. The data show the contribution of indirect effects to the lineal energy distribution for the wall counter responses even at such a low ion energy.
Fourier Monte Carlo renormalization-group approach to crystalline membranes.
Tröster, A
2015-02-01
The computation of the critical exponent η characterizing the universal elastic behavior of crystalline membranes in the flat phase continues to represent challenges to theorists as well as computer simulators that manifest themselves in a considerable spread of numerical results for η published in the literature. We present additional insight into this problem that results from combining Wilson's momentum shell renormalization-group method with the power of modern computer simulations based on the Fourier Monte Carlo algorithm. After discussing the ideas and difficulties underlying this combined scheme, we present a calculation of the renormalization-group flow of the effective two-dimensional Young modulus for momentum shells of different thickness. Extrapolation to infinite shell thickness allows us to produce results in reasonable agreement with those obtained by functional renormalization group or by Fourier Monte Carlo simulations in combination with finite-size scaling. Moreover, our method allows us to obtain a decent estimate for the value of the Wegner exponent ω that determines the leading correction to scaling, which in turn allows us to refine our numerical estimate for η previously obtained from precise finite-size scaling data. PMID:25768483
Fidelity Susceptibility Made Simple: A Unified Quantum Monte Carlo Approach
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Imriška, Jakub; Ma, Ping Nang; Troyer, Matthias
2015-07-01
The fidelity susceptibility is a general purpose probe of phase transitions. With its origin in quantum information and in the differential geometry perspective of quantum states, the fidelity susceptibility can indicate the presence of a phase transition without prior knowledge of the local order parameter, as well as reveal the universal properties of a critical point. The wide applicability of the fidelity susceptibility to quantum many-body systems is, however, hindered by the limited computational tools to evaluate it. We present a generic, efficient, and elegant approach to compute the fidelity susceptibility of correlated fermions, bosons, and quantum spin systems in a broad range of quantum Monte Carlo methods. It can be applied to both the ground-state and nonzero-temperature cases. The Monte Carlo estimator has a simple yet universal form, which can be efficiently evaluated in simulations. We demonstrate the power of this approach with applications to the Bose-Hubbard model and the spin-1/2 XXZ model, and use it to examine the hypothetical intermediate spin-liquid phase in the Hubbard model on the honeycomb lattice.
Variational Monte Carlo investigation of SU (N ) Heisenberg chains
NASA Astrophysics Data System (ADS)
Dufour, Jérôme; Nataf, Pierre; Mila, Frédéric
2015-05-01
Motivated by recent experimental progress in the context of ultracold multicolor fermionic atoms in optical lattices, we have investigated the properties of the SU (N) Heisenberg chain with totally antisymmetric irreducible representations, the effective model of Mott phases with m
On the time scale associated with Monte Carlo simulations.
Bal, Kristof M; Neyts, Erik C
2014-11-28
Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique to access longer timescales in atomistic simulations, allowing, for example, the simulation of phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid state systems by lowering of the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement for or complement to molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales to MC simulations in general. PMID:25429930
Monte Carlo Simulation of Massive Absorbers for Cryogenic Calorimeters
Brandt, D.; Asai, M.; Brink, P.L.; Cabrera, B.; do Couto e Silva, E.; Kelsey, M.; Leman, S.W.; McArthy, K.; Resch, R.; Wright, D.; Figueroa-Feliciano, E.
2012-06-12
There is a growing interest in cryogenic calorimeters with macroscopic absorbers for applications such as dark matter direct detection and rare event search experiments. The physics of energy transport in calorimeters with absorber masses exceeding several grams is made complex by the anisotropic nature of the absorber crystals as well as the changing mean free paths as phonons decay to progressively lower energies. We present a Monte Carlo model capable of simulating anisotropic phonon transport in cryogenic crystals. We have initiated the validation process and discuss the level of agreement between our simulation and experimental results reported in the literature, focusing on heat pulse propagation in germanium. The simulation framework is implemented using Geant4, a toolkit originally developed for high-energy physics Monte Carlo simulations. Geant4 has also been used for nuclear and accelerator physics, and applications in medical and space sciences. We believe that our current work may open up new avenues for applications in material science and condensed matter physics.
Infinite Variance in Fermion Quantum Monte Carlo Calculations
Shi, Hao
2015-01-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties, without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, lattice QCD calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied upon to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple sub-areas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations turn out to have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calc...
Fourier Monte Carlo renormalization-group approach to crystalline membranes
NASA Astrophysics Data System (ADS)
Tröster, A.
2015-02-01
The computation of the critical exponent η characterizing the universal elastic behavior of crystalline membranes in the flat phase continues to represent challenges to theorists as well as computer simulators that manifest themselves in a considerable spread of numerical results for η published in the literature. We present additional insight into this problem that results from combining Wilson's momentum shell renormalization-group method with the power of modern computer simulations based on the Fourier Monte Carlo algorithm. After discussing the ideas and difficulties underlying this combined scheme, we present a calculation of the renormalization-group flow of the effective two-dimensional Young modulus for momentum shells of different thickness. Extrapolation to infinite shell thickness allows us to produce results in reasonable agreement with those obtained by functional renormalization group or by Fourier Monte Carlo simulations in combination with finite-size scaling. Moreover, our method allows us to obtain a decent estimate for the value of the Wegner exponent ω that determines the leading correction to scaling, which in turn allows us to refine our numerical estimate for η previously obtained from precise finite-size scaling data.
Multiparticle moves in acceptance rate optimized monte carlo.
Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang
2015-11-15
Molecular Dynamics (MD) and Monte Carlo (MC) based simulation methods are widely used to investigate molecular and nanoscale structures and processes. While the investigation of systems in MD simulations is limited by very small time steps, MC methods are often stifled by low acceptance rates for moves that significantly perturb the system. In many Metropolis MC methods with hard potentials, the acceptance rate drops exponentially with the number of uncorrelated, simultaneously proposed moves. In this work, we discuss a multiparticle Acceptance Rate Optimized Monte Carlo approach (AROMoCa) to construct collective moves with near unit acceptance probability, while preserving detailed balance even for large step sizes. After an illustration of the protocol, we demonstrate that AROMoCa significantly accelerates MC simulations in four model systems in comparison to standard MC methods. AROMoCa can be applied to all MC simulations where a gradient of the potential is available and can help to significantly speed up molecular simulations. © 2015 Wiley Periodicals, Inc. PMID:26459216
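The exponential drop in acceptance with the number of uncorrelated, simultaneously proposed moves mentioned above is easy to reproduce in a toy model. The sketch below (an illustration, not the AROMoCa protocol) runs standard Metropolis MC on independent 1D harmonic wells and measures the acceptance rate when `n_moved` particles are displaced per proposal; all parameters are assumptions.

```python
import math, random

def acceptance_rate(n_moved, n_particles=32, step=1.0, n_steps=20000, seed=2):
    """Fraction of accepted Metropolis moves when `n_moved` uncorrelated
    particle displacements are proposed simultaneously (independent 1D
    harmonic wells, E = sum x_i^2 / 2, kT = 1)."""
    random.seed(seed)
    x = [0.0] * n_particles
    accepted = 0
    for _ in range(n_steps):
        idx = random.sample(range(n_particles), n_moved)
        trial = {i: x[i] + random.uniform(-step, step) for i in idx}
        dE = sum(0.5 * (trial[i] ** 2 - x[i] ** 2) for i in idx)
        if dE <= 0.0 or random.random() < math.exp(-dE):
            for i in idx:
                x[i] = trial[i]
            accepted += 1
    return accepted / n_steps
```

Since the energy changes of uncorrelated displacements add, the acceptance of a joint move behaves roughly like a product of single-move acceptances, hence the exponential decay with `n_moved` that collective-move constructions such as AROMoCa are designed to avoid.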
MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION
Sternat, M.; Nichols, T.
2011-06-09
Reactor burnup or depletion codes are used extensively in the fields of nuclear forensics and nuclear safeguards. Two common codes include MONTEBURNS and MCNPX/CINDER. These are Monte-Carlo depletion routines utilizing MCNP for neutron transport calculations and either ORIGEN or CINDER for burnup calculations. Uncertainties exist in the MCNP steps, but this information is not passed to the depletion calculations or saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs is performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single assembly, infinite lattice configuration. This model was burned for a 150 day cycle broken down into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated and histograms were created at each burnup step using the Scott method to determine the bin width. The distributions for each code serve as a statistical benchmark, and comparisons are made between the codes. It was expected that the gram quantities and k-effective histograms would be normally distributed, since they were produced by a Monte-Carlo routine, but some of the results appear not to be. Statistical analyses are performed using the χ² test against a normal distribution for the k-effective results and several isotopes including ¹³⁴Cs, ¹³⁷Cs, ²³⁵U, ²³⁸U, ²³⁷Np, ²³⁸Pu, ²³⁹Pu, and ²⁴⁰Pu.
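The histogram-plus-χ² procedure described above can be sketched in a few lines (an illustration, not the study's analysis scripts): bin the repeated-run outputs with Scott's rule, h = 3.49 σ n^(-1/3), and compute the χ² statistic of the observed counts against a normal distribution with the sample mean and variance.

```python
import math, random

def scott_bin_width(data):
    """Scott's rule for histogram bin width: h = 3.49 * sigma * n^(-1/3)."""
    n = len(data)
    mean = sum(data) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in data) / (n - 1))
    return 3.49 * sigma / n ** (1.0 / 3.0)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def chi2_vs_normal(data):
    """Chi-square statistic of the histogram of `data` (Scott-rule bins)
    against a normal distribution with the sample mean and variance."""
    n = len(data)
    mean = sum(data) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in data) / (n - 1))
    h = scott_bin_width(data)
    lo = min(data)
    nbins = max(1, int(math.ceil((max(data) - lo) / h)))
    counts = [0] * nbins
    for v in data:
        counts[min(int((v - lo) / h), nbins - 1)] += 1
    chi2 = 0.0
    for k, c in enumerate(counts):
        a, b = lo + k * h, lo + (k + 1) * h
        expected = n * (normal_cdf((b - mean) / sigma) - normal_cdf((a - mean) / sigma))
        if expected > 1e-12:                    # skip essentially empty bins
            chi2 += (c - expected) ** 2 / expected
    return chi2
```

A large statistic relative to the number of bins flags a non-normal distribution, which is the kind of deviation the abstract reports for some isotopic outputs.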
Chemical accuracy from quantum Monte Carlo for the Benzene Dimer
Azadi, Sam
2015-01-01
We report an accurate study of interactions between Benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory (DFT) using different van der Waals (vdW) functionals. In our QMC calculations, we use accurate correlated trial wave functions including three-body Jastrow factors, and backflow transformations. We consider two benzene molecules in the parallel displaced (PD) geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the CCSD(T)/CBS limit is -2.65(2) kcal/mol [E. Miliordos et al, J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, compar...
Monte Carlo simulation of quantum Zeno effect in the brain
Danko Georgiev
2014-12-11
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
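The core Zeno-type Monte Carlo experiment described above can be sketched for a bare two-level system (an illustration only; it omits the paper's multi-well tunneling model and decoherence entirely). Between projective measurements the state Rabi-oscillates at frequency ω, so each measurement finds the initial state with probability cos²(ωτ/2); frequent measurement then freezes the evolution.

```python
import math, random

def zeno_survival(omega=1.0, total_time=math.pi, n_meas=10, n_trials=20000, seed=8):
    """Monte Carlo survival probability of a two-level system undergoing Rabi
    oscillation (frequency omega) that is projectively measured n_meas times
    at equal intervals tau = total_time / n_meas; each measurement finds the
    initial state with probability cos^2(omega * tau / 2)."""
    random.seed(seed)
    tau = total_time / n_meas
    p_stay = math.cos(omega * tau / 2.0) ** 2
    survived = 0
    for _ in range(n_trials):
        alive = True
        for _ in range(n_meas):
            if random.random() >= p_stay:   # measurement projects onto the other state
                alive = False
                break
        if alive:
            survived += 1
    return survived / n_trials
```

With ω = 1 and total time π, ten measurements keep most trajectories in the initial state, while two measurements do not; the paper's point is that adding a finite decoherence time destroys this enhancement on longer timescales.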
Quantum Monte Carlo algorithms for electronic structure at the petascale; the endstation project.
Kim, J; Ceperley, D M; Purwanto, W; Walter, E J; Krakauer, H; Zhang, S W; Kent, P.R. C; Hennig, R G; Umrigar, C; Bajdich, M; Kolorenc, J; Mitas, L; Srinivasan, A
2008-10-01
Over the past two decades, continuum quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. By solving the Schrödinger equation through a stochastic projection, it achieves the greatest accuracy and reliability of methods available for physical systems containing more than a few quantum particles. QMC enjoys scaling favorable to quantum chemical methods, with a computational effort which grows with the second or third power of system size. This accuracy and scalability has enabled scientific discovery across a broad spectrum of disciplines. The current methods perform very efficiently at the terascale. The quantum Monte Carlo Endstation project is a collaborative effort among researchers in the field to develop a new generation of algorithms, and their efficient implementations, which will take advantage of the upcoming petaflop architectures. Some aspects of these developments are discussed here. These tools will expand the accuracy, efficiency and range of QMC applicability and enable us to tackle challenges which are currently out of reach. The methods will be applied to several important problems including electronic and structural properties of water, transition metal oxides, nanosystems and ultracold atoms.
Monte Carlo Simulation Of Soot Evolution along Lagrangian Trajectories in a Turbulent Flame
NASA Astrophysics Data System (ADS)
Abdelgadir, Ahmed; Zhou, Kun; Attili, Antonio; Bisetti, Fabrizio
2013-11-01
A newly developed Monte Carlo method is used to simulate soot formation and growth in a turbulent n-heptane/air flame. The Monte Carlo method is used to simulate the soot evolution along selected Lagrangian trajectories obtained from a direct numerical simulation of a turbulent sooting jet flame [Attili et al., Direct and Large-Eddy Simulation 9, Springer, 2013] based on a high-order method of moments. The method adopts an operator splitting approach, which splits the deterministic processes (nucleation, surface growth and oxidation) from coagulation, which is treated stochastically. The purpose of this work is to assess the solution based on the moment method and to investigate the soot particle size distribution (PSD) that is not available in methods of moments. Nucleation and coagulation have the greatest effect on the PSD; therefore, various coagulation models are considered. Along each trajectory, one or more rapid nucleation events occur, affecting the shape of the PSD. It is shown that oxidation and surface growth affect the PSD quantitatively, but do not change the shape significantly.
Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor
NASA Astrophysics Data System (ADS)
Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert
2009-10-01
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer. Program summary: Program title: Phoogle-C/Phoogle-G Catalogue identifier: AEEB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 51 264 No. of bytes in distributed program, including test data, etc.: 2 238 805 Distribution format: tar.gz Programming language: C++ Computer: Designed for Intel PCs. Phoogle-G requires a NVIDIA graphics card with support for CUDA 1.1 Operating system: Windows XP Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures RAM: 1 GB Classification: 21.1 External routines: Charles Karney Random number library. Microsoft Foundation Class library. NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures.
Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point-source is supported. The graphics-card version has no user interface. The simulation parameters must be set in the source code. The desktop version has a simple user interface; however some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References: http://www.nvidia.com/object/cuda_home.html. S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
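The photon-transport kernel that such GPU codes parallelize is small; a minimal single-threaded sketch (illustrative, not the Phoogle implementation, and without the Henyey-Greenstein anisotropic phase function a real code would use) tracks photons from an isotropic point source through exponential free paths with isotropic scattering and absorption.

```python
import math, random

def simulate_photons(mu_s=9.0, mu_a=1.0, n_photons=20000, seed=4):
    """Track photons from an isotropic point source in an infinite turbid
    medium: exponential free paths with attenuation mu_t = mu_s + mu_a,
    isotropic scattering, and absorption with probability mu_a / mu_t at
    each interaction. Returns the mean number of scattering events per
    photon, which should approach mu_s / mu_a."""
    random.seed(seed)
    mu_t = mu_s + mu_a
    p_abs = mu_a / mu_t
    total_scatters = 0
    for _ in range(n_photons):
        x = y = z = 0.0
        while True:
            # sample an isotropic direction and an exponential step length
            cos_t = 2.0 * random.random() - 1.0
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            phi = 2.0 * math.pi * random.random()
            step = -math.log(1.0 - random.random()) / mu_t
            x += step * sin_t * math.cos(phi)
            y += step * sin_t * math.sin(phi)
            z += step * cos_t
            if random.random() < p_abs:
                break                           # photon absorbed
            total_scatters += 1
    return total_scatters / n_photons
```

Because each photon history is independent, the outer loop maps directly onto one GPU thread per photon, which is the source of the speedups the abstract reports.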
A Monte Carlo Method for Modeling Thermal Damping: Beyond the Brownian-Motion Master Equation
Kurt Jacobs
2009-01-06
The "standard" Brownian motion master equation, used to describe thermal damping, is not completely positive and does not admit a Monte Carlo method, which is important in numerical simulations. To eliminate both these problems one must add a term that generates additional position diffusion. Here we show that one can obtain a completely positive, efficiently solvable model of simple quantum Brownian motion without any extra diffusion. This is achieved by using a stochastic Schrödinger equation (SSE), closely analogous to Langevin's equation, that has no equivalent Markovian master equation. Considering a specific example, we show that this SSE is sensitive to nonlinearities in situations in which the master equation is not, and may therefore be a better model of damping for nonlinear systems.
Three-dimensional Monte Carlo model of the coffee-ring effect in evaporating colloidal droplets
Crivoi, A.; Duan, Fei
2014-01-01
The residual deposits usually left near the contact line after pinned sessile colloidal droplet evaporation are commonly known as the “coffee-ring” effect. However, there have been few attempts to simulate the effect, and a realistic, fully three-dimensional (3D) model has been lacking, since the complex drying process seems to limit further investigation. Here we develop a stochastic method to model particle deposition in an evaporating pinned sessile colloidal droplet. The 3D Monte Carlo model is developed in the spherical-cap-shaped droplet. In the algorithm, the analytical equations of fluid flow are used to calculate the probability distributions for the biased random walk, associated with the drift-diffusion equations. We obtain the 3D coffee-ring structures as the final results of the simulation and analyze the dependence of the ring profile on the particle volumetric concentration and sticking probability. PMID:24603647
Three-dimensional Monte Carlo model of the coffee-ring effect in evaporating colloidal droplets
NASA Astrophysics Data System (ADS)
Crivoi, A.; Duan, Fei
2014-03-01
The residual deposits usually left near the contact line after pinned sessile colloidal droplet evaporation are commonly known as the "coffee-ring" effect. However, there have been few attempts to simulate the effect, and a realistic, fully three-dimensional (3D) model has been lacking, since the complex drying process seems to limit further investigation. Here we develop a stochastic method to model particle deposition in an evaporating pinned sessile colloidal droplet. The 3D Monte Carlo model is developed in the spherical-cap-shaped droplet. In the algorithm, the analytical equations of fluid flow are used to calculate the probability distributions for the biased random walk, associated with the drift-diffusion equations. We obtain the 3D coffee-ring structures as the final results of the simulation and analyze the dependence of the ring profile on the particle volumetric concentration and sticking probability.
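The biased-random-walk idea behind the model can be sketched in 2D (an illustration, not the paper's 3D spherical-cap model, which derives its bias from the analytical flow field): particles drift outward toward the pinned contact line with an assumed probability `p_out` and stick at the rim, producing a ring-like final distribution of radii.

```python
import math, random

def deposit_radii(n_particles=2000, radius=30.0, n_steps=400, p_out=0.65, seed=5):
    """Biased 2D random walk toward the pinned contact line (a stand-in for
    the outward capillary flow): each step moves one unit, radially outward
    with probability p_out, otherwise in a uniformly random direction.
    Particles stick when they reach the rim or when the 'drying time'
    n_steps runs out. Returns final radii normalized by the rim radius."""
    random.seed(seed)
    radii = []
    for _ in range(n_particles):
        while True:                              # start uniformly inside the disk
            x = random.uniform(-radius, radius)
            y = random.uniform(-radius, radius)
            if x * x + y * y < radius * radius:
                break
        for _ in range(n_steps):
            r = math.hypot(x, y)
            if r >= radius:
                break                            # stuck at the contact line
            if r > 1e-9 and random.random() < p_out:
                x += x / r                       # unit step radially outward
                y += y / r
            else:
                ang = 2.0 * math.pi * random.random()
                x += math.cos(ang)
                y += math.sin(ang)
        radii.append(min(math.hypot(x, y), radius) / radius)
    return radii
```

With the outward bias switched on, nearly all particles end up at the rim (a ring); with `p_out=0` the final radii are spread more broadly, which is the qualitative contrast the full model quantifies through the flow-derived drift.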
Kinetic Monte Carlo simulation of dopant-defect systems under submicrosecond laser thermal processes
Fisicaro, G.; Pelaz, Lourdes; Lopez, P.; Italia, M.; Huet, K.; Venturini, J.; La Magna, A.
2012-11-06
An innovative kinetic Monte Carlo (KMC) code has been developed, which models the post-implant kinetics of the defect system in the extremely far-from-equilibrium conditions caused by laser irradiation close to the liquid-solid interface. It considers defect diffusion, annihilation and clustering. The code properly implements, consistently with the stochastic formalism, the fast-varying local event rates related to the evolution of the thermal field T(r,t). This feature of our numerical method represents an important advancement with respect to current state-of-the-art KMC codes. The reduction of the implantation damage and its reorganization into defect aggregates are studied as a function of the process conditions. Phosphorus activation efficiency, experimentally determined in similar conditions, has been related to the emerging damage scenario.
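The stochastic formalism referred to above is the standard rejection-free KMC (Gillespie-type) loop: sum the event rates, advance time by an exponential increment drawn from the total rate, and fire one event. A minimal sketch for first-order defect annihilation follows (illustrative only; the paper's code handles diffusion, clustering, and a space- and time-dependent T(r,t), none of which are modeled here).

```python
import math, random

def kmc_decay(n_defects=1000, k=2.0, seed=6):
    """Rejection-free kinetic Monte Carlo (Gillespie) for first-order defect
    annihilation A -> 0 with per-defect rate k. The total rate is recomputed
    at every step, which is also the hook through which a time-dependent
    rate k(T(t)) would enter."""
    random.seed(seed)
    n, t = n_defects, 0.0
    while n > 0:
        total_rate = k * n                       # recompute event rates each step
        t += -math.log(1.0 - random.random()) / total_rate
        n -= 1                                   # the selected event fires
    return t                                     # extinction time, roughly ln(N)/k
```

Recomputing the rates inside the loop is what makes fast-varying local rates tractable in this scheme, at the cost of more bookkeeping than constant-rate KMC.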
Quantum Monte Carlo simulations of the one-dimensional extended Hubbard model
Somsky, W.R.; Gubernatis, J.E.
1989-01-01
We report preliminary results of an investigation of the thermodynamic properties of the extended Hubbard model in one dimension, calculated with the world-line Monte Carlo method described by Hirsch et al. With strictly continuous world-lines, we are able to measure the expectation of operators that conserve fermion number locally, such as the energy and (spatial) occupation number. By permitting the world-lines to be "broken" stochastically, we may also measure the expectation of operators that conserve fermion number only globally, such as the single-particle Green's function. For a 32 site lattice we present preliminary calculations of the average electron occupancy as a function of wavenumber when U = 4, V = 0 and β = 16. For a half-filled band we find no indications of a Fermi surface. Slightly away from half-filling, we find Fermi-surface-like behavior similar to that found in other numerical investigations. 8 refs., 3 figs.
An excited-state approach within full configuration interaction quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Blunt, N. S.; Smart, Simon D.; Booth, George H.; Alavi, Ali
2015-10-01
We present a new approach to calculate excited states with the full configuration interaction quantum Monte Carlo (FCIQMC) method. The approach uses a Gram-Schmidt procedure, instantaneously applied to the stochastically evolving distributions of walkers, to orthogonalize higher energy states against lower energy ones. It can thus be used to study several of the lowest-energy states of a system within the same symmetry. This additional step is particularly simple and computationally inexpensive, requiring only a small change to the underlying FCIQMC algorithm. No trial wave functions or partitioning of the space is needed. The approach should allow excited states to be studied for systems similar to those accessible to the ground-state method due to a comparable computational cost. As a first application, we consider the carbon dimer in basis sets up to quadruple-zeta quality and compare to existing results where available.
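The Gram-Schmidt deflation idea above has a simple deterministic caricature (an illustration, not FCIQMC itself, which applies the projection to stochastic walker populations): repeated application of the projector (S·I − H) converges to the ground state, and running the same iteration while orthogonalizing against the converged ground state converges to the first excited state. The shift S and matrix below are arbitrary assumptions.

```python
import math

def matvec(H, v):
    return [sum(H[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def rayleigh(H, v):
    Hv = matvec(H, v)
    return sum(v[i] * Hv[i] for i in range(len(v)))

def lowest_two_states(H, shift=5.0, n_iter=500):
    """Projector iteration v <- (shift*I - H) v for the ground state of a
    real symmetric H (shift above the top of the spectrum), then the same
    iteration with a Gram-Schmidt step against the converged ground state
    to reach the first excited state."""
    n = len(H)
    g = normalize([1.0] * n)                      # ground-state iterate
    for _ in range(n_iter):
        Hg = matvec(H, g)
        g = normalize([shift * g[i] - Hg[i] for i in range(n)])
    x = normalize([float(i + 1) for i in range(n)])  # excited-state iterate
    for _ in range(n_iter):
        Hx = matvec(H, x)
        w = [shift * x[i] - Hx[i] for i in range(n)]
        ov = sum(w[i] * g[i] for i in range(n))
        x = normalize([w[i] - ov * g[i] for i in range(n)])  # Gram-Schmidt step
    return rayleigh(H, g), rayleigh(H, x)
```

In FCIQMC the same orthogonalization is applied instantaneously to the evolving walker distributions, which is why the extra cost over the ground-state method is small.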
SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output
Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.
2013-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136
System Level Numerical Analysis of a Monte Carlo Simulation of the E. Coli Chemotaxis
Siettos, Constantinos I
2010-01-01
Over the past few years it has been demonstrated that "coarse timesteppers" establish a link between traditional numerical analysis and microscopic/stochastic simulation. The underlying assumption of the associated lift-run-restrict-estimate procedure is that macroscopic models exist and close in terms of a few governing moments of microscopically evolving distributions, but they are unavailable in closed form. This leads to a system identification based computational approach that sidesteps the necessity of deriving explicit closures. Two-level codes are constructed; the outer code performs macroscopic, continuum level numerical tasks, while the inner code estimates (through appropriately initialized bursts of microscopic simulation) the quantities required for continuum numerics. Such quantities include residuals, time derivatives, and the action of coarse slow Jacobians. We demonstrate how these coarse timesteppers can be applied to perform equation-free computations of a kinetic Monte Carlo simulation of...
Application analysis of Monte Carlo to estimate the capacity of geothermal resources in Lawu Mount
Supriyadi; Srigutomo, Wahyu; Munandar, Arif
2014-03-24
Monte Carlo analysis has been applied to the calculation of geothermal resource capacity based on the volumetric method issued by Standar Nasional Indonesia (SNI). A deterministic formula is converted into a stochastic formula to take into account the uncertainties in the input parameters. The method yields a probability distribution of the potential power stored beneath the Lawu Mount geothermal area. For 10,000 iterations, the capacity of the geothermal resources is in the range of 139.30-218.24 MWe, with a most likely value of 177.77 MWe. The probability that the resource capacity exceeds 196.19 MWe is less than 10%. The power density of the prospect area covering 17 km² is 9.41 MWe/km² with 80% probability.
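The deterministic-to-stochastic conversion described above can be sketched as follows (an illustration only: the parameter ranges, the lumped heat-density term, and the triangular distributions are placeholders, not the Lawu Mount survey values or the SNI formula). Each input of a simplified volumetric product is replaced by a random draw, and percentiles of the resulting sample characterize the capacity.

```python
import random

def percentile(sorted_vals, p):
    """Nearest-rank percentile of an ascending list."""
    return sorted_vals[int(p / 100.0 * (len(sorted_vals) - 1))]

def capacity_distribution(n_iter=10000, seed=9):
    """Stochastic volumetric estimate: each deterministic input is replaced
    by a triangular random draw (min, max, mode). All ranges below are
    illustrative assumptions."""
    random.seed(seed)
    samples = []
    for _ in range(n_iter):
        area = random.triangular(15.0, 20.0, 17.0)        # prospect area, km^2
        thickness = random.triangular(1.5, 2.5, 2.0)      # reservoir thickness, km
        heat_density = random.triangular(3.0, 7.0, 5.0)   # lumped MWe per km^3
        recovery = random.triangular(0.10, 0.25, 0.15)    # recovery factor
        samples.append(area * thickness * heat_density * recovery)
    samples.sort()
    return (percentile(samples, 10), percentile(samples, 50), percentile(samples, 90))
```

The P90/P50/P10 values read off the sorted sample play the role of the "most likely" value and the 10% exceedance risk quoted in the abstract.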
Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data
Zourari, K.; Peppa, V.; Papagiannis, P.; Ballester, Facundo; Siebert, Frank-André
2014-04-15
Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20–1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thickness of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20–1090 keV, in 10 keV intervals). The three parameter empirical model introduced by Archer et al. [“Diagnostic x-ray shielding design based on an empirical model of photon attenuation,” Health Phys. 44, 507–517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. [“Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities,” Med. Phys. 34, 1398–1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. A good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed.
Conclusions: The data of this work allow for the accurate calculation of structural shielding thickness, taking into account the spectral variation with shield thickness, and broad beam conditions, in a realistic geometry. The simplicity of calculations also obviates the need for the use of crude transmission data estimates such as the half and tenth value layer indices. Although this study was primarily designed for brachytherapy, results might also be useful for radiology and nuclear medicine facility design, provided broad beam conditions apply.
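The Archer three-parameter model and the spectral superposition described above can be sketched in a few lines (a minimal illustration, not the authors' program; the parameter values used below are hypothetical, not fitted MCNP5 data):

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer et al. three-parameter broad-beam transmission model:
    T(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def poly_transmission(x, weights, params):
    """Superpose monoenergetic transmission curves, weighted by the relative
    intensity of each spectral line, to obtain the broad-beam transmission
    of a polyenergetic source at shield thickness x."""
    total = float(sum(weights))          # normalize the spectrum
    return sum((w / total) * archer_transmission(x, *p)
               for w, p in zip(weights, params))
```

Given fitted (alpha, beta, gamma) triplets per energy bin, the shield thickness for a target transmission can then be found by inverting the summed curve numerically.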
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
NASA Astrophysics Data System (ADS)
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy range can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamical 3D datasets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronic fixed-source and criticality design-parameter calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in the former version of SuperMC. Recently, proton transport has also been integrated into SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes.
In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat intermediate-energy nuclear reactions for protons. Other hadronic models are also under development. The benchmarking of proton transport in SuperMC has been performed against Accelerator Driven subcritical System (ADS) benchmark data and models released by the IAEA from its Coordinated Research Project (CRP). The incident proton energy is 1.0 GeV. The neutron flux and energy deposition were calculated. The results simulated using SuperMC and FLUKA agree within the statistical uncertainty inherent in the Monte Carlo method. Proton transport in SuperMC has also been applied to the China Lead-Alloy cooled Reactor (CLEAR), designed by the FDS Team, for the calculation of spallation reactions in the target.
Neutron stimulated emission computed tomography: a Monte Carlo simulation approach
NASA Astrophysics Data System (ADS)
Sharma, A. C.; Harrawood, B. P.; Bender, J. E.; Tourassi, G. D.; Kapadia, A. J.
2007-10-01
A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. 
Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low-energy neutrons in tissue, a concern given the high hydrogen content of biological tissue. Third, the energy absorbed in tissue during neutron interrogation will be determined in order to estimate the dose. Results from these three simulation experiments demonstrate that GEANT4 is an effective simulation platform that can be used to facilitate the future development and optimization of NSECT.
A wave-function Monte Carlo method for simulating conditional master equations
Kurt Jacobs
2010-01-21
Wave-function Monte Carlo methods are an important tool for simulating quantum systems, but the standard method cannot be used to simulate decoherence in continuously measured systems. Here we present a new Monte Carlo method for such systems. This was used to perform the simulations of a continuously measured nano-resonator in [Phys. Rev. Lett. 102, 057208 (2009)].
Direct Monte Carlo simulation of chemical reaction systems: Simple bimolecular reactions
Anderson, James B.
This direct Monte Carlo simulation method, originated by Bird, offers a powerful tool for predicting and understanding the behavior of gas-phase chemical reaction systems; its extension to chemical reactions allows treatment of reaction systems with nonthermal distributions.
A Scalable Parallel Monte Carlo Method for Free Energy Simulations of Molecular Systems
Chan, Derek Y C
An external potential is added to the system Hamiltonian; this external potential is related to the free energy. Key words: parallel computing; high performance computing; Monte Carlo; free energy; molecular systems.
Goddard III, William A.
The continuous configurational Boltzmann biased (CCBB) direct Monte Carlo method is applied to free energy calculations: CCBB with 400 chains leads to an accuracy of 0.1% in the free energy, whereas simple sampling direct Monte Carlo does not reach this accuracy. The method is also applied to evaluating the mixing free energy for polymer blends. © 1997 American Institute of Physics.
Higher accuracy quantum Monte Carlo calculations of the barrier for the HH2 reaction
Anderson, James B.
As in the previous studies, the Green's function quantum Monte Carlo method is used. The lowest-energy expectation value for the energy at the saddle point in the barrier to reaction was previously obtained with configuration interaction and Monte Carlo methods.
How-to: Write a parton-level Monte Carlo event generator
Andreas Papaefstathiou
2014-12-15
This article provides an introduction to the principles of particle physics event generators that are based on the Monte Carlo method. Following some preliminaries, instructions on how to build a basic parton-level Monte Carlo event generator are given through exercises.
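The hit-or-miss (acceptance-rejection) sampling at the core of such a generator can be illustrated with a toy example in the spirit of the article's exercises (a sketch, not code from the article): sampling the polar angle for a 2 → 2 process with dσ/dcosθ ∝ 1 + cos²θ.

```python
import random

def generate_events(n, seed=1):
    """Toy parton-level generator: sample cos(theta) for a 2 -> 2 process
    with d(sigma)/d(cos theta) proportional to 1 + cos^2(theta), using
    hit-or-miss (acceptance-rejection) sampling."""
    rng = random.Random(seed)
    fmax = 2.0                            # maximum of 1 + cos^2(theta)
    events = []
    while len(events) < n:
        c = rng.uniform(-1.0, 1.0)        # trial value of cos(theta)
        if rng.uniform(0.0, fmax) <= 1.0 + c * c:
            events.append(c)              # accept with probability f(c)/fmax
    return events
```

A full generator would also assign momenta and event weights, but the acceptance loop above is the essential Monte Carlo step.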
First-row hydrides: Dissociation and ground state energies using quantum Monte Carlo
Anderson, James B.
The dissociation energies De have been calculated with accuracies of 0.5 kcal/mol or better using the fixed-node quantum Monte Carlo method. For all hydrides, the dissociation energies are consistent with experimental values.
Widom, Michael
2011-01-01
Kinetic Monte Carlo method applied to nucleic acid hairpin folding [Phys. Rev. E 84, 061912 (2011)]. Kinetic Monte Carlo is applied to coarse-grained systems, such as nucleic acid secondary structure states; secondary structure models of nucleic acids record the pairings of complementary bases.
MONTE CARLO MIXTURE KALMAN FILTER AND ITS APPLICATION TO SPACE-TIME INVERSION
Higuchi, Tomoyuki
Kalman-filter-based methods, however, do not allow for strong nonlinearity. We apply a new scheme, the Monte Carlo mixture Kalman filter (MCMKF), to the time-dependent inversion.
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael
2007-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program
ERIC Educational Resources Information Center
Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.
2004-01-01
The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na(super +), Cl(super -), and Ar on a personal computer to show that it is easily feasible to…
ERIC Educational Resources Information Center
Mao, Xiuzhen; Xin, Tao
2013-01-01
The Monte Carlo approach, which has previously been implemented in traditional computerized adaptive testing (CAT), is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…
A Monte Carlo Approach to the Design, Assembly, and Evaluation of Multistage Adaptive Tests
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.
2008-01-01
This article presents an application of Monte Carlo methods for developing and assembling multistage adaptive tests (MSTs). A major advantage of the Monte Carlo assembly over other approaches (e.g., integer programming or enumerative heuristics) is that it provides a uniform sampling from all MSTs (or MST paths) available from a given item pool.…
Nonlinear elasticity of an α-helical polypeptide: Monte Carlo studies
Levine, Alex J.
We report on Monte Carlo studies of the elastic properties of the helix-coil wormlike chain, with an Ising-like variable that controls the local chain bending modulus. We characterize the nonlinear elastic properties of the polypeptide.
Melting of Iron under Earth's Core Conditions from Diffusion Monte Carlo Free Energy Calculations
Alfè, Dario
We use quantum Monte Carlo techniques to compute the free energies of solid and liquid iron under Earth's core conditions.
CS294 Markov Chain Monte Carlo: Foundations & Applications Fall 2009 Lecture 2: September 8
Sinclair, Alistair
2.1 Markov Chains. We begin by reviewing the basic goal of the Markov chain Monte Carlo paradigm. Assume a finite state space Ω and a weight function w : Ω → ℝ⁺; our goal is to sample states with probability proportional to their weight.
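The sampling goal stated in these notes can be illustrated with a minimal Metropolis chain on a finite state space (an illustrative sketch, not from the lecture notes; a uniform proposal is assumed, which is symmetric, so the acceptance probability reduces to min(1, w(y)/w(x))):

```python
import random

def metropolis_sample(states, w, steps, seed=0):
    """Minimal Metropolis chain on a finite state space: propose a uniformly
    random state, accept with probability min(1, w(y)/w(x)).  The chain's
    stationary distribution is proportional to the weight function w."""
    rng = random.Random(seed)
    x = rng.choice(states)
    visits = {s: 0 for s in states}
    for _ in range(steps):
        y = rng.choice(states)            # symmetric (uniform) proposal
        if rng.random() < min(1.0, w(y) / w(x)):
            x = y                         # accept the move
        visits[x] += 1                    # record the current state
    return visits
```

With w(s) = s on states {1, 2, 3}, the long-run visit frequencies approach 1/6, 2/6, 3/6.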
Monte Carlo simulations of polymer brushes C.-M. Chen and Y.-A. Fwu
Chen, Chi-Ming
Polymer vesicles can be used for controlled drug delivery, in which polymer chains can protect the vesicles. We perform three-dimensional Monte Carlo simulations of flexible and semiflexible polymer brushes at various grafting densities.
Phase Equilibria of Lattice Polymers from Histogram Reweighting Monte Carlo Simulations
Phase equilibria in polymer solutions are important in manufacturing, processing, and applications. Histogram-reweighting Monte Carlo simulations were used to obtain polymer/solvent phase diagrams for lattice homopolymers.
…and energetic properties of water clusters, and the thermodynamic and structural properties of liquid water in different environments; we evaluate the coexistence properties of this water model using Gibbs ensemble Monte Carlo simulations, complementing previous Gibbs ensemble Monte Carlo studies of coexistence properties using different water models.
Open source software for electric field Monte Carlo simulation of coherent
Pradhan, Prabhakar
Open source software for electric field Monte Carlo simulation of coherent backscattering in scattering media containing linear birefringence. © 2012 Society of Photo-Optical Instrumentation Engineers.
Ryan, Dominic
Monte Carlo simulations of transverse spin freezing in the randomly frustrated three-dimensional Heisenberg model. Experimental studies of frustrated magnetic systems show the presence of competing interactions.
Ageing at the Spin-Glass/Ferromagnet Transition: Monte Carlo Simulation using GPUs
Peinke, Joachim
Monte Carlo simulations using GPUs of ageing at the spin-glass/ferromagnet transition, in three dimensions, for samples of size up to N = 128³ and for up to 10⁸ Monte Carlo sweeps. The system is characterized by a rough free-energy landscape and by slow glassy dynamics arising from disorder and frustration.
Rao-Blackwellized Monte Carlo Data Association for Multiple Target Tracking
Kaski, Samuel
Rao-Blackwellized Monte Carlo data association is presented for tracking multiple targets in the presence of clutter and false alarm measurements. Keywords: multiple target tracking, data association, Rao-Blackwellization, Kalman filter, sequential Monte Carlo.
Teacher's Corner: Using SAS for Monte Carlo Simulation Research in SEM
ERIC Educational Resources Information Center
Fan, Xitao; Fan, Xiaotao
2005-01-01
This article illustrates the use of the SAS system for Monte Carlo simulation work in structural equation modeling (SEM). Data generation procedures for both multivariate normal and nonnormal conditions are discussed, and relevant SAS codes for implementing these procedures are presented. A hypothetical example is presented in which Monte Carlo…
A Bio-inspired Job Scheduling Algorithm for Monte Carlo Applications on a Computational Grid
Li, Yaohang
The algorithm schedules naturally parallel, compute-intensive Monte Carlo tasks to clustered computational farms; examples of such farms include large-scale computational grids with heterogeneous and dynamic performance. The paper is organized as follows: in Section II, we analyze the grid-based Monte Carlo paradigm and discuss its behavior.
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Examples range from simple geometries (e.g. a wing) to complex geometries (e.g. an aircraft in landing configuration) and to adding new features to a design. PDEs with uncertainty: the big move now is towards starting from a prior distribution and then using data to derive an improved a posteriori distribution.
Comparison of neutron diffusion and Monte Carlo simulations of a fission wave
Deinert, Mark
Keywords: fission wave; breed-burn; Monte Carlo; diffusion approximation. Simulations by many groups have shown that fission waves could form in uranium and potentially be used as the basis for breed-and-burn reactors.
PHYSICAL REVIEW C 87, 014617 (2013) Monte Carlo Hauser-Feshbach predictions of prompt fission γ rays
Danon, Yaron
2013-01-01
Prompt fission γ-ray emission from primary fission fragments is calculated for thermal-neutron-induced fission of 235U and 239Pu and for spontaneous fission of 252Cf using a Monte Carlo Hauser-Feshbach approach.
Quasi-Monte Carlo Simulation of the Light Environment of Plants Mikolaj Cieslak1,5
Lemieux, Christiane
Light regulates the growth and development of plants; consequently, simulation of the light environment of plants is of interest. A light environment model based on Monte Carlo (MC) methods accounts for the arrangement and optical properties of scene elements, both of which may affect growth.
Kidera, A
1995-01-01
A Monte Carlo simulation method for globular proteins, called extended-scaled-collective-variable (ESCV) Monte Carlo, is proposed. This method combines two Monte Carlo algorithms known as entropy-sampling and scaled-collective-variable algorithms. Entropy-sampling Monte Carlo is able to sample a large configurational space even in a disordered system that has a large number of potential barriers. In contrast, scaled-collective-variable Monte Carlo provides an efficient sampling for a system whose dynamics is highly cooperative. Because a globular protein is a disordered system whose dynamics is characterized by collective motions, a combination of these two algorithms could provide an optimal Monte Carlo simulation for a globular protein. As a test case, we have carried out an ESCV Monte Carlo simulation for a cell adhesive Arg-Gly-Asp-containing peptide, Lys-Arg-Cys-Arg-Gly-Asp-Cys-Met-Asp, and determined the conformational distribution at 300 K. The peptide contains a disulfide bridge between the two cysteine residues. This bond mimics the strong geometrical constraints that result from a protein's globular nature and give rise to highly cooperative dynamics. Computation results show that the ESCV Monte Carlo was not trapped at any local minimum and that the canonical distribution was correctly determined. PMID:7568238
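The entropy-sampling half of the ESCV scheme can be sketched on a toy state space (an illustrative sketch, not the authors' algorithm; a Wang-Landau-style incremental update stands in for the iterative entropy estimate, and the peptide system is replaced by labeled states with an energy bin function):

```python
import math
import random

def entropy_sampling(states, energy, n_bins, n_steps, seed=0):
    """Entropy-sampling sketch: moves are accepted with probability
    min(1, exp(S[E_old] - S[E_new])), where S is a running estimate of the
    entropy (log density of states).  Frequently visited energies accumulate
    weight, pushing the walk over potential barriers."""
    rng = random.Random(seed)
    S = [0.0] * n_bins                  # entropy estimate per energy bin
    x = rng.choice(states)
    for _ in range(n_steps):
        y = rng.choice(states)          # trial move
        if rng.random() < math.exp(min(0.0, S[energy(x)] - S[energy(y)])):
            x = y
        S[energy(x)] += 0.01            # incremental update of the estimate
    return S
```

In the full ESCV method the trial moves would be scaled-collective-variable displacements of the peptide rather than uniform state proposals.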
Quantum Monte Carlo: Direct calculation of corrections to trial wave functions and their energies
Anderson, James B.
A quantum Monte Carlo (QMC) method is reported for the direct calculation of corrections to trial wave functions and their energies, i.e., the difference between the true wave function and an analytic trial wave function Ψ0.
Accuracy of electronic wave functions in quantum Monte Carlo: The effect of high-order correlations
Nightingale, Peter
Compact and accurate wave functions can be constructed by quantum Monte Carlo methods; typically, these wave functions consist of a sum of terms including high-order correlations. (Received 24 February 1997; accepted 19 May 1997.)
DATA ASSIMILATION WITH MONTE CARLO MIXTURE KALMAN FILTER TOWARD SPACE WEATHER FORECASTING
Higuchi, Tomoyuki
Kalman-filter-based methods, however, do not allow for strongly nonlinear dynamics. We instead apply the Monte Carlo mixture Kalman filter (MCMKF). A basic idea of MCMKF is as follows: (1) we prepare a finite…
Monte Carlo and experimental internal radionuclide dosimetry in RANDO head phantom.
Ghahraman Asl, Ruhollah; Nasseri, Shahrokh; Parach, Ali Asghar; Zakavi, Seyed Rasoul; Momennezhad, Mehdi; Davenport, David
2015-09-01
Monte Carlo techniques are widely employed in internal dosimetry to obtain better estimates of absorbed dose distributions from irradiation sources in medicine. Accurate 3D absorbed dosimetry would be useful for risk assessment of inducing deterministic and stochastic biological effects for both therapeutic and diagnostic radiopharmaceuticals in nuclear medicine. The goal of this study was to experimentally evaluate the use of the Geant4 Application for Tomographic Emission (GATE) Monte Carlo package for 3D internal dosimetry using the head portion of the RANDO phantom. The GATE package (version 6.1) was used to create a voxel model of a human head phantom from computed tomography (CT) images. Matrix dimensions consisted of 319 × 216 × 30 voxels (0.7871 × 0.7871 × 5 mm(3)). Measurements were made using thermoluminescent dosimeters (TLD-100). One rod-shaped source with 94 MBq activity of (99m)Tc was positioned in the brain tissue of the posterior part of the human head phantom in slice number 2. The results of the simulation were compared with the measured mean absorbed dose per cumulative activity (S value). Absorbed dose was also calculated for each slice of the digital model of the head phantom, and dose volume histograms (DVHs) were computed to analyze the absolute and relative doses in each slice from the simulation data. The S values calculated by the GATE and TLD methods showed a significant correlation (correlation coefficient r² ≥ 0.99, p < 0.05) with each other. The maximum relative percentage differences were ≤14% for most cases. DVHs demonstrated dose decrease along the direction of movement toward the lower slices of the head phantom. Based on the results obtained from the GATE Monte Carlo package, it can be deduced that a complete dosimetry simulation study, from imaging to absorbed dose map calculation, is possible to execute in a single framework. PMID:26232251
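The cumulative dose volume histograms (DVHs) mentioned above are straightforward to compute from a voxel dose map (a generic sketch, not GATE code; the toy dose array used in the example is hypothetical):

```python
def cumulative_dvh(dose, n_bins=64):
    """Cumulative dose-volume histogram: for each dose level D, the fraction
    of voxels receiving a dose of at least D."""
    d = [v for row in dose for v in row]   # flatten a 2D dose slice
    dmax = max(d)
    levels = [dmax * k / (n_bins - 1) for k in range(n_bins)]
    frac = [sum(v >= lv for v in d) / len(d) for lv in levels]
    return levels, frac
```

The resulting curve starts at 1.0 (every voxel receives at least zero dose) and decreases monotonically.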
Monte Carlo simulation of dense polymer melts using event chain algorithms
NASA Astrophysics Data System (ADS)
Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan
2015-07-01
We propose an efficient Monte Carlo algorithm for the off-lattice simulation of dense hard sphere polymer melts using cluster moves, called event chains, which allow for a rejection-free treatment of the excluded volume. Event chains also allow for an efficient preparation of initial configurations in polymer melts. We parallelize the event chain Monte Carlo algorithm to further increase simulation speeds and suggest additional local topology-changing moves ("swap" moves) to accelerate equilibration. By comparison with other Monte Carlo and molecular dynamics simulations, we verify that the event chain algorithm reproduces the correct equilibrium behavior of polymer chains in the melt. By comparing intrapolymer diffusion time scales, we show that event chain Monte Carlo algorithms can achieve simulation speeds comparable to optimized molecular dynamics simulations. The event chain Monte Carlo algorithm exhibits Rouse dynamics on short time scales. In the absence of swap moves, we find reptation dynamics on intermediate time scales for long chains.
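The rejection-free character of event chains is easiest to see in one dimension (a toy sketch for hard rods on a ring, not the authors' dense-melt implementation; the displacement budget `ell` is passed from particle to particle at each collision):

```python
import random

def event_chain_move(pos, L, d, ell, rng):
    """One event-chain move for hard rods of length d on a ring of
    circumference L.  A displacement budget ell is spent by pushing a
    randomly chosen rod forward; when it touches its neighbor, the
    remaining budget transfers to that neighbor, so no move is ever
    rejected."""
    pos = sorted(pos)                     # spatial (cyclic) order
    n = len(pos)
    i = rng.randrange(n)
    remaining = ell
    while remaining > 0.0:
        j = (i + 1) % n
        gap = (pos[j] - pos[i] - d) % L   # free space ahead of rod i
        step = min(gap, remaining)
        pos[i] = (pos[i] + step) % L
        remaining -= step
        i = j                             # collision: pass the budget on
    return pos
```

In the polymer-melt setting the same idea applies to hard spheres in 3D, with bonded interactions handled alongside the excluded volume.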
Heavy deformed nuclei in the shell model Monte Carlo method
Alhassid, Y; Nakada, H
2007-01-01
We extend the shell model Monte Carlo approach to heavy deformed nuclei using a new proton-neutron formalism. The low excitation energies of such nuclei necessitate calculations at low temperatures for which a stabilization method is implemented in the canonical ensemble. We apply the method to study a well deformed rare-earth nucleus, 162Dy. The single-particle model space includes the 50-82 shell plus 1f_{7/2} orbital for protons and the 82-126 shell plus 0h_{11/2}, 1g_{9/2} orbitals for neutrons. We show that the spherical shell model reproduces well the rotational character of 162Dy within this model space. We also calculate the level density of 162Dy and find it to be in excellent agreement with the experimental level density, which we extract from several experiments.
Grain growth phenomena in films: a Monte Carlo approach
Srolovitz, D.J.
1986-01-01
A statistical model of microstructural evolution is developed for the evolution of grain structure during deposition. In cases where the atomic mobility on the surface greatly exceeds that in the bulk of the film, the bulk microstructure may be viewed as static while all of the evolution is controlled by the free surface. This leads naturally to a two-dimensional model of microstructural evolution. Since the surface is advancing at a constant rate during deposition there is a linear relationship between time in the two-dimensional model and depth in the film. A Monte Carlo computer simulation technique is described which models the evolution of microstructure in this way. Various driving forces are included. Simulated microstructures in the plane of the film and in the plane perpendicular to the free surface are shown.
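The two-dimensional evolution described above is commonly realized with a Potts-model Monte Carlo scheme (a generic zero-temperature sketch, not the paper's exact simulation; driving forces other than boundary energy are omitted):

```python
import random

def potts_grain_sweep(grid, rng):
    """One Monte Carlo sweep of a 2D Potts-model grain-growth sketch: each
    attempt picks a random site and tries to adopt the orientation of a
    random neighbor; the change is accepted if it does not raise the
    boundary energy (number of unlike neighbor pairs)."""
    n = len(grid)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nbrs = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                grid[i][(j - 1) % n], grid[i][(j + 1) % n]]
        new = rng.choice(nbrs)
        old = grid[i][j]
        de = sum(s != new for s in nbrs) - sum(s != old for s in nbrs)
        if de <= 0:                      # accept only energy-lowering flips
            grid[i][j] = new
    return grid
```

Because a site can only adopt an orientation already present among its neighbors, the set of distinct grain orientations can only shrink as the microstructure coarsens.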
Stationarity and source convergence in monte carlo criticality calculation.
Ueki, T.; Brown, F. B.
2002-01-01
In Monte Carlo (MC) criticality calculations, source error propagation through the stationary cycles and source convergence in the settling (inactive) cycles are both dominated by the dominance ratio (DR) of fission kernels, i.e., the ratio of the second-largest to largest eigenvalues. For symmetric two-fissile-component systems with DR close to unity, the extinction of fission source sites can occur in one of the components even when the initial source is symmetric and the number of histories per cycle is larger than one thousand. When such a system is made slightly asymmetric, the neutron effective multiplication factor (keff) at the inactive cycles does not reflect the convergence to the stationary source distribution. To overcome this problem, relative entropy (Kullback-Leibler distance) is applied to a slightly asymmetric two-fissile-component problem with a dominance ratio of 0.9925. Numerical results show that relative entropy is effective as a posterior diagnostic tool.
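The relative entropy diagnostic can be sketched directly from its definition (a minimal illustration, not the authors' code; `p` and `q` are fission-source histograms from two cycles, binned over mesh cells):

```python
import math

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler distance D(p||q) between two binned fission-source
    distributions; both histograms are normalized internally.  Bins where
    p is zero contribute nothing; a small eps guards against empty q bins."""
    ps, qs = float(sum(p)), float(sum(q))
    return sum((pi / ps) * math.log((pi / ps) / max(qi / qs, eps))
               for pi, qi in zip(p, q) if pi > 0)
```

Tracking this quantity between successive (or accumulated) cycle source distributions and waiting for it to level off gives a stationarity diagnostic of the kind described in the abstract.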
Monte Carlo simulations of air showers in atmospheric electric fields
Buitink, S; Falcke, H; Heck, D; Kuijpers, J
2009-01-01
The development of cosmic ray air showers can be influenced by atmospheric electric fields. Under fair weather conditions these fields are small, but the strong fields inside thunderstorms can have a significant effect on the electromagnetic component of a shower. Understanding this effect is particularly important for radio detection of air showers, since the radio emission is produced by the shower electrons and positrons. We perform Monte Carlo simulations to calculate the effects of different electric field configurations on the shower development. We find that the electric field becomes important for values of the order of 1 kV/cm. Not only can the energy distribution of electrons and positrons change significantly for such field strengths, it is also possible that runaway electron breakdown occurs at high altitudes, which is an important effect in lightning initiation.
RMC - A Monte Carlo code for reactor physics analysis
Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, A.; Yu, J.; Sun, J.; Fan, X.; Yu, G.
2013-07-01
A new Monte Carlo neutron transport code, RMC, is being developed by the Department of Engineering Physics, Tsinghua University, Beijing, as a tool for reactor physics analysis on high-performance computing platforms. To meet the requirements of reactor analysis, RMC now provides criticality calculation, fixed-source calculation, burnup calculation and kinetics simulation. Techniques for geometry treatment, a new burnup algorithm, source convergence acceleration, massive tallies and parallel calculation, and temperature-dependent cross-section processing have been researched and implemented in RMC to improve its efficiency. Validation results for criticality calculation, burnup calculation, source convergence acceleration, tally performance and parallel performance shown in this paper demonstrate the capabilities of RMC in dealing with reactor analysis problems with good performance. (authors)
MEAN FIELD AND MONTE CARLO MODELING OF MULTIBLOCK COPOLYMERS
K. RASMUSSEN; ET AL
2001-01-01
The authors discuss and apply extensions needed to treat multiblock copolymers within the mean field theoretical framework for microphase separation in diblock copolymer melts, originally due to Leibler. The mean field calculations are complemented by lattice Monte Carlo realizations using the bond fluctuation model. They find that the microphase separation transition occurs at larger χN as the number of blocks is increased beyond two (i.e., beyond diblock), and that the characteristic length scale of the emerging morphology decreases as the number of blocks increases. The latter prediction is in qualitative agreement with published experimental results due to Sontak and co-workers for model multiblock poly(styrene-isoprene) systems and recent results due to Hjelm and co-workers for a segmented poly(ester-urethane) relevant to Los Alamos interests. Additionally, the mean field predictions and bond fluctuation realizations yield consistent results.
Quantum states of confined hydrogen plasma species: Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Micca Longo, G.; Longo, S.; Giordano, D.
2015-12-01
The diffusion Monte Carlo method with symmetry-based state selection is used to calculate the quantum energy states of H2+ confined into potential barriers of atomic dimensions (a model for these ions in solids). Special solutions are employed, permitting one to obtain satisfactory results with rather simple native code. As a test case, the ²Πu and ²Πg states of H2+ ions under spherical confinement are considered. The results are interpreted using the correlation of H2+ states to atomic orbitals of H atoms lying on the confining surface, and perturbation calculations. The method is straightforwardly applied to cavities of any shape and to different hydrogen plasma species (at least one-electron ones, including H) for future studies with real crystal symmetries.
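A diffusion Monte Carlo calculation of this kind is easiest to illustrate on a much simpler system. The sketch below is a generic toy, not the authors' code: plain DMC with birth/death branching applied to the 1-D harmonic oscillator, whose exact ground-state energy is 0.5 in scaled units. The confined-H2+ problem adds the molecular potential and the confining-wall boundary condition on top of the same walker machinery.

```python
import numpy as np

def dmc_ground_state(n_walkers=2000, n_steps=1500, dt=0.01, seed=1):
    """Basic diffusion Monte Carlo for V(x) = x^2 / 2 (exact E0 = 0.5)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_walkers)          # initial walker ensemble
    e_ref = 0.5                             # reference (trial) energy
    trace = []
    for _ in range(n_steps):
        x = x + rng.normal(scale=np.sqrt(dt), size=x.size)   # free diffusion
        w = np.exp(-dt * (0.5 * x**2 - e_ref))               # branching weights
        copies = (w + rng.random(x.size)).astype(int)        # stochastic rounding
        x = np.repeat(x, copies)                             # birth/death step
        e_ref += 0.1 * np.log(n_walkers / max(x.size, 1))    # population feedback
        trace.append(e_ref)
    return float(np.mean(trace[n_steps // 2:]))              # equilibrated average
```

The population-feedback term keeps the walker count near `n_walkers`, and the averaged reference energy estimates the ground-state energy.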
Monte Carlo simulations and benchmark studies at CERN's accelerator chain
De Carvalho Saraiva, Joao Pedro
2015-01-01
Mixed particle and energy radiation fields present at the Large Hadron Collider (LHC) and its accelerator chain are responsible for failures on electronic devices located in the vicinity of the accelerator beam lines. These radiation effects on electronics and, more generally, the overall radiation damage issues have a direct impact on component and system lifetimes, as well as on maintenance requirements and radiation exposure to personnel who have to intervene and fix existing faults. The radiation environments and respective radiation damage issues along the CERN’s accelerator chain were studied in the framework of the CERN Radiation to Electronics (R2E) project and are hereby presented. The important interplay between Monte Carlo simulations and radiation monitoring is also highlighted.
Monte Carlo simulations in small animal PET imaging
NASA Astrophysics Data System (ADS)
Branco, Susana; Jan, Sébastien; Almeida, Pedro
2007-10-01
This work is based on a Positron Emission Tomography (PET) simulation system dedicated to small animal PET imaging. Geant4 Application for Tomographic Emission (GATE), a Monte Carlo simulation platform based on the Geant4 libraries, is well suited for modeling the microPET® FOCUS system and for implementing realistic phantoms, such as the MOBY phantom, and data maps from real examinations. The microPET® FOCUS simulation model in GATE has been validated for spatial resolution, counting rate performance, imaging contrast recovery and quantitative analysis. Results from realistic studies of the mouse body using ¹⁸F and [¹⁸F]FDG imaging protocols are presented. These simulations include the injection of realistic doses into the animal and realistic time framing. The results have shown that it is possible to simulate small animal PET acquisitions under realistic conditions, and are expected to be useful for improving the quantitative analysis in PET mouse body studies.
Improved version of the PHOBOS Glauber Monte Carlo
Loizides, C.; Nagle, J.; Steinberg, P.
2015-09-01
“Glauber” models are used to calculate geometric quantities in the initial state of heavy ion collisions, such as impact parameter, number of participating nucleons and initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and LHC, use Glauber Model calculations for various geometric observables for determination of the collision centrality. In this document, we describe the assumptions inherent to the approach, and provide an updated implementation (v2) of the Monte Carlo based Glauber Model calculation, which originally was used by the PHOBOS collaboration. The main improvement w.r.t. the earlier version (v1) (Alver et al. 2008) is the inclusion of Tritium, Helium-3, and Uranium, as well as the treatment of deformed nuclei and Glauber–Gribov fluctuations of the proton in p+A collisions. A users’ guide (updated to reflect changes in v2) is provided for running various calculations.
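The core of a Monte Carlo Glauber calculation fits in a short sketch. The following is a simplified stand-in, not the PHOBOS code: spherical Woods-Saxon nuclei (Pb-like parameters assumed here), a black-disk nucleon-nucleon cross section, and a participant count at a fixed impact parameter.

```python
import numpy as np

def sample_nucleus(A=208, R=6.62, a=0.546, rng=None):
    """Sample A nucleon positions from a Woods-Saxon radial density (rejection)."""
    rng = rng or np.random.default_rng()
    pts = []
    while len(pts) < A:
        r = rng.uniform(0.0, 3.0 * R)
        # accept with probability proportional to r^2 / (1 + exp((r - R)/a))
        if rng.random() * (3.0 * R) ** 2 < r**2 / (1.0 + np.exp((r - R) / a)):
            v = rng.normal(size=3)
            pts.append(r * v / np.linalg.norm(v))    # isotropic direction
    return np.array(pts)

def n_participants(b, sigma_nn_fm2=6.4, A=208, seed=0):
    """Participant count at impact parameter b [fm], black-disk sigma_nn (64 mb)."""
    rng = np.random.default_rng(seed)
    na = sample_nucleus(A, rng=rng)
    nb = sample_nucleus(A, rng=rng) + [b, 0.0, 0.0]          # shift target nucleus
    d2 = ((na[:, None, :2] - nb[None, :, :2]) ** 2).sum(-1)  # transverse dist^2
    hit = d2 < sigma_nn_fm2 / np.pi                          # collision condition
    return int(hit.any(axis=1).sum() + hit.any(axis=0).sum())
```

Averaging `n_participants` over many events at sampled impact parameters gives the centrality-dependent Npart distributions the document describes; eccentricity and Ncoll follow from the same nucleon positions.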
Application of Monte Carlo method to laser coding detection
NASA Astrophysics Data System (ADS)
Wang, Wei; Li, Wei; Song, Xiao-tong; Yu, Tao
2015-10-01
Based on the requirements of engineering design and on improving the detection ability of laser detectors, the Monte Carlo method is adopted to analyze and compare the statistical distributions of the detection probability, the false-alarm probability, the signal-to-noise ratio and the threshold value of the detecting circuit for laser detectors using pseudo-random code pulse and equidistant pulse systems. The simulation results show that the signal-to-noise ratio of the pseudo-random code pulse system is about three times that of the equidistant pulse system, that its detection threshold is about twice that of the equidistant pulse system, and that it surpasses the equidistant pulse system in the consistency of the simulation curves. It can be concluded that a laser detector using a pseudo-random code pulse system outperforms one using an equidistant pulse system.
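A Monte Carlo threshold study of this kind can be sketched generically. The receiver model below is an assumption (a correlation detector in additive white Gaussian noise with a hypothetical 31-chip ±1 pseudo-random code), not the paper's detector model; it estimates detection and false-alarm probabilities by direct trial counting, which are the statistics the abstract compares across pulse systems.

```python
import numpy as np

def mc_detection(code, amp=1.0, noise=1.0, trials=20000, seed=3):
    """Monte Carlo estimate of (P_detect, P_false) for a correlation receiver."""
    rng = np.random.default_rng(seed)
    code = np.asarray(code, float)
    threshold = 0.5 * amp * code @ code             # midway decision level
    n = rng.normal(scale=noise, size=(trials, code.size))
    stat_signal = (amp * code + n) @ code           # pulse train present
    stat_noise = n @ code                           # noise only
    return (float((stat_signal > threshold).mean()),
            float((stat_noise > threshold).mean()))

prn = np.random.default_rng(0).choice([-1.0, 1.0], size=31)  # hypothetical PRN code
p_detect, p_false = mc_detection(prn)
```

Sweeping the threshold, the pulse amplitude, or the code length reproduces the probability-versus-SNR trade-offs studied in the paper.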
Monte Carlo reactor calculation with substantially reduced number of cycles
Lee, M. J.; Joo, H. G.; Lee, D.; Smith, K.
2012-07-01
A new Monte Carlo (MC) eigenvalue calculation scheme that substantially reduces the number of cycles is introduced with the aid of coarse mesh finite difference (CMFD) formulation. First, it is confirmed in terms of pin power errors that using extremely many particles resulting in short active cycles is beneficial even in the conventional MC scheme, although wasted operations in inactive cycles cannot be reduced with more particles. A CMFD-assisted MC scheme is introduced as an effort to reduce the number of inactive cycles, and the fast convergence behavior and reduced inter-cycle effect of the CMFD-assisted MC calculation are investigated in detail. As a practical means of providing a good initial fission source distribution, an assembly-based few-group condensation and homogenization scheme is introduced, and it is shown that efficient MC eigenvalue calculations with fewer than 20 total cycles (including inactive cycles) are possible for large power reactor problems. (authors)
OBJECT KINETIC MONTE CARLO SIMULATIONS OF RADIATION DAMAGE IN TUNGSTEN
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.
2015-04-16
We used our recently developed lattice-based object kinetic Monte Carlo code, KSOME [1], to carry out simulations of radiation damage in bulk tungsten at temperatures of 300 and 2050 K for various dose rates. Displacement cascades generated from molecular dynamics (MD) simulations for PKA energies of 60, 75 and 100 keV provided the residual point defect distributions. It was found that the number density of vacancies in the simulation box does not change with dose rate, while the number density of vacancy clusters slightly decreases with dose rate, indicating that bigger clusters are formed at larger dose rates. At 300 K, although the average vacancy cluster size increases slightly, the vast majority of vacancies exist as mono-vacancies. At 2050 K no accumulation of defects was observed during irradiation over a wide range of dose rates for all PKA energies studied in this work.
An improved kinetic Monte Carlo approach for epitaxial submonolayer growth
Robert Deak; Zoltan Neda; Peter B. Barna
2007-07-05
Two-component submonolayer growth on a triangular lattice is qualitatively studied by kinetic Monte Carlo techniques. The hopping barrier governing surface diffusion of the atoms is estimated with an improved formula using realistic pair interaction potentials. Realistic degrees of freedom enhancing the surface diffusion of atoms are also introduced. The main advantages of the presented technique are the reduced number of free parameters and the clear diffusion-activated mechanism for the segregation of different types of atoms. The potential of this method is exemplified by reproducing (i) vacancy and stacking fault related phase-boundary creation and dynamics; and (ii) a special co-deposition and segregation process in which the segregated atoms of the second component surround the islands formed by the first type of atoms.
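A single step of such a kinetic Monte Carlo scheme reduces to rate bookkeeping. The sketch below is generic (BKL/Gillespie-style event selection with Arrhenius rates from hypothetical hopping barriers), not the authors' improved barrier formula:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def kmc_step(barriers_eV, T=300.0, nu0=1e13, rng=None):
    """Select one hop event and a waiting time from Arrhenius rates."""
    rng = rng or np.random.default_rng()
    rates = nu0 * np.exp(-np.asarray(barriers_eV, float) / (K_B * T))
    total = rates.sum()
    event = rng.choice(rates.size, p=rates / total)   # rate-proportional pick
    dt = -np.log(rng.random()) / total                # exponential waiting time
    return int(event), float(dt)
```

At 300 K a 0.2 eV barrier difference changes a hop rate by roughly exp(0.2 / 0.026) ≈ 2000, so low-barrier events dominate the trajectory; this is why barrier estimates control the simulated segregation dynamics.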
Monte Carlo simulation of linac irradiation with dynamic wedges.
Chang, Kwo-Ping; Chen, Lu-Yu; Chien, Yu-Huang
2014-11-01
This study aims to simulate the dose distributions of a LINAC with dynamic wedges (DWs) under various field sizes and wedge angles using the BEAMnrc code with the DYNJAWS component module. These were compared with those calculated by the treatment planning system (TPS) and with measured data. All percentage depth doses (PDDs) were found to be in good agreement between TPS, Monte Carlo (MC) and measurements made in open fields and fields with DWs. For dose profiles, compared with MC and the measurements, TPS gives reliable results for large field sizes (>10 × 10 cm²) but significant errors for small field sizes (5 × 5 cm²). The entrance surface doses calculated by TPS were found to be significantly overestimated. For depths deeper than 0.5 cm, TPS yields PDDs in agreement with MC simulations. PMID:25004937
A simple eigenfunction convergence acceleration method for Monte Carlo
Booth, Thomas E
2010-11-18
Monte Carlo transport codes typically use a power iteration method to obtain the fundamental eigenfunction. The standard convergence rate for the power iteration method is the ratio of the first two eigenvalues, that is, k2/k1. Modifications to the power method have accelerated the convergence by explicitly calculating the subdominant eigenfunctions as well as the fundamental. Calculating the subdominant eigenfunctions requires using particles of negative and positive weights and appropriately canceling the negative and positive weight particles. Incorporating both negative weights and ± weight cancellation requires a significant change to current transport codes. This paper presents an alternative convergence acceleration method that does not require modifying the transport codes to deal with the problems associated with tracking and canceling particles of ± weight. Instead, only positive weights are used in the acceleration method.
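The k2/k1 convergence rate is easy to see on a toy fission matrix. The 2×2 matrix below is a hypothetical construction with eigenvalues k1 = 1.0 and k2 = 0.9, so the power-iteration error should shrink by a factor of 0.9 per cycle:

```python
import numpy as np

# Hypothetical 2x2 fission matrix with eigenvalues k1 = 1.0, k2 = 0.9.
F = np.array([[0.99, 0.09],
              [0.01, 0.91]])

w, V = np.linalg.eig(F)
fundamental = V[:, np.argmax(w)]
fundamental = fundamental / fundamental.sum()   # exact eigenfunction, (0.9, 0.1)

phi = np.array([1.0, 0.0])                      # poor initial source guess
errors = []
for cycle in range(60):
    phi = F @ phi
    phi = phi / phi.sum()                       # renormalize each cycle
    errors.append(np.abs(phi - fundamental).max())
```

Successive error ratios `errors[c + 1] / errors[c]` converge to k2/k1 = 0.9, which is why high-dominance-ratio problems need many inactive cycles and why removing the subdominant modes accelerates convergence.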
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document, and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
Quantum Entanglement of Interacting Fermions in Monte Carlo
Tarun Grover
2013-08-06
Given a specific interacting quantum Hamiltonian in a general spatial dimension, can one access its entanglement properties, such as the entanglement entropy corresponding to the ground state wavefunction? Even though progress has been made in addressing this question for interacting bosons and quantum spins, as yet there exist no corresponding methods for interacting fermions. Here we show that the entanglement structure of interacting fermionic Hamiltonians has a particularly simple form -- the interacting reduced density matrix can be written as a sum of operators that describe free fermions. This decomposition allows one to calculate the Renyi entropies for Hamiltonians which can be simulated via Determinantal Quantum Monte Carlo, while employing the efficient techniques hitherto available only for free fermion systems. The method presented works for the ground state, as well as for the thermally averaged reduced density matrix.
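The free-fermion machinery the decomposition reduces to is standard and compact: for a free-fermion reduced density matrix, the Renyi entropies follow from the eigenvalues of the subsystem correlation matrix. The sketch below applies that formula to a half-filled tight-binding chain (a conventional benchmark, assumed here; the paper's contribution is obtaining such correlation-matrix descriptions for interacting Hamiltonians inside determinantal QMC):

```python
import numpy as np

def renyi_entropy(C_sub, n=2):
    """Renyi entropy S_n from the subsystem correlation matrix of free fermions."""
    nu = np.clip(np.linalg.eigvalsh(C_sub), 1e-12, 1 - 1e-12)
    return float(np.log(nu**n + (1.0 - nu)**n).sum() / (1.0 - n))

def chain_correlations(L):
    """C_ij = <c_i^dag c_j> for L contiguous sites of a half-filled infinite chain."""
    d = np.subtract.outer(np.arange(L), np.arange(L))
    C = np.full((L, L), 0.5)                 # diagonal: density 1/2
    off = d != 0
    C[off] = np.sin(0.5 * np.pi * d[off]) / (np.pi * d[off])
    return C

S2_small = renyi_entropy(chain_correlations(20))
S2_large = renyi_entropy(chain_correlations(40))
```

For this critical chain the second Renyi entropy grows logarithmically with the subsystem size, so `S2_large` exceeds `S2_small`; the same eigenvalue formula is reused for every free-fermion term of the decomposition.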
Go with the Winners: a General Monte Carlo Strategy
P. Grassberger
2002-01-17
We describe a general strategy for sampling configurations from a given distribution, NOT based on the standard Metropolis (Markov chain) strategy. It uses the fact that nontrivial problems in statistical physics are high dimensional and often close to Markovian. Therefore, configurations are built up in many, usually biased, steps. Due to the bias, each configuration carries its weight which changes at every step. If the bias is close to optimal, all weights are similar and importance sampling is perfect. If not, "population control" is applied by cloning/killing partial configurations with too high/low weight. This is done such that the final (weighted) distribution is unbiased. We apply this method (which is also closely related to diffusion type quantum Monte Carlo) to several problems of polymer statistics, reaction-diffusion models, sequence alignment, and percolation.
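A minimal concrete instance of this strategy is weighted growth of self-avoiding walks with weight-proportional resampling (a sequential-importance-resampling sketch, not Grassberger's full PERM scheme). Each walk's incremental weight is its number of free continuations; "population control" is done by resampling walkers proportional to that weight, and the product of population-averaged weights is an unbiased estimate of the number of walks:

```python
import numpy as np

def count_saws(n_steps, pop=2000, seed=7):
    """Estimate the number of n-step self-avoiding walks on Z^2 by
    weighted growth with resampling ('go with the winners')."""
    rng = np.random.default_rng(seed)
    walks = [[(0, 0)] for _ in range(pop)]
    estimate = 1.0
    for _ in range(n_steps):
        inc, grown = [], []
        for walk in walks:
            x, y = walk[-1]
            moves = [p for p in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
                     if p not in walk]
            inc.append(len(moves))             # incremental weight (0 = trapped)
            grown.append(walk + [moves[rng.integers(len(moves))]]
                         if moves else walk)
        inc = np.array(inc, float)
        estimate *= inc.mean()                 # unbiased population factor
        # population control: clone high-weight walks, kill trapped ones
        idx = rng.choice(pop, size=pop, p=inc / inc.sum())
        walks = [list(grown[i]) for i in idx]
    return estimate
```

For short walks the exact counts are known (4, 12, 36, 100, 284 for n = 1..5), which makes the estimator easy to check; the same clone/kill pattern carries over to the reaction-diffusion and percolation applications the abstract lists.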
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2013-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
A Comparison of Experimental EPMA Data and Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2004-01-01
Monte Carlo (MC) modeling shows excellent prospects for simulating electron scattering and x-ray emission from complex geometries, and can be compared to experimental measurements using electron-probe microanalysis (EPMA) and φ(ρz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, over a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been used to develop φ(ρz) correction algorithms. The accuracy of MC calculations obtained using the NIST, WinCasino, WinXray, and Penelope MC packages will be evaluated relative to these experimental data. There is additional information contained in the extended abstract.
Adaptive domain decomposition for Monte Carlo simulations on parallel processors
NASA Technical Reports Server (NTRS)
Wilmoth, Richard G.
1990-01-01
A method is described for performing direct simulation Monte Carlo (DSMC) calculations on parallel processors using adaptive domain decomposition to distribute the computational work load. The method has been implemented on a commercially available hypercube and benchmark results are presented which show the performance of the method relative to current supercomputers. The problems studied were simulations of equilibrium conditions in a closed, stationary box, a two-dimensional vortex flow, and the hypersonic, rarefied flow in a two-dimensional channel. For these problems, the parallel DSMC method ran 5 to 13 times faster than on a single processor of a Cray-2. The adaptive decomposition method worked well in uniformly distributing the computational work over an arbitrary number of processors and reduced the average computational time by over a factor of two in certain cases.
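The load-balancing idea generalizes beyond DSMC and is simple to state: split the cells into contiguous blocks whose summed per-cell work (e.g., simulated-particle counts) is as equal as possible, and re-split whenever the monitored work drifts. A 1-D sketch under those assumptions (not the paper's hypercube implementation):

```python
import numpy as np

def decompose(work_per_cell, n_procs):
    """Partition a 1-D cell chain into n_procs contiguous blocks of near-equal work."""
    work = np.asarray(work_per_cell, float)
    cum = np.cumsum(work)
    targets = cum[-1] * np.arange(1, n_procs) / n_procs   # ideal cumulative cuts
    cuts = np.searchsorted(cum, targets) + 1
    return np.split(np.arange(work.size), cuts)

# Adaptivity: when cells near a flow feature accumulate particles,
# simply re-run the decomposition with the updated work estimates.
balanced = decompose([1, 1, 1, 1], 2)        # -> blocks [0, 1] and [2, 3]
skewed = decompose([8, 1, 1, 1, 1, 4], 2)    # heavy first cell gets its own block
```

Repartitioning every few time steps with measured per-cell particle counts is what keeps the processors uniformly loaded as the flow field evolves.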
Monte Carlo simulation of electron acceleration in modified relativistic shocks
NASA Technical Reports Server (NTRS)
Ellison, Donald C.
1992-01-01
We give a brief review of Monte Carlo simulations of nonlinear Fermi shock acceleration and then give new results on electron acceleration in SNRs and in relativistic parallel shocks. The acceleration of low energy electrons in shocks is poorly understood, but even when energetic electrons are considered, where electron and proton scattering should be qualitatively similar, dramatic differences result between electron and proton acceleration in relativistic shocks. If the shocked plasma is a mixture of electrons and protons, the electrons are accelerated much less efficiently than protons. We predict that only e⁻-e⁺ pair dominated plasmas can produce significant radio emission in relativistic flows if the standard Fermi mechanism operates in parallel shocks.