Science.gov

Sample records for monte carlo stochastic

  1. Stabilized multilevel Monte Carlo method for stiff stochastic differential equations

    SciTech Connect

    Abdulle, Assyr; Blumenthal, Adrian

    2013-10-15

    A multilevel Monte Carlo (MLMC) method for mean-square-stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit methods deteriorates because the time-step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while the computational algorithm remains fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
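    The telescoping estimator at the heart of MLMC can be sketched compactly. The snippet below is a minimal illustration, not the stabilized method of this record: it uses a plain Euler-Maruyama integrator on geometric Brownian motion, and the drift, volatility, level count and per-level sample counts are illustrative assumptions.

```python
import numpy as np

def mlmc_mean(rng, levels=4, n0=20000, x0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = mu*X dt + sigma*X dW."""
    total = 0.0
    for l in range(levels + 1):
        n = max(n0 >> l, 1000)           # fewer samples on the finer (costlier) levels
        m = 2 ** l                       # fine grid at level l: m time steps
        dt = T / m
        dW = rng.normal(0.0, np.sqrt(dt), size=(n, m))
        xf = np.full(n, x0)
        for k in range(m):               # fine path, Euler-Maruyama
            xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
        if l == 0:
            total += xf.mean()           # coarsest level: plain Monte Carlo mean
        else:
            mc = m // 2
            dtc = T / mc
            dWc = dW[:, 0::2] + dW[:, 1::2]  # coarse path reuses the same Brownian increments
            xc = np.full(n, x0)
            for k in range(mc):
                xc = xc + mu * xc * dtc + sigma * xc * dWc[:, k]
            total += (xf - xc).mean()        # telescoping correction E[P_l - P_{l-1}]
    return total
```

    Coupling the fine and coarse paths through shared Brownian increments is what makes the level corrections low-variance, so most samples can be spent on the cheap coarse levels.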

  2. Quantum Monte Carlo with a stochastic Poisson solver

    NASA Astrophysics Data System (ADS)

    Das, Dyutiman

    Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually QMC has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall QMC scheme. We have developed a modified "Walk On Spheres" (WOS) algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations typical of QMC algorithms. This stochastically obtained potential can be easily incorporated within popular QMC techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor. We then apply this method to calculate the singlet-triplet splitting in a realistic heterostructure device. We also outline some other prospective applications for spherical quantum dots where the dielectric mismatch becomes an important issue for the addition energy spectrum.
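    A bare-bones "Walk on Spheres" solver for the Laplace equation in the unit disk, a far simpler setting than the Green's-function WOS variant of this record, might look as follows; the boundary data, tolerance and walk count are illustrative assumptions.

```python
import math, random

def walk_on_spheres(x, y, g, eps=1e-3, rng=random):
    """One WOS walk for the Laplace equation in the unit disk:
    jump to a uniform point on the largest circle centered at the
    current point that stays inside the domain, until within eps of
    the boundary; then evaluate the boundary data g there."""
    while True:
        r = 1.0 - math.hypot(x, y)       # distance to the unit-circle boundary
        if r < eps:
            s = math.hypot(x, y)
            return g(x / s, y / s)       # project to the nearest boundary point
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)

def wos_estimate(x, y, g, nwalks=4000, seed=1):
    """Average many independent walks to estimate the harmonic solution at (x, y)."""
    rng = random.Random(seed)
    return sum(walk_on_spheres(x, y, g, rng=rng) for _ in range(nwalks)) / nwalks
```

    With boundary data g(x, y) = x, the harmonic solution is u(x, y) = x, so the estimate at (0.3, 0) should hover around 0.3; each walk touches no grid, which is the property that makes WOS attractive inside a QMC loop.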

  3. Semi-stochastic full configuration interaction quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Holmes, Adam; Petruzielo, Frank; Khadilkar, Mihir; Changlani, Hitesh; Nightingale, M. P.; Umrigar, C. J.

    2012-02-01

    In the recently proposed full configuration interaction quantum Monte Carlo (FCIQMC) [1,2], the ground state is projected out stochastically, using a population of walkers each of which represents a basis state in the Hilbert space spanned by Slater determinants. The infamous fermion sign problem manifests itself in the fact that walkers of either sign can be spawned on a given determinant. We propose an improvement on this method in the form of a hybrid stochastic/deterministic technique, which we expect will improve the efficiency of the algorithm by ameliorating the sign problem. We test the method on atoms and molecules, e.g., carbon, the carbon dimer, the N2 molecule, and stretched N2. [1] Fermion Monte Carlo without fixed nodes: a Game of Life, death and annihilation in Slater Determinant space. George Booth, Alex Thom, Ali Alavi. J. Chem. Phys. 131, 050106 (2009). [2] Survival of the fittest: Accelerating convergence in full configuration-interaction quantum Monte Carlo. Deidre Cleland, George Booth, and Ali Alavi. J. Chem. Phys. 132, 041103 (2010).

  4. Optimization of Monte Carlo transport simulations in stochastic media

    SciTech Connect

    Liang, C.; Ji, W.

    2012-07-01

    This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, such as Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest-neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of stochastic media packed with both mono-sized and poly-sized spheres. A fast neutron-tracking method is then developed to optimize the next-sphere boundary search in the radiation transport procedure. To investigate their accuracy and efficiency, the developed sphere-packing and neutron-tracking methods are implemented in an in-house continuous-energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with MCNP benchmark calculations for the same problem indicates that the new methods offer considerably higher computational efficiency. (authors)

  5. Longitudinal functional principal component modeling via Stochastic Approximation Monte Carlo

    PubMed Central

    Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.

    2010-01-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors perform model averaging using a Bayesian formulation. A relatively straightforward reversible-jump Markov chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. To overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem; this method has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible-jump methods and SAMC for hierarchical longitudinal functional data is simplified by a polar-coordinate representation of the principal components. The approach is easy to implement, does well in simulated data in determining the distribution of the number of principal components, and has good frequentist estimation properties. Empirical applications are also presented. PMID:20689648

  6. Monte Carlo solution for uncertainty propagation in particle transport with a stochastic Galerkin method

    SciTech Connect

    Franke, B. C.; Prinja, A. K.

    2013-07-01

    The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)

  7. Monte Carlo Hybrid Applied to Binary Stochastic Mixtures

    Energy Science and Technology Software Center (ESTSC)

    2008-08-11

    The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.

  8. Semi-stochastic full configuration interaction quantum Monte Carlo: Developments and application.

    PubMed

    Blunt, N S; Smart, Simon D; Kersten, J A F; Spencer, J S; Booth, George H; Alavi, Ali

    2015-05-14

    We expand upon the recent semi-stochastic adaptation to full configuration interaction quantum Monte Carlo (FCIQMC). We present an alternate method for generating the deterministic space without a priori knowledge of the wave function and present stochastic efficiencies for a variety of both molecular and lattice systems. The algorithmic details of an efficient semi-stochastic implementation are presented, with particular consideration given to the effect that the adaptation has on parallel performance in FCIQMC. We further demonstrate the benefit for calculation of reduced density matrices in FCIQMC through replica sampling, where the semi-stochastic adaptation seems to have even larger efficiency gains. We then combine these ideas to produce explicitly correlated corrected FCIQMC energies for the beryllium dimer, for which stochastic errors on the order of wavenumber accuracy are achievable. PMID:25978883

  9. Semi-stochastic full configuration interaction quantum Monte Carlo: Developments and application

    SciTech Connect

    Blunt, N. S.; Kersten, J. A. F.; Smart, Simon D.; Spencer, J. S.; Booth, George H.; Alavi, Ali

    2015-05-14

    We expand upon the recent semi-stochastic adaptation to full configuration interaction quantum Monte Carlo (FCIQMC). We present an alternate method for generating the deterministic space without a priori knowledge of the wave function and present stochastic efficiencies for a variety of both molecular and lattice systems. The algorithmic details of an efficient semi-stochastic implementation are presented, with particular consideration given to the effect that the adaptation has on parallel performance in FCIQMC. We further demonstrate the benefit for calculation of reduced density matrices in FCIQMC through replica sampling, where the semi-stochastic adaptation seems to have even larger efficiency gains. We then combine these ideas to produce explicitly correlated corrected FCIQMC energies for the beryllium dimer, for which stochastic errors on the order of wavenumber accuracy are achievable.

  10. Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2013-04-01

    The stochastic volatility (SV) model is a volatility model that infers the latent volatility of asset returns. Bayesian inference of the SV model is performed by the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. We then calculate the accuracy of the volatility measurement using the realized volatility as a proxy for the true volatility and compare the SV model with the GARCH model, another common volatility model. Using the accuracy calculated with the realized volatility, we find that empirically the SV model performs better than the GARCH model.
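    The core HMC move, a leapfrog trajectory followed by a Metropolis accept/reject step on the Hamiltonian, can be sketched for a generic one-dimensional target. A standard normal stands in here for the SV model's volatility variables, and the step size and trajectory length are illustrative assumptions.

```python
import numpy as np

def hmc_sample(logp_grad, logp, x0, nsamples=2000, eps=0.2, nleap=10, seed=2):
    """Hybrid (Hamiltonian) Monte Carlo with a leapfrog integrator
    for a 1-D target with log-density logp and gradient logp_grad."""
    rng = np.random.default_rng(seed)
    x = x0
    out = []
    for _ in range(nsamples):
        p = rng.normal()                          # fresh Gaussian momentum
        xn, pn = x, p
        pn = pn + 0.5 * eps * logp_grad(xn)       # half step in momentum
        for _ in range(nleap - 1):
            xn = xn + eps * pn                    # full step in position
            pn = pn + eps * logp_grad(xn)         # full step in momentum
        xn = xn + eps * pn
        pn = pn + 0.5 * eps * logp_grad(xn)       # final half step
        # Metropolis accept/reject on the change in the Hamiltonian
        dH = (logp(xn) - 0.5 * pn * pn) - (logp(x) - 0.5 * p * p)
        if np.log(rng.uniform()) < dH:
            x = xn
        out.append(x)
    return np.array(out)
```

    For the standard normal target (logp = -x^2/2), the sampled mean and variance should come out near 0 and 1; for the SV model, x would be the vector of latent log-volatilities and logp the model's posterior.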

  11. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo.

    PubMed

    Golightly, Andrew; Wilkinson, Darren J

    2011-12-01

    Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka-Volterra system and a prokaryotic auto-regulatory network. PMID:23226583

  12. Quantification of stochastic uncertainty propagation for Monte Carlo depletion methods in reactor analysis

    NASA Astrophysics Data System (ADS)

    Newell, Quentin Thomas

    The Monte Carlo method provides powerful geometric modeling capabilities for large problem domains in 3-D; therefore, the Monte Carlo method is becoming popular for 3-D fuel depletion analyses to compute quantities of interest in spent nuclear fuel, including isotopic compositions. The Monte Carlo approach has not been fully embraced due to unresolved issues concerning the effect of Monte Carlo uncertainties on the predicted results. Use of the Monte Carlo method to solve the neutron transport equation introduces stochastic uncertainty in the computed fluxes. These fluxes are used to collapse cross sections, estimate power distributions, and deplete the fuel within depletion calculations; therefore, the predicted number densities contain random uncertainties from the Monte Carlo solution. These uncertainties can be compounded in time because of the extrapolative nature of depletion and decay calculations. The objective of this research was to quantify the stochastic uncertainty propagation of the flux uncertainty, introduced by the Monte Carlo method, to the number densities for the different isotopes in spent nuclear fuel due to multiple depletion time steps. The research derived a formula that calculates the standard deviation in the nuclide number densities based on propagating the statistical uncertainty introduced when using coupled Monte Carlo depletion computer codes. The research was developed with the use of the TRITON/KENO sequence of the SCALE computer code. The linear uncertainty nuclide group approximation (LUNGA) method developed in this research approximated the variance of the ?N term, which is the variance in the flux shape due to uncertainty in the calculated nuclide number densities. Three different example problems were used in this research to calculate the standard deviation in the nuclide number densities using the LUNGA method.
The example problems showed that the LUNGA method is capable of calculating the standard deviation of the nuclide number densities and kinf. Examples 2 and 3 demonstrated a percent difference of much less than 1 percent between the LUNGA and exact methods for calculating the standard deviation in the nuclide number densities. The LUNGA method was capable of calculating the variance of the ?N term in Example 2, but not in Example 3. However, both examples showed that the contribution from the ?N term to the variance in the number densities is minute compared to the contributions from the ?S term and from the variances and covariances of the number densities themselves. This research concluded with validation and verification of the LUNGA method. It demonstrated that the LUNGA method and the statistics of 100 different Monte Carlo simulations agreed with 99 percent confidence in calculating the standard deviation in the number densities and kinf based on propagating the statistical uncertainty in the flux introduced by using the Monte Carlo method in coupled Monte Carlo depletion calculations.

  13. Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2014-03-01

    The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimations of the RSV model.

  14. Monte Carlo neutral particle transport through a binary stochastic mixture using chord length sampling

    NASA Astrophysics Data System (ADS)

    Donovan, Timothy J.

    A Monte Carlo algorithm is developed to estimate the ensemble-averaged behavior of neutral particles within a binary stochastic mixture. A special-case stochastic mixture is examined, in which non-overlapping spheres of constant radius are uniformly mixed in a matrix material. Spheres are chosen to represent the stochastic volumes due to their geometric simplicity and because spheres are a common approximation to a large number of applications. The boundaries of the mixture are impenetrable, meaning that spheres in the stochastic mixture cannot be assumed to overlap the mixture boundaries. The algorithm employs a method called Limited Chord Length Sampling (LCLS). While in the matrix material, LCLS uses chord-length sampling to sample the distance to the next stochastic interface. After a surface crossing into a stochastic sphere, transport is treated explicitly until the particle exits or is killed. This capability eliminates the need to explicitly model a representation of the random geometry of the mixture. The algorithm is first proposed and tested against benchmark results for a two-dimensional, fixed-source model using stand-alone Monte Carlo codes. The algorithm is then implemented and tested in a test version of the Los Alamos Monte Carlo N-Particle code MCNP. This prototype MCNP version has the capability to calculate LCLS results for both fixed-source and multiplied-source (i.e., eigenvalue) problems. Problems analyzed with MCNP range from simple binary mixtures, designed to test LCLS over a range of optical thicknesses, to a detailed High Temperature Gas Reactor fuel element, which tests the value of LCLS in a current problem of practical significance. Comparisons of LCLS and benchmark results include both accuracy and efficiency comparisons.
To ensure conservative efficiency comparisons, the statistical basis for the benchmark technique is derived and a formal method for optimizing the benchmark calculations is developed. LCLS results are compared to results obtained through other methods to gauge accuracy and efficiency. The LCLS model is efficient and provides a high degree of accuracy through a wide range of conditions.
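    The elementary move in chord-length sampling, drawing the distance to the next sphere interface from an exponential with the mean matrix chord length, can be sketched as below. This is a toy, not the LCLS algorithm of the record: it only estimates the chance that a ray of fixed length through the matrix avoids all spheres, using the standard mean-chord expression 4R(1-f)/(3f) for spherical inclusions; the radius and packing fraction are illustrative assumptions.

```python
import math, random

def transmit_no_sphere(t, radius, packing, nhist=20000, seed=3):
    """Chord-length-sampling sketch: probability that a ray of length t
    through the matrix of a binary stochastic mixture never hits a sphere.
    The distance to the next sphere interface is sampled from an exponential
    distribution whose mean is the matrix chord length 4R(1-f)/(3f)."""
    lam = 4.0 * radius * (1.0 - packing) / (3.0 * packing)
    rng = random.Random(seed)
    # A history "transmits" if its sampled interface distance exceeds t
    hits = sum(1 for _ in range(nhist) if rng.expovariate(1.0 / lam) > t)
    return hits / nhist
```

    Because the interface distance is exponential, the estimate should agree with the closed form exp(-t/lambda), which is a convenient sanity check before adding collisions and in-sphere transport.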

  15. Can Markov chain Monte Carlo be usefully applied to stochastic processes with hidden birth times?

    NASA Astrophysics Data System (ADS)

    Renshaw, Eric; Gibson, Gavin J.

    1998-12-01

    This paper examines the power of Markov chain Monte Carlo methods to tackle the `inverse' problem of stochastic population modelling. Namely, given a partial series of event-time observations, believed governed by a known process, what range of model parameters might plausibly explain it? This problem is first introduced in the simple context of an immigration-death process, in which only deaths are recorded, and is then extended through the introduction of birth, standard and power-law logistic growth, and an `odd-even effects' quantum optics model. The results show that simple Metropolis-Hastings samplers can be applied to provide useful information on models containing a high degree of complexity. Specific problems highlighted include the potentially poor mixing qualities of simple Metropolis-Hastings samplers, and the substantial bias that heavily non-symmetric full-likelihood surfaces may inflict on their associated marginal distributions.
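    A random-walk Metropolis-Hastings sampler of the kind discussed here can be illustrated on a deliberately simple inverse problem: inferring the rate of an exponential waiting-time model, a toy stand-in for the immigration-death setting, under a flat prior on the rate. The step size and chain length are illustrative assumptions.

```python
import math, random

def mh_rate_posterior(times, nsamples=5000, step=0.2, seed=4):
    """Random-walk Metropolis-Hastings for the rate of an exponential
    waiting-time model, given observed inter-event times and a flat prior."""
    rng = random.Random(seed)
    n, s = len(times), sum(times)
    def loglik(rate):
        # Exponential likelihood: n*log(rate) - rate * sum(times)
        return n * math.log(rate) - rate * s
    rate = 1.0
    chain = []
    for _ in range(nsamples):
        prop = rate + rng.gauss(0.0, step)        # symmetric Gaussian proposal
        if prop > 0 and math.log(rng.random()) < loglik(prop) - loglik(rate):
            rate = prop                           # accept; otherwise keep current rate
        chain.append(rate)
    return chain
```

    With data simulated at rate 2.0, the post-burn-in chain mean should sit near 2.0; the record's caution applies even here, since too small a step size would give the poor mixing it highlights.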

  16. Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models

    NASA Astrophysics Data System (ADS)

    Peixoto, Tiago P.

    2014-01-01

    We present an efficient algorithm for the inference of stochastic block models in large networks. The algorithm can be used as an optimized Markov chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced susceptibility to getting trapped in metastable states, or as a greedy agglomerative heuristic, with an almost linear O(N ln^2 N) complexity, where N is the number of nodes in the network, independent of the number of blocks being inferred. We show that the heuristic is capable of delivering results which are indistinguishable from the more exact and numerically expensive MCMC method in many artificial and empirical networks, despite being much faster. The method is entirely unbiased towards any specific mixing pattern, and in particular it does not favor assortative community structures.

  17. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    PubMed

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified at moderate computational cost. PMID:26072868

  18. Stochastic Monte-Carlo Markov Chain Inversions on Models Regionalized Using Receiver Functions

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Maceira, M.; Kato, Y.; Bodin, T.; Calo, M.; Romanowicz, B. A.; Chai, C.; Ammon, C. J.

    2014-12-01

    There is currently strong interest in stochastic approaches to seismic modeling, versus deterministic methods such as gradient methods, due to the ability of stochastic methods to better handle highly non-linear problems. Another advantage of stochastic methods is that they allow estimation of the a posteriori probability distribution of the derived parameters, in the spirit of the Bayesian inversion envisioned by Tarantola, allowing quantification of the solution error. The price of stochastic methods is that they require testing thousands of variations of each unknown parameter and their associated weights to ensure reliable probabilistic inferences. Even with the best High-Performance Computing resources available, 3D stochastic full-waveform modeling at the regional scale still remains out of reach. We are exploring regionalization as one way to reduce the dimension of the parameter space, allowing the identification of areas in the models that can be treated as one block in a subsequent stochastic inversion. Regionalization is classically performed through the identification of tectonic or structural elements. Lekic & Romanowicz (2011) proposed a new approach with a cluster analysis of tomographic velocity models instead. Here we present the results of a clustering analysis on the P-wave receiver functions used in the subsequent inversion. Different clustering algorithms and qualities of clustering are tested for different datasets of North America and China. Preliminary results with the k-means clustering algorithm show that an interpolated receiver-function wavefield (Chai et al., GRL, in review) improves the agreement with the geological and tectonic regions of North America compared to the traditional approach of stacked receiver functions.
After regionalization, a 1D profile for each region is stochastically inferred using a parallelized code based on Monte-Carlo Markov Chains (MCMC), modeling surface-wave-dispersion and receiver-function observations. The parameters of the inversion are the elastic properties, the thickness and the number of isotropic layers. We will present preliminary results and compare them to results obtained from a different regionalization based on a tomographic model (Calo et al., 2013).

  19. Accurate Monte Carlo tests of the stochastic Ginzburg-Landau model with multiplicative colored noise

    SciTech Connect

    Jingdong Bao (Inst. of Atomic Energy, Beijing); Yizhong Zhuo (Academia Sinica, Beijing); Xizhen Wu

    1992-03-01

    An accurate and fast Monte Carlo algorithm is proposed for solving the Ginzburg-Landau equation with multiplicative colored noise. The stable cases of solution for choosing time steps and trajectory numbers are discussed.
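    Multiplicative colored noise of the kind this record treats is typically generated from an Ornstein-Uhlenbeck process. Below is an exact-update simulation of such noise only, not the Ginzburg-Landau solver itself; the correlation time and intensity are illustrative assumptions.

```python
import math, random

def ou_path(tau, D, dt, nsteps, seed=7):
    """Exact-update simulation of Ornstein-Uhlenbeck colored noise eta with
    correlation time tau and autocorrelation (D/tau)*exp(-|t-s|/tau).
    The update eta -> rho*eta + sqrt(var*(1-rho^2))*xi is exact for any dt."""
    rng = random.Random(seed)
    rho = math.exp(-dt / tau)
    var = D / tau                          # stationary variance
    sd = math.sqrt(var * (1.0 - rho * rho))
    eta = rng.gauss(0.0, math.sqrt(var))   # start in the stationary distribution
    path = []
    for _ in range(nsteps):
        eta = rho * eta + sd * rng.gauss(0.0, 1.0)
        path.append(eta)
    return path
```

    Feeding this eta into an Euler step of the field equation gives the multiplicative-colored-noise dynamics; the exact OU update avoids the time-step bias a naive Euler treatment of the noise would add.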

  20. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  1. Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations

    NASA Astrophysics Data System (ADS)

    Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.

    2014-06-01

    Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron-transport approaches and evaluate their accuracy in the presence of numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings, where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.

  2. Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations

    SciTech Connect

    Van Siclen, Clinton D

    2007-02-01

    A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
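    A standard kinetic Monte Carlo step, to which the basin treatment described above would be added, advances the clock by an exponential waiting time with the total rate and selects a transition with probability proportional to its rate. A minimal sketch (the rate values in the usage check are illustrative assumptions):

```python
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo (residence-time) step: return the index of
    the chosen transition and the elapsed time. The clock advances by an
    exponential waiting time with the total rate; the transition is picked
    with probability proportional to its individual rate."""
    total = sum(rates)
    dt = rng.expovariate(total)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            return i, dt
    return len(rates) - 1, dt   # guard against floating-point round-off
```

    In the basin scheme of this record, a group of rapidly interconverting states would be replaced by a single effective state whose residence time and escape probabilities are computed from the equilibrated basin, instead of stepping through every internal hop with `kmc_step`.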

  3. Monte Carlo Example Programs

    Energy Science and Technology Software Center (ESTSC)

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground-state energy of the hydrogen atom.
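    In the same spirit as VARHATOM, a variational Monte Carlo estimate of the hydrogen ground-state energy can be written in a few lines, sketched here in Python rather than FORTRAN. With the trial wavefunction psi = exp(-alpha*r) in atomic units, the local energy is E_L = -alpha^2/2 + (alpha - 1)/r, which is exactly -1/2 hartree at alpha = 1; the step size and walk length are illustrative assumptions.

```python
import math, random

def vmc_hydrogen(alpha, nsteps=20000, delta=0.5, seed=5):
    """Variational Monte Carlo for the hydrogen ground state with trial
    wavefunction psi = exp(-alpha*r) (atomic units). Metropolis sampling
    of |psi|^2; averages the local energy E_L = -alpha^2/2 + (alpha-1)/r."""
    rng = random.Random(seed)
    pos = [1.0, 0.0, 0.0]
    r = 1.0
    esum = 0.0
    for _ in range(nsteps):
        trial = [c + rng.uniform(-delta, delta) for c in pos]
        rt = math.sqrt(sum(c * c for c in trial))
        # Metropolis acceptance ratio |psi(trial)/psi(pos)|^2 = exp(-2*alpha*(rt-r))
        if rng.random() < math.exp(-2.0 * alpha * (rt - r)):
            pos, r = trial, rt
        esum += -0.5 * alpha * alpha + (alpha - 1.0) / r
    return esum / nsteps
```

    The variational principle shows up directly: the analytic energy is alpha^2/2 - alpha, so any alpha other than 1 yields an estimate above -0.5 (e.g. -0.48 at alpha = 0.8).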

  4. Monte Carlo methods on advanced computer architectures

    SciTech Connect

    Martin, W.R.

    1991-12-31

    Monte Carlo methods describe a wide class of computational methods that utilize random numbers to perform a statistical simulation of a physical problem, which itself need not be a stochastic process. For example, Monte Carlo can be used to evaluate definite integrals, which are not stochastic processes, or to simulate the transport of electrons in a space vehicle, which is a stochastic process. The name Monte Carlo came about during the Manhattan Project to describe the new mathematical methods being developed, which had some similarity to the games of chance played in the casinos of Monte Carlo. Particle transport Monte Carlo is just one application of Monte Carlo methods, and is the subject of this review paper. Other applications of Monte Carlo, such as reliability studies, classical queueing theory, molecular structure, the study of phase transitions, or quantum chromodynamics calculations for basic research in particle physics, are not included in this review. The reference by Kalos is an introduction to general Monte Carlo methods, and references to other applications of Monte Carlo can be found in this excellent book. For the remainder of this paper, the term Monte Carlo will be synonymous with particle transport Monte Carlo, unless otherwise noted. 60 refs., 14 figs., 4 tabs.

  5. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

  6. A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling

    NASA Astrophysics Data System (ADS)

    Aslam, Kamran

    This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single-elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of exploring non-iid effects, considering the varying importance of points in a match, and simulating an unlimited number of matches between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability, along with a realistic, fair and mathematically sound platform for ranking players.
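    The game-level building block of the Newton-Keller theory has a closed form that a direct point-by-point Monte Carlo simulation can be checked against. The sketch below sums the ways to win before deuce plus the deuce recursion p^2/(1 - 2pq); the point-win probability used in the usage check is an illustrative assumption.

```python
import random

def p_game(p):
    """Closed-form probability that the server wins a game when winning
    each point independently with probability p (Newton-Keller style):
    wins at 4-0, 4-1, 4-2, plus reaching deuce (20*p^3*q^3) and then
    winning from deuce with probability p^2/(1-2pq)."""
    q = 1.0 - p
    return p**4 * (1 + 4*q + 10*q*q) + 20 * p**3 * q**3 * p*p / (1 - 2*p*q)

def simulate_game(p, rng):
    """Play one game point by point under the same iid assumption."""
    a = b = 0
    while True:
        if rng.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:
            return True
        if b >= 4 and b - a >= 2:
            return False
```

    At p = 0.5 the formula gives exactly 0.5 by symmetry, and for any other p the simulated win fraction should converge to p_game(p); the dissertation's non-iid experiments are precisely the cases where the point-by-point simulation departs from this closed form.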

  7. Stochastic geometrical model and Monte Carlo optimization methods for building reconstruction from InSAR data

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan

    2015-10-01

    Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas with high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most methods proposed before, we adopt a generative model and carry out the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. This strategy is motivated by the fact that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of the intensity, interferometric phase and coherence of each region are explored respectively and included as region terms. Roofs are not directly considered, as they are mixed with walls into the layover area in most cases. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms, together with the edge term related to the contours of layover and corner line, are taken into consideration. In the optimization step, special transition kernels are designed to achieve convergent reconstruction outputs and avoid local extrema. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.

  8. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  9. Shell model Monte Carlo methods

    SciTech Connect

    Koonin, S.E.; Dean, D.J.

    1996-10-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of {gamma}-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  10. Smart Darting Monte Carlo

    NASA Astrophysics Data System (ADS)

    Andricioaei, Ioan; Straub, John E.; Voter, Arthur F.

    2001-04-01

    The "Smart Walking" Monte Carlo algorithm is examined. In general, due to a bias imposed by the interbasin trial move, the algorithm does not satisfy detailed balance. While it has been shown that it can provide good estimates of equilibrium averages for certain potentials, for other potentials the estimates are poor. A modified version of the algorithm, Smart Darting Monte Carlo, which obeys the detailed balance condition, is proposed. Calculations on a one-dimensional model potential, on a Lennard-Jones cluster and on the alanine dipeptide demonstrate the accuracy and promise of the method for deeply quenched systems.
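    The essence of a detailed-balance-preserving darting move can be illustrated on a toy problem. A minimal sketch, assuming a symmetric one-dimensional double well with known minima at ±1 (not one of the systems studied in the paper); the dart translates the walker by the vector between minima, and because the same displacement is proposed in either direction with equal probability, the Metropolis acceptance preserves detailed balance:

```python
import math, random

def smart_darting(n_steps=20_000, beta=3.0, step=0.2, p_dart=0.1, seed=2):
    """Toy Smart Darting MC on the double well E(x) = (x^2 - 1)^2.

    With probability p_dart, propose a jump from the current basin to the
    other known minimum; otherwise make an ordinary local Gaussian move.
    """
    E = lambda x: (x * x - 1.0) ** 2
    minima = (-1.0, 1.0)
    rng = random.Random(seed)
    x, samples = 1.0, []
    for _ in range(n_steps):
        if rng.random() < p_dart:
            # dart: translate by the inter-minimum vector m_j - m_i
            m_i = min(minima, key=lambda m: abs(x - m))
            m_j = minima[0] if m_i == minima[1] else minima[1]
            x_new = x - m_i + m_j
        else:
            x_new = x + rng.gauss(0.0, step)      # local move
        # standard Metropolis acceptance (symmetric proposals)
        if rng.random() < math.exp(-beta * (E(x_new) - E(x))):
            x = x_new
        samples.append(x)
    return samples
```

    At this temperature the local moves alone almost never cross the barrier, yet the chain samples both wells nearly equally, which is the behavior the darting move is designed to deliver in deeply quenched systems.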

  11. Monte Carlo portal dosimetry

    SciTech Connect

    Chin, P.W. (E-mail: mary.chin@physics.org)

    2005-10-15

    This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verification. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes was used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cm×5 cm to 20 cm×20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.

  12. Interaction picture density matrix quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Malone, Fionn D.; Blunt, N. S.; Shepherd, James J.; Lee, D. K. K.; Spencer, J. S.; Foulkes, W. M. C.

    2015-07-01

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  13. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  14. Quantum Gibbs ensemble Monte Carlo

    SciTech Connect

    Fantoni, Riccardo; Moroni, Saverio

    2014-09-21

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

  15. Reactive canonical Monte Carlo

    NASA Astrophysics Data System (ADS)

    Johnson, J. Karl; Panagiotopoulos, Athanassios Z.; Gubbins, Keith E.

    A new simulation technique is developed for calculating the properties of chemically reactive and associating (hydrogen bonding, charge transfer) systems. We call this new method reactive canonical Monte Carlo (RCMC). In contrast to previous methods for treating chemical reactions, this algorithm is applicable to reactions involving a change in mole number. Stoichiometrically balanced reactions are attempted in the forward and reverse directions to achieve chemical equilibrium. The transition probabilities do not depend on the chemical potentials or chemical potential differences of any of the components. We also extend RCMC to work in concert with the isothermal-isobaric ensemble for simulating chemical reactions at constant pressure, and with the Gibbs ensemble for simultaneous calculation of phase and chemical equilibria. Association is treated as a chemical reaction in the RCMC formalism. Results are presented for dimerization of simple model associating fluids. In contrast to previous methods, the reactive Gibbs ensemble can be used to calculate phase equilibrium for associating fluids with very strong bonding sites. RCMC simulations are performed for nitric oxide dimerization and results are compared with available experimental data in the liquid phase. Agreement with experiment is excellent. Results for a vapour phase simulation are also in remarkable agreement with estimates based on second virial coefficient data.

  16. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  17. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: case study.

    PubMed

    Bieda, Bogusław

    2014-05-15

    The purpose of the paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the Crystal Ball (CB) software, which works with Microsoft Excel spreadsheet models, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management. PMID:24290145
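    The sampling scheme described (normally distributed inputs, 10,000 trials, summary statistics) is straightforward to reproduce without any spreadsheet add-in. A minimal sketch with purely illustrative inventory quantities; the item names, means and standard deviations below are hypothetical and are not MSP data:

```python
import random, statistics

def lci_simulation(n_trials=10_000, seed=7):
    """Monte Carlo propagation of normally distributed LCI inputs.

    Each input is (mean, standard deviation) in arbitrary units; the
    output aggregates them per trial, as a frequency-chart backend would.
    """
    rng = random.Random(seed)
    inputs = {                      # hypothetical example values
        "sinter":   (100.0, 5.0),
        "pig_iron": (80.0, 4.0),
        "coke":     (40.0, 2.0),
    }
    totals = [sum(rng.gauss(mu, sd) for mu, sd in inputs.values())
              for _ in range(n_trials)]
    return statistics.mean(totals), statistics.stdev(totals)
```

    For independent normal inputs the total's standard deviation should approach the root-sum-square of the individual standard deviations, which gives a quick sanity check on the simulation output.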

  18. Isotropic Monte Carlo Grain Growth

    Energy Science and Technology Software Center (ESTSC)

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.

  19. Monte Carlo simulation of uncoupled continuous-time random walks yielding a stochastic solution of the space-time fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Fulger, Daniel; Scalas, Enrico; Germano, Guido

    2008-02-01

    We present a numerical method for the Monte Carlo simulation of uncoupled continuous-time random walks with a Lévy α-stable distribution of jumps in space and a Mittag-Leffler distribution of waiting times, and apply it to the stochastic solution of the Cauchy problem for a partial differential equation with fractional derivatives both in space and in time. The one-parameter Mittag-Leffler function is the natural survival probability leading to time-fractional diffusion equations. Transformation methods for Mittag-Leffler random variables were found later than the well-known transformation method by Chambers, Mallows, and Stuck for Lévy α-stable random variables and so far have not received as much attention; nor have they been used together with the latter in spite of their mathematical relationship due to the geometric stability of the Mittag-Leffler distribution. Combining the two methods, we obtain an accurate approximation of space- and time-fractional diffusion processes almost as easy and fast to compute as for standard diffusion processes.
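    Both transformation methods are short enough to sketch. Below, waiting times use Kozubowski's exact transformation for the Mittag-Leffler distribution and jumps use the Chambers-Mallows-Stuck formula restricted to the symmetric case (α ≠ 1); the parameter values and the simple walk driver are illustrative:

```python
import math, random

def mittag_leffler_wait(beta, rng):
    # Kozubowski's transformation: exact Mittag-Leffler waiting time, 0 < beta < 1
    u, v = 1.0 - rng.random(), 1.0 - rng.random()   # both in (0, 1]
    return -math.log(u) * (math.sin(beta * math.pi) / math.tan(beta * math.pi * v)
                           - math.cos(beta * math.pi)) ** (1.0 / beta)

def stable_jump(alpha, rng):
    # Chambers-Mallows-Stuck: symmetric Levy alpha-stable jump, alpha != 1
    phi = math.pi * (rng.random() - 0.5)            # uniform on (-pi/2, pi/2)
    w = -math.log(1.0 - rng.random())               # standard exponential
    return (math.sin(alpha * phi) / math.cos(phi) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * phi) / w) ** ((1.0 - alpha) / alpha))

def ctrw_position(t_max, alpha=1.5, beta=0.9, seed=3):
    """Position of one uncoupled CTRW path observed at time t_max."""
    rng = random.Random(seed)
    t = x = 0.0
    while True:
        t += mittag_leffler_wait(beta, rng)
        if t > t_max:
            return x
        x += stable_jump(alpha, rng)
```

    Averaging many such paths gives the stochastic solution of the space-time fractional diffusion equation; for β → 1 the waiting times reduce to exponentials and for α → 2 the jumps to Gaussians, recovering standard diffusion.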

  20. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  1. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

    An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the method of applying the algorithm accommodates parameter constraints, eliminating the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained from a 600 MeV cyclotron are given.

  2. Is Monte Carlo embarrassingly parallel?

    SciTech Connect

    Hoogenboom, J. E.

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
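    The saturation behavior described can be captured by a toy per-cycle cost model. A sketch, assuming a hypothetical timing breakdown in which the cycle's tracking work parallelizes perfectly while the fission-source rendezvous has a fixed cost plus a cost that grows with the number of processors; all constants are illustrative, not measurements from the paper:

```python
def speedup(n_procs, t_work=1.0, t_sync_fixed=0.001, t_sync_per_proc=0.0005):
    """Per-cycle speedup when the rendezvous (source collection, k-eff
    estimation) is effectively serial and grows with processor count."""
    t_cycle = t_work / n_procs + t_sync_fixed + t_sync_per_proc * n_procs
    return t_work / t_cycle

# with these constants the speedup peaks near sqrt(t_work / t_sync_per_proc)
# (about 45 processors) and then declines as synchronization dominates
```

    Minimizing the rendezvous cost, or overlapping it with tracking, shifts the peak to higher processor counts, which is the direction the suggested improvements take.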

  3. Monte Carlo calculations of nuclei

    SciTech Connect

    Pieper, S.C.

    1997-10-01

    Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

  4. Synchronous Parallel Kinetic Monte Carlo

    SciTech Connect

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
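    For contrast with the parallel formulation, the standard serial kMC step that it generalizes can be sketched directly: this is the residence-time (n-fold way) algorithm, with an illustrative rate list; none of this is code from the paper:

```python
import math, random

def kmc_step(rates, rng):
    """One step of the serial residence-time kMC algorithm.

    Pick event i with probability rates[i] / R (R = total rate), then
    advance the clock by an exponentially distributed time -ln(u) / R.
    """
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total
    return i, dt
```

    Over many steps the event selection frequencies converge to the rate ratios and the mean time increment to 1/R, the two properties any parallel generalization must reproduce exactly.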

  5. Applications of Monte Carlo Methods in Calculus.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    1990-01-01

    Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
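    The Riemann-sum application is the simplest of these to demonstrate. A minimal sketch: average the integrand at uniform random points, which converges to the integral as the sample count grows (the integrand here is an illustrative choice, not one from the article):

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Monte Carlo 'Riemann sum': (b - a) times the average of f
    at n uniform random points in [a, b]."""
    rng = random.Random(seed)
    return (b - a) * sum(f(a + (b - a) * rng.random()) for _ in range(n)) / n

estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)   # exact value is 1/3
```

    The error shrinks like 1/sqrt(n), which itself makes a nice classroom experiment: quadrupling n roughly halves the spread of repeated estimates.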

  6. Parallel Monte Carlo Simulation for control system design

    NASA Technical Reports Server (NTRS)

    Schubert, Wolfgang M.

    1995-01-01

    The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.

  7. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are

  8. Monte Carlo Experiments: Design and Implementation.

    ERIC Educational Resources Information Center

    Paxton, Pamela; Curran, Patrick J.; Bollen, Kenneth A.; Kirby, Jim; Chen, Feinian

    2001-01-01

    Illustrates the design and planning of Monte Carlo simulations, presenting nine steps in planning and performing a Monte Carlo analysis from developing a theoretically derived question of interest through summarizing the results. Uses a Monte Carlo simulation to illustrate many of the relevant points. (SLD)

  9. Stochastic Engine Final Report: Applying Markov Chain Monte Carlo Methods with Importance Sampling to Large-Scale Data-Driven Simulation

    SciTech Connect

    Glaser, R E; Johannesson, G; Sengupta, S; Kosovic, B; Carle, S; Franz, G A; Aines, R D; Nitao, J J; Hanley, W G; Ramirez, A L; Newmark, R L; Johnson, V M; Dyer, K M; Henderson, K A; Sugiyama, G A; Hickling, T L; Pasyanos, M E; Jones, D A; Grimm, R J; Levine, R A

    2004-03-11

    Accurate prediction of complex phenomena can be greatly enhanced through the use of data and observations to update simulations. The ability to create these data-driven simulations is limited by error and uncertainty in both the data and the simulation. The stochastic engine project addressed this problem through the development and application of a family of Markov Chain Monte Carlo methods utilizing importance sampling driven by forward simulators to minimize time spent searching very large state spaces. The stochastic engine rapidly chooses among a very large number of hypothesized states and selects those that are consistent (within error) with all the information at hand. Predicted measurements from the simulator are used to estimate the likelihood of actual measurements, which in turn reduces the uncertainty in the original sample space via a conditional probability method called Bayesian inferencing. This highly efficient, staged Metropolis-type search algorithm allows us to address extremely complex problems and opens the door to solving many data-driven, nonlinear, multidimensional problems. A key challenge has been developing representation methods that integrate the local details of real data with the global physics of the simulations, enabling supercomputers to efficiently solve the problem. Development focused on large-scale problems, and on examining the mathematical robustness of the approach in diverse applications. Multiple data types were combined with large-scale simulations to evaluate systems with approximately 10^20,000 possible states (detecting underground leaks at the Hanford waste tanks). The probable uses of chemical process facilities were assessed using an evidence-tree representation and in-process updating.
Other applications included contaminant flow paths at the Savannah River Site, locating structural flaws in buildings, improving models for seismic travel-time systems used to monitor nuclear proliferation, characterizing the source of indistinct atmospheric plumes, and improving flash radiography. In the course of developing these applications, we also developed new methods to cluster and analyze the results of the state-space searches, as well as a number of algorithms to improve the search speed and efficiency. Our generalized solution contributes both a means to make more informed predictions of the behavior of very complex systems, and to improve those predictions as events unfold, using new data in real time.
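    The likelihood-from-forward-simulator idea can be illustrated with a toy Metropolis sampler. A sketch, assuming a hypothetical one-parameter forward model g(theta, x) = theta * x and Gaussian measurement error with a flat prior; this is only the underlying conditional-probability mechanism, not the stochastic engine itself:

```python
import math, random

def metropolis_posterior(data, xs, sigma=0.5, n_steps=5000, seed=4):
    """Sample the posterior of theta given observations d_i of g(theta, x_i).

    The 'forward simulator' here is the trivial model theta * x; predicted
    measurements set the Gaussian log-likelihood used for acceptance.
    """
    rng = random.Random(seed)

    def log_like(theta):
        return -sum((d - theta * x) ** 2 for d, x in zip(data, xs)) / (2 * sigma**2)

    theta, ll = 1.0, log_like(1.0)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, 0.2)          # symmetric random-walk proposal
        ll_prop = log_like(prop)
        if math.log(1.0 - rng.random()) < ll_prop - ll:   # Metropolis acceptance
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain
```

    States inconsistent with the data are rejected with overwhelming probability, so the chain concentrates on the consistent region of the state space; importance sampling and staging extend exactly this mechanism to very large spaces.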

  10. Electronic structure quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bajdich, Michal; Mitas, Lubos

    2009-04-01

    Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of many-body quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on the electronic structure QMC, i.e., methods relevant for systems described by the electron-ion Hamiltonians. Some of the key QMC achievements include direct treatment of electron correlation, accuracy in predicting energy differences and favorable scaling in the system size. Calculations of atoms, molecules, clusters and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the highly-dimensional nodal hypersurfaces into the 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. 
Recently proposed trial wave functions based on Pfaffians with pairing orbitals are presented and their nodal properties are tested in calculations of first row atoms and molecules. Finally, backflow "dressed" coordinates are introduced as another possibility for capturing correlation effects and for decreasing the fixed-node bias.

  11. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  12. The D0 Monte Carlo

    SciTech Connect

    Womersley, J. (Dept. of Physics)

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  13. Present status of vectorized Monte Carlo

    SciTech Connect

    Brown, F.B.

    1987-01-01

    Monte Carlo applications have traditionally been limited by the large amounts of computer time required to produce acceptably small statistical uncertainties, so the immediate benefit of vectorization is an increase in either the number of jobs completed or the number of particles processed per job, typically by one order of magnitude or more. This results directly in improved engineering design analyses, since Monte Carlo methods are used as standards for correcting more approximate methods. The relatively small number of vectorized programs is a consequence of the newness of vectorized Monte Carlo, the difficulties of nonportability, and the very large development effort required to rewrite or restructure Monte Carlo codes for vectorization. Based on the successful efforts to date, it may be concluded that Monte Carlo vectorization will spread to increasing numbers of codes and applications. The possibility of multitasking provides even further motivation for vectorizing Monte Carlo, since the step from vector to multitasked vector is relatively straightforward.
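    The payoff of vectorization can be illustrated outside any production transport code. A sketch contrasting a history-based style (one particle at a time, the traditional scalar structure) with an event-based style (a whole batch of particles processed as arrays) on a trivial sampling task; the example is illustrative and unrelated to any specific vectorized code discussed above:

```python
import numpy as np

def pi_loop(n, seed=0):
    """History-based: process one 'particle' (sample point) at a time."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        hits += x * x + y * y < 1.0
    return 4.0 * hits / n

def pi_vectorized(n, seed=0):
    """Event-based: the entire batch is processed with array operations."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(n), rng.random(n)
    return 4.0 * np.count_nonzero(x * x + y * y < 1.0) / n
```

    The two functions compute the same estimator, but the array version runs roughly an order of magnitude faster in interpreted settings, mirroring the one-order-of-magnitude gains reported for vectorized Monte Carlo; restructuring a real transport code this way is, of course, the hard part.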

  14. Theoretical basis for QCD Monte Carlo simulations

    SciTech Connect

    Marchesini, G.

    1992-02-05

    The coherent branching algorithm, which resums large perturbative QCD contributions and is the basis for the Monte Carlo simulations, is discussed and reviewed together with some phenomenological implications.

  15. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    SciTech Connect

    Evans, T.; et al.

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  16. Density-matrix quantum Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Blunt, N. S.; Rogers, T. W.; Spencer, J. S.; Foulkes, W. M. C.

    2014-06-01

    We present a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system at finite temperature. This allows arbitrary reduced density matrix elements and expectation values of complicated nonlocal observables to be evaluated easily. The method resembles full configuration interaction quantum Monte Carlo but works in the space of many-particle operators instead of the space of many-particle wave functions. One simulation provides the density matrix at all temperatures simultaneously, from T = ∞ to T = 0, allowing the temperature dependence of expectation values to be studied. The direct sampling of the density matrix also allows the calculation of some previously inaccessible entanglement measures. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices, the concurrence of one-dimensional spin rings, and the Rényi S2 entanglement entropy of various sublattices of the 6×6 Heisenberg model are calculated. The nature of the sign problem in the method is also investigated.

  17. Single scatter electron Monte Carlo

    SciTech Connect

    Svatos, M.M.

    1997-03-01

    A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.

  18. Monte Carlo surface flux tallies

    SciTech Connect

    Favorite, Jeffrey A

    2010-11-19

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
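    The scoring rule under discussion is compact enough to sketch directly. A minimal illustration of the surface-crossing flux estimator with a cosine cutoff, where the grazing-band divisor is a parameter so that both the standard half-cutoff (1/2) and the two-thirds (2/3) variant argued for above can be tried; the cutoff value itself is illustrative:

```python
def surface_flux_score(weight, mu, mu_cutoff=0.1, grazing_divisor=0.5):
    """Score one surface crossing for a flux tally.

    Normally the score is w / |mu| (mu = surface-crossing angle cosine);
    for grazing crossings (|mu| < mu_cutoff) the score is instead
    w / (grazing_divisor * mu_cutoff) to avoid unbounded contributions.
    """
    abs_mu = abs(mu)
    if abs_mu < mu_cutoff:
        return weight / (grazing_divisor * mu_cutoff)
    return weight / abs_mu
```

    With `grazing_divisor=0.5` this reproduces the standard half-cutoff practice; passing `2/3` gives the alternative recommended for one-sided (separate in-going or out-going) tallies, where the grazing band is not symmetric about zero.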

  19. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet energy splitting of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. The dependence of computational time on the number of basis functions is discussed and compared with that of traditional quantum chemistry codes running on traditional computer architectures.
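
    The stochastic solution mentioned above can be illustrated with a toy diffusion QMC calculation; the 1D harmonic oscillator stands in for a real molecular Hamiltonian, and every parameter below is illustrative:

```python
import math
import random

def diffusion_qmc(steps=2000, dt=0.01, n_target=500, seed=0):
    """Toy diffusion Monte Carlo for the 1D harmonic oscillator
    (V = x^2/2; exact ground-state energy is 0.5 in these units).
    Walkers diffuse as free particles, then branch with weight
    exp(-dt * (V - E_T)); the trial energy E_T is steered to keep
    the population near n_target, and its average estimates E_0."""
    rng = random.Random(seed)
    walkers = [rng.uniform(-1.0, 1.0) for _ in range(n_target)]
    e_t = 0.0
    history = []
    for step in range(steps):
        new_walkers = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))            # diffusion step
            weight = math.exp(-dt * (0.5 * x * x - e_t))  # branching weight
            copies = int(weight + rng.random())           # stochastic rounding
            new_walkers.extend([x] * copies)
        walkers = new_walkers or [0.0]
        e_t += 0.1 * math.log(n_target / len(walkers))    # population control
        history.append(e_t)
    return sum(history[steps // 2:]) / (steps - steps // 2)

e0 = diffusion_qmc()  # should land near the exact value 0.5
```

    The statistical uncertainty the abstract refers to is visible here as scatter in E_T; averaging over the second half of the run trades variance against the small time-step bias.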

  20. Summarizing Monte Carlo Results in Methodological Research.

    ERIC Educational Resources Information Center

    Harwell, Michael R.

    Monte Carlo studies of statistical tests are prominently featured in the methodological research literature. Unfortunately, the information from these studies does not appear to have significantly influenced methodological practice in educational and psychological research. One reason is that Monte Carlo studies lack an overarching theory to guide

  1. Summarizing Monte Carlo Results in Methodological Research.

    ERIC Educational Resources Information Center

    Harwell, Michael R.

    1992-01-01

    A methodological framework is provided for quantitatively integrating Type I error rates and power values for Monte Carlo studies. An example is given using Monte Carlo studies of a test of equality of variances, and the importance of relating meta-analytic results to exact statistical theory is emphasized. (SLD)

  2. Monte Carlo Simulation of Transport

    NASA Astrophysics Data System (ADS)

    Kuhl, Nelson M.

    1996-11-01

    This paper is concerned with the problem of transport in controlled nuclear fusion as it applies to confinement in a tokamak or stellarator. Numerical experiments validate a mathematical model of Paul R. Garabedian in which the electric potential is determined by quasineutrality because of singular perturbation of the Poisson equation. The Monte Carlo method is used to solve a test particle drift kinetic equation. The collision operator drives the distribution function in velocity space towards the normal distribution, or Maxwellian, as suggested by the central limit theorem. The detailed structure of the collision operator and the role of conservation of momentum are investigated. Exponential decay of expected values allows the computation of the confinement times of both ions and electrons. Three-dimensional perturbations in the electromagnetic field model the anomalous transport of electrons and simulate the turbulent behavior that is presumably triggered by the displacement current. Comparison with experimental data and derivation of scaling laws are presented.

  3. Monte Carlo simulation of transport

    SciTech Connect

    Kuhl, N.M.

    1996-11-01

    This paper is concerned with the problem of transport in controlled nuclear fusion as it applies to confinement in a tokamak or stellarator. Numerical experiments validate a mathematical model of Paul R. Garabedian in which the electric potential is determined by quasineutrality because of singular perturbation of the Poisson equation. The Monte Carlo method is used to solve a test particle drift kinetic equation. The collision operator drives the distribution function in velocity space towards the normal distribution, or Maxwellian, as suggested by the central limit theorem. The detailed structure of the collision operator and the role of conservation of momentum are investigated. Exponential decay of expected values allows the computation of the confinement times of both ions and electrons. Three-dimensional perturbations in the electromagnetic field model the anomalous transport of electrons and simulate the turbulent behavior that is presumably triggered by the displacement current. Comparison with experimental data and derivation of scaling laws are presented. 13 refs., 6 figs.

  4. The MC21 Monte Carlo Transport Code

    SciTech Connect

    Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H

    2007-01-09

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.

  5. Importance iteration in MORSE Monte Carlo calculations

    SciTech Connect

    Kloosterman, J.L.; Hoogenboom, J.E. (Interfaculty Reactor Institute)

    1994-05-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation.

  6. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.

  7. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  8. Monte Carlo techniques in statistical physics

    NASA Astrophysics Data System (ADS)

    Murthy, K. P. N.

    2006-11-01

    In this paper we shall briefly review a few Markov Chain Monte Carlo methods for simulating closed systems described by canonical ensembles. We cover both Boltzmann and non-Boltzmann sampling techniques. The Metropolis algorithm is a typical example of a Boltzmann Monte Carlo method. We discuss the time-symmetry of the Markov chain generated by Metropolis-like algorithms that obey detailed balance. The non-Boltzmann Monte Carlo techniques reviewed include multicanonical and Wang-Landau sampling. We list what we consider as milestones in the historical development of Monte Carlo methods in statistical physics. We dedicate this article to Prof. Dr. G. Ananthakrishna and wish him the very best in the coming years.
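
    As a concrete instance of the Boltzmann sampling reviewed above, a minimal Metropolis simulation of a 1D Ising chain (a standard textbook case, not an example taken from the paper) can be written as:

```python
import math
import random

def ising_1d_metropolis(n=200, beta=0.7, sweeps=4000, seed=2):
    """Metropolis sampling of a 1D Ising chain (J = 1, periodic
    boundaries). Single-spin flips accepted with probability
    min(1, exp(-beta * dE)) satisfy detailed balance with respect
    to the Boltzmann distribution exp(-beta * E)."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    energy_sum, samples = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(n):
            i = rng.randrange(n)
            d_e = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
                spins[i] = -spins[i]
        if sweep >= sweeps // 2:  # discard the first half as burn-in
            energy_sum += -sum(spins[i] * spins[(i + 1) % n] for i in range(n))
            samples += 1
    return energy_sum / (samples * n)

# The exact energy per spin of the long chain is -tanh(beta).
energy = ising_1d_metropolis()
```

    The acceptance rule is exactly the detailed-balance condition the abstract discusses: the ratio of forward and reverse flip probabilities equals the ratio of Boltzmann weights.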

  9. Multiple-time-stepping generalized hybrid Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
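
    For readers unfamiliar with the underlying scheme, a bare-bones hybrid (Hamiltonian) Monte Carlo step, without shadow Hamiltonians, force splitting, or the generalized momentum update described above, looks as follows for a standard Gaussian target (an illustrative sketch, not the paper's algorithm):

```python
import math
import random

def hmc_gaussian(n_samples=5000, eps=0.2, n_leap=10, seed=3):
    """Minimal hybrid Monte Carlo for the target exp(-x^2/2):
    momenta are freshly drawn before each trajectory, positions are
    propagated by leapfrog integration, and a Metropolis test on the
    total Hamiltonian corrects the integration error exactly."""
    rng = random.Random(seed)
    x, draws = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)
        x_new, p_new = x, p
        h_old = 0.5 * (x * x + p * p)
        for _ in range(n_leap):           # leapfrog trajectory
            p_new -= 0.5 * eps * x_new    # half kick (force = -x)
            x_new += eps * p_new          # drift
            p_new -= 0.5 * eps * x_new    # half kick
        h_new = 0.5 * (x_new * x_new + p_new * p_new)
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            x = x_new                     # accept; otherwise keep x
        draws.append(x)
    return draws

draws = hmc_gaussian()  # samples from a standard Gaussian
```

    The MTS variants in the abstract replace the single leapfrog loop with an inner loop over fast forces and an outer loop over mollified slow forces, and (for GSHMC) test against a shadow Hamiltonian instead of the true one.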

  10. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  11. A Guide to Monte Carlo Simulations in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.; Binder, Kurt

    2014-11-01

    1. Introduction; 2. Some necessary background; 3. Simple sampling Monte Carlo methods; 4. Importance sampling Monte Carlo methods; 5. More on importance sampling Monte Carlo methods for lattice systems; 6. Off-lattice models; 7. Reweighting methods; 8. Quantum Monte Carlo methods; 9. Monte Carlo renormalization group methods; 10. Non-equilibrium and irreversible processes; 11. Lattice gauge models: a brief introduction; 12. A brief review of other methods of computer simulation; 13. Monte Carlo simulations at the periphery of physics and beyond; 14. Monte Carlo studies of biological molecules; 15. Outlook; Appendix: listing of programs mentioned in the text; Index.

  12. A Guide to Monte Carlo Simulations in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.; Binder, Kurt

    2013-11-01

    Preface; 1. Introduction; 2. Some necessary background; 3. Simple sampling Monte Carlo methods; 4. Importance sampling Monte Carlo methods; 5. More on importance sampling Monte Carlo methods of lattice systems; 6. Off-lattice models; 7. Reweighting methods; 8. Quantum Monte Carlo methods; 9. Monte Carlo renormalization group methods; 10. Non-equilibrium and irreversible processes; 11. Lattice gauge models: a brief introduction; 12. A brief review of other methods of computer simulation; 13. Monte Carlo simulations at the periphery of physics and beyond; 14. Monte Carlo studies of biological molecules; 15. Outlook; Appendix; Index.

  13. A Guide to Monte Carlo Simulations in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.; Binder, Kurt

    2009-09-01

    Preface; 1. Introduction; 2. Some necessary background; 3. Simple sampling Monte Carlo methods; 4. Importance sampling Monte Carlo methods; 5. More on importance sampling Monte Carlo methods of lattice systems; 6. Off-lattice models; 7. Reweighting methods; 8. Quantum Monte Carlo methods; 9. Monte Carlo renormalization group methods; 10. Non-equilibrium and irreversible processes; 11. Lattice gauge models: a brief introduction; 12. A brief review of other methods of computer simulation; 13. Monte Carlo simulations at the periphery of physics and beyond; 14. Monte Carlo studies of biological molecules; 15. Outlook; Appendix; Index.

  14. Quantum Monte Carlo simulations of complex Hamiltonians

    NASA Astrophysics Data System (ADS)

    Rousseau, Valery; Hettiarachchilage, Kalani; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark

    2013-03-01

    In the last two decades there have been tremendous advances in boson Quantum Monte Carlo methods, which allow for solving more and more complex Hamiltonians. In particular, it is now possible to simulate Hamiltonians that include terms that couple an arbitrary number of sites and/or particles, such as six-site ring-exchange terms. These ring-exchange interactions are crucial for the study of quantum fluctuations on highly frustrated systems. We illustrate how the Stochastic Green Function algorithm with Global Space-Time Update can easily simulate such complex systems, and present some results for a highly non-trivial model of bosons in a pyrochlore crystal with six-site ring-exchange terms. This work is supported by NSF OISE-0952300 (KH, VGR and JM) and by DOE SciDAC grant DE-FC02-06ER25792 (KMT and MJ). This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by the National Science Foundation.

  15. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. 
As a consequence of discretization we get a spatially low-order kernel, the fundamental eigenvector of which should converge faster than that of the continuous kernel. We can then redistribute the fission bank to match the fundamental fission matrix eigenvector, effectively eliminating all higher modes. For all computations here biasing is not used, with the intention of comparing the unaltered, conventional Monte Carlo process with the fission matrix results. The source convergence of standard Monte Carlo criticality calculations is, to some extent, always subject to the characteristics of the problem. This method seeks to partially eliminate this problem-dependence by directly calculating the spatial coupling. The primary cost of this, which has prevented widespread use since its inception [2,3,4], is the extra storage required. To account for the coupling of all N spatial regions to every other region requires storing N{sup 2} values. For realistic problems, where a fine resolution is required for the suppression of discretization error, the storage becomes inordinate. Two factors lead to a renewed interest here: the larger memory available on modern computers and the development of a better storage scheme based on physical intuition. When the distance between source and fission events is short compared with the size of the entire system, saving memory by accounting for only local coupling introduces little extra error. We can gain other information from directly tallying the fission kernel: higher eigenmodes and eigenvalues. Conventional Monte Carlo cannot calculate this data - here we have a way to get new information for multiplying systems. In Ref. [5], higher mode eigenfunctions are analyzed for a three-region 1-dimensional problem and a 2-dimensional homogeneous problem. We analyze higher modes for more realistic problems. 
There is also the question of practical use of this information; here we examine a way of using eigenmode information to address the negative confidence interval bias due to inter-cycle correlation. We apply this method mainly to four problems: 2D pressurized water reactor (PWR) [6], 3D Kord Smith Challenge [7], OECD - Nuclear Energy Agency (NEA) source convergence benchmark fuel storage vault [8], and Advanced Test Reactor (ATR) [9]. We see excellent source convergence acceleration for the most difficult problems: the 3D Kord Smith Challenge and fuel storage vault. Additionally, we examine higher eigenmode results for all these problems. Using part of the eigenvalue spectrum for a one-group 1D problem, we find confidence interval correction factors that are improvements over existing corrections [10].
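
    The post-processing step described above, extracting the fundamental eigenpair from a tallied fission matrix, reduces to ordinary power iteration. A sketch on a hypothetical 3-region matrix (values invented for illustration, not from the paper):

```python
def power_iteration(matrix, iters=200):
    """Power iteration for the fundamental eigenvalue and eigenvector
    of a nonnegative matrix, as used to post-process a spatially
    discretized fission kernel tallied during the random walk."""
    n = len(matrix)
    v = [1.0] + [0.0] * (n - 1)   # arbitrary starting source
    k = 1.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        k = sum(w)                 # eigenvalue estimate
        v = [wi / k for wi in w]   # renormalized fission source
    return k, v

# Hypothetical symmetric 3-region fission matrix with weak coupling
# between neighboring regions only (dominant eigenvalue is 1.2).
F = [[1.0, 0.2, 0.0],
     [0.2, 0.8, 0.2],
     [0.0, 0.2, 1.0]]
k_eff, source = power_iteration(F)
```

    In the method above this converged eigenvector is what the fission bank is redistributed to match, which is why convergence no longer depends on the dominance ratio of the original problem.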

  16. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. 
While the eventual hope is to apply this algorithm to the exploration of yet unidentified high-pressure, low-temperature phases of hydrogen, I employ this algorithm to determine whether or not quantum hard spheres can form a low-temperature bcc solid if exchange is not taken into account. In the final chapter of this thesis, I use Path Integral Monte Carlo once again to explore whether glassy para-hydrogen exhibits superfluidity. Physicists have long searched for ways to coax hydrogen into becoming a superfluid. I present evidence that, while glassy hydrogen does not crystallize at the temperatures at which hydrogen might become a superfluid, it nevertheless does not exhibit superfluidity. This is because the average binding energy per p-H2 molecule poses a severe barrier to exchange regardless of whether the system is crystalline. All in all, this work extends the reach of Quantum Monte Carlo methods to new systems and brings the power of existing methods to bear on new problems. Portions of this work have been published in Rubenstein, PRE (2010) and Rubenstein, PRA (2012) [167;169]. Other papers not discussed here published during my Ph.D. include Rubenstein, BPJ (2008) and Rubenstein, PRL (2012) [166;168]. The work in Chapters 6 and 7 is currently unpublished. [166] Brenda M. Rubenstein, Ivan Coluzza, and Mark A. Miller. Controlling the folding and substrate-binding of proteins using polymer brushes. Physical Review Letters, 108(20):208104, May 2012. [167] Brenda M. Rubenstein, J.E. Gubernatis, and J.D. Doll. Comparative Monte Carlo efficiency by Monte Carlo analysis. Physical Review E, 82(3):036701, September 2010. [168] Brenda M. Rubenstein and Laura J. Kaufman. The role of extracellular matrix in glioma invasion: a cellular Potts model approach. Biophysical Journal, 95(12):5661--5680, December 2008. [169] Brenda M. Rubenstein, Shiwei Zhang, and David R. Reichman. Finite-temperature auxiliary-field quantum Monte Carlo for Bose-Fermi mixtures. Physical Review A, 86(5):053606, November 2012.

  17. Monte Carlo simulation of launchsite winds at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Queen, Eric M.; Moerder, Daniel D.; Warner, Michael S.

    1991-01-01

    This paper develops and validates an easily implemented model for simulating random horizontal wind profiles over the Kennedy Space Center (KSC) at Cape Canaveral, Florida. The model is intended for use in Monte Carlo launch vehicle simulations of the type employed in mission planning, where the large number of profiles needed for statistical fidelity of such simulation experiments makes the use of actual wind measurements impractical. The model is based on measurements made at KSC and represents vertical correlations by a decaying exponential model which is parameterized via least-squares parameter optimization against the sample data. The validity of the model is evaluated by comparing two Monte Carlo simulations of an asymmetric, heavy-lift launch vehicle. In the first simulation, the measured wind profiles are used, while in the second, the wind profiles are generated using the stochastic model. The simulations indicate that the use of either the measured or simulated wind field results in similar launch vehicle performance.
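
    A decaying-exponential vertical correlation model of the kind described can be sketched as a first-order autoregressive profile generator; the correlation length, standard deviation, and level spacing below are invented placeholders, not the least-squares KSC fits from the paper:

```python
import math
import random

def wind_profile(n_levels=50, dz=100.0, corr_length=800.0,
                 sigma=8.0, seed=4):
    """Generate one random wind-component profile (m/s) whose
    correlation between levels separated by dz decays as
    exp(-dz / corr_length), via a stationary AR(1) recursion."""
    rng = random.Random(seed)
    rho = math.exp(-dz / corr_length)   # level-to-level correlation
    profile = [rng.gauss(0.0, sigma)]
    for _ in range(n_levels - 1):
        innovation = rng.gauss(0.0, sigma) * math.sqrt(1.0 - rho * rho)
        profile.append(rho * profile[-1] + innovation)
    return profile
```

    Averaging many such profiles reproduces the prescribed variance at every level and the exponential decay of the inter-level correlation, which is what makes cheap synthetic profiles usable in place of measured winds for large Monte Carlo runs.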

  18. Advanced interacting sequential Monte Carlo sampling for inverse scattering

    NASA Astrophysics Data System (ADS)

    Giraud, F.; Minvielle, P.; Del Moral, P.

    2013-09-01

    The following electromagnetism (EM) inverse problem is addressed. It consists of estimating the local radioelectric properties of materials covering an object from global EM scattering measurements, at various incidences and wave frequencies. This large scale ill-posed inverse problem is explored by an intensive exploitation of an efficient 2D Maxwell solver, distributed on high performance computing machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, from which Bayesian inference can be performed. Considering the radioelectric properties as a hidden dynamic stochastic process that evolves according to the frequency, it is shown how advanced Markov chain Monte Carlo methods, called sequential Monte Carlo or interacting particle methods, can take advantage of this structure and provide local EM property estimates.

  19. Nuclear pairing within a configuration-space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Lingle, Mark; Volya, Alexander

    2015-06-01

    Pairing correlations in nuclei play a decisive role in determining nuclear drip lines, binding energies, and many collective properties. In this work a new configuration-space Monte Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, probabilistic interpretation of quantum-mechanical amplitudes, and ability to handle truly large-scale problems with defined precision and error control are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with nonconstant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and problems when the relevant configuration space is large.

  20. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  1. Quantum Monte Carlo calculations of light nuclei.

    SciTech Connect

    Pieper, S. C.

    1998-08-25

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on {sup 3}H, {sup 4}He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However, our results for excitation spectra are very good and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed.

  2. Shell model the Monte Carlo way

    SciTech Connect

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  3. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    SciTech Connect

    Xu, Zuwei; Zhao, Haibo Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and to reduce the number of time loops as far as possible. Three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate, by single-looping over all particles, the maximum coagulation rates used in the acceptance–rejection process, while the mean time step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is thereby reduced to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the many cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup (compared with CPU computation, the speedup of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against the benchmark solution of a discrete-sectional method. The simulation results show that the comprehensive approach attains a very favorable improvement in cost without sacrificing computational accuracy.
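The acceptance–rejection idea with a majorant kernel can be sketched in a few lines: candidate pairs are drawn at the cheap majorant rate and accepted with probability K(i,j)/K_max. This is not the paper's differentially-weighted GPU scheme; it is a plain equal-weight sketch with an assumed sum kernel K(i,j) = k0*(v_i + v_j), chosen because it is bounded by 2*k0*v_max:

```python
import numpy as np

rng = np.random.default_rng(1)

def coagulate(sizes, k0=1.0, t_end=0.5):
    """Acceptance-rejection coagulation sketch: the sum kernel
    K(i,j) = k0*(v_i + v_j) is bounded by the majorant
    K_max = 2*k0*v_max, so candidate pairs can be drawn uniformly
    and accepted with probability K/K_max."""
    v = list(sizes)
    t = 0.0
    while t < t_end and len(v) > 1:
        n = len(v)
        k_max = 2.0 * k0 * max(v)
        rate = k_max * n * (n - 1) / 2.0      # majorant total event rate
        t += rng.exponential(1.0 / rate)      # waiting time to next candidate
        i, j = rng.choice(n, size=2, replace=False)
        k_ij = k0 * (v[i] + v[j])
        if rng.random() < k_ij / k_max:       # accept the real coagulation event
            v[i] += v[j]
            v.pop(j)
    return v

initial = [1.0] * 200
final = coagulate(initial)   # total mass is conserved exactly
```

Tightening the majorant raises the acceptance ratio, which is exactly the accuracy/cost trade-off the abstract attributes to the choice of coagulation rule.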

  4. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang

    2015-01-01

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and to reduce the number of time loops as far as possible. Three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate, by single-looping over all particles, the maximum coagulation rates used in the acceptance-rejection process, while the mean time step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is thereby reduced to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the many cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup (compared with CPU computation, the speedup of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against the benchmark solution of a discrete-sectional method. The simulation results show that the comprehensive approach attains a very favorable improvement in cost without sacrificing computational accuracy.

  5. Monte Carlo Applications in Fusion Neutronics

    NASA Astrophysics Data System (ADS)

    Fischer, U.

    An overview is given of Monte Carlo applications in fusion neutronics as being addressed at Forschungszentrum Karlsruhe in the framework of the European Fusion Technology Programme. Main applications are in the area of design and development of components for future fusion reactors such as ITER, the International Thermonuclear Experimental Reactor, and a Demo-type European tokamak reactor, further in the development of intense neutron sources for material research, and, finally, in the conduction and analyses of integral 14 MeV neutron experiments. The overview includes a short review of methods, tools and data and addresses key issues of Monte Carlo applications in fusion neutronics.

  6. Fast quantum Monte Carlo on a GPU

    NASA Astrophysics Data System (ADS)

    Lutsyshyn, Y.

    2015-02-01

    We present a scheme for the parallelization of quantum Monte Carlo method on graphical processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the Kepler architecture K20 GPU. Special optimization was developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs. Kepler-specific optimization is discussed.

  7. Monte Carlo evaluation of thermal desorption rates

    SciTech Connect

    Adams, J.E.; Doll, J.D.

    1981-05-01

    The recently reported method for computing thermal desorption rates via a Monte Carlo evaluation of the appropriate transition state theory expression (J. E. Adams and J. D. Doll, J. Chem. Phys. 74, 1467 (1980)) is extended, by the use of importance sampling, so as to generate the complete temperature dependence in a single calculation. We also describe a straightforward means of calculating the activation energy for the desorption process within the same Monte Carlo framework. The result obtained in this way represents, for the case of a simple desorptive event, an upper bound to the true value.
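The "complete temperature dependence from a single calculation" trick is a standard importance-sampling identity: sample configurations once at a reference inverse temperature beta0, then reweight each sample by exp(-(beta - beta0)*E) to estimate averages at any other beta. A minimal sketch in a much simpler setting than the desorption problem (the harmonic energy E = x^2/2 is an assumption made so the answer, <x^2> = 1/beta, is known analytically):

```python
import numpy as np

rng = np.random.default_rng(2)

beta0 = 1.0   # reference inverse temperature at which we actually sample
# Boltzmann samples from exp(-beta0 * x^2 / 2), i.e. N(0, 1/beta0)
x = rng.normal(0.0, 1.0 / np.sqrt(beta0), size=400_000)
energy = 0.5 * x**2

def reweighted_x2(beta):
    """<x^2> at inverse temperature beta, from the single beta0 run,
    via importance weights exp(-(beta - beta0) * E)."""
    w = np.exp(-(beta - beta0) * energy)
    return np.sum(w * x**2) / np.sum(w)

est = reweighted_x2(2.0)   # analytic value is 1/beta = 0.5
```

The same reweighting degrades as beta moves far from beta0 (the effective sample size collapses), which is why a well-chosen sampling distribution matters in practice.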

  8. Monte Carlo simulation of an expanding gas

    NASA Technical Reports Server (NTRS)

    Boyd, Iain D.

    1989-01-01

    By application of simple computer graphics techniques, the statistical performance of two Monte Carlo methods used in the simulation of rarefied gas flows is assessed. Specifically, two direct simulation Monte Carlo (DSMC) methods developed by Bird and Nanbu are considered. The graphics techniques are found to be of great benefit in the reduction and interpretation of the large volume of data generated, thus enabling important conclusions to be drawn about the simulation results. In particular, it is found that the method of Nanbu suffers from increased statistical fluctuations, thereby prohibiting its use in the solution of practical problems.

  9. The Rational Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Clark, Michael

    2006-12-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.
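RHMC proper uses optimal rational approximations (Remez/Zolotarev) evaluated in partial-fraction form with multi-shift solvers. As a rough stand-in for the core idea, the sketch below substitutes a Chebyshev polynomial fit of x^(-1/2) over the spectral interval and applies it to a small dense symmetric matrix (an assumed toy setup, not a lattice Dirac operator):

```python
import numpy as np

rng = np.random.default_rng(3)

# Symmetric positive-definite test matrix with spectrum inside [0.2, 1].
q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
lam = np.linspace(0.2, 1.0, 6)
m = q @ np.diag(lam) @ q.T

# Fit 1/sqrt(x) on the spectral interval with a degree-14 Chebyshev polynomial.
xs = np.linspace(0.2, 1.0, 400)
cheb = np.polynomial.Chebyshev.fit(xs, 1.0 / np.sqrt(xs), deg=14)

# Compare cheb(M) against the exact matrix inverse square root M^(-1/2).
w, v = np.linalg.eigh(m)
approx = v @ np.diag(cheb(w)) @ v.T
exact = v @ np.diag(1.0 / np.sqrt(w)) @ v.T
err = np.max(np.abs(approx - exact))
```

The quality of any such approximation depends on how tightly the spectral interval brackets the operator's eigenvalues, which is also why RHMC implementations monitor the extremal eigenvalues of the fermion matrix.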

  10. Linked coupled cluster Monte Carlo

    NASA Astrophysics Data System (ADS)

    Franklin, R. S. T.; Spencer, J. S.; Zoccante, A.; Thom, A. J. W.

    2016-01-01

    We consider a new formulation of the stochastic coupled cluster method in terms of the similarity transformed Hamiltonian. We show that improvement in the granularity with which the wavefunction is represented results in a reduction in the critical population required to correctly sample the wavefunction for a range of systems and excitation levels and hence leads to a substantial reduction in the computational cost. This development has the potential to substantially extend the range of the method, enabling it to be used to treat larger systems with excitation levels not easily accessible with conventional deterministic methods.

  11. Linked coupled cluster Monte Carlo.

    PubMed

    Franklin, R S T; Spencer, J S; Zoccante, A; Thom, A J W

    2016-01-28

    We consider a new formulation of the stochastic coupled cluster method in terms of the similarity transformed Hamiltonian. We show that improvement in the granularity with which the wavefunction is represented results in a reduction in the critical population required to correctly sample the wavefunction for a range of systems and excitation levels and hence leads to a substantial reduction in the computational cost. This development has the potential to substantially extend the range of the method, enabling it to be used to treat larger systems with excitation levels not easily accessible with conventional deterministic methods. PMID:26827206

  12. Structural Reliability and Monte Carlo Simulation.

    ERIC Educational Resources Information Center

    Laumakis, P. J.; Harlow, G.

    2002-01-01

    Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
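The comparison the record describes, Monte Carlo failure probability against a closed-form mechanics result, can be sketched generically with a stress-strength model (the normal load and capacity parameters below are assumptions for illustration, not the paper's boom structure):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)

n = 1_000_000
capacity = rng.normal(10.0, 1.0, n)   # member strength (assumed distribution)
load = rng.normal(7.0, 1.5, n)        # applied stress (assumed distribution)
p_fail_mc = np.mean(load > capacity)  # Monte Carlo failure probability

# Closed-form check: load - capacity is normal, so P(load > capacity) = Phi(z).
z = (7.0 - 10.0) / sqrt(1.0**2 + 1.5**2)
p_fail_exact = 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

The point made in the abstract carries over: once the limit state is more complicated than a single linear margin, the simulation changes very little while the analytic route becomes intractable.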

  13. A comparison of Monte Carlo generators

    SciTech Connect

    Golan, Tomasz

    2015-05-15

    A comparison of GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π{sup +} two-dimensional energy vs cosine distribution.

  14. Coded aperture optimization using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Martineau, A.; Rocchisani, J. M.; Moretti, J. L.

    2010-04-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.

  15. Krylov-Projected Quantum Monte Carlo Method.

    PubMed

    Blunt, N S; Alavi, Ali; Booth, George H

    2015-07-31

    We present an approach to the calculation of arbitrary spectral, thermal, and excited state properties within the full configuration interaction quantum Monte Carlo framework. This is achieved via an unbiased projection of the Hamiltonian eigenvalue problem into a space of stochastically sampled Krylov vectors, thus enabling the calculation of real-frequency spectral and thermal properties and avoiding explicit analytic continuation. We use this approach to calculate temperature-dependent properties and one- and two-body spectral functions for various Hubbard models, as well as isolated excited states in ab initio systems. PMID:26274406
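The underlying projection step, Rayleigh-Ritz in a Krylov space span{v, Hv, H^2 v, ...}, can be sketched deterministically (the paper samples these vectors stochastically; the random dense "Hamiltonian" with a well-separated ground state is an assumption for the sake of a clean check):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy symmetric "Hamiltonian": ground state at -5, well separated from [0, 1].
q0, _ = np.linalg.qr(rng.normal(size=(40, 40)))
lam = np.concatenate(([-5.0], np.linspace(0.0, 1.0, 39)))
h = q0 @ np.diag(lam) @ q0.T

def krylov_ground_state(h, v0, k=12):
    """Lowest Ritz value from Rayleigh-Ritz in the Krylov space
    span{v0, Hv0, ..., H^(k-1)v0}, built with full re-orthogonalization
    for numerical stability."""
    vs = [v0 / np.linalg.norm(v0)]
    for _ in range(k - 1):
        w = h @ vs[-1]
        for u in vs:
            w = w - (u @ w) * u        # orthogonalize against the basis
        nw = np.linalg.norm(w)
        if nw < 1e-12:                 # Krylov space exhausted
            break
        vs.append(w / nw)
    q = np.column_stack(vs)
    return float(np.linalg.eigvalsh(q.T @ h @ q)[0])

e_krylov = krylov_ground_state(h, rng.normal(size=40))
e_exact = float(np.linalg.eigvalsh(h)[0])
```

The lowest Ritz value is a variational upper bound on the true ground-state energy, and for a well-separated extremal eigenvalue it converges in far fewer iterations than the matrix dimension.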

  16. A stochastic model and Monte Carlo algorithm for fluctuation-induced H2 formation on the surface of interstellar dust grains

    NASA Astrophysics Data System (ADS)

    Sabelfeld, K. K.

    2015-09-01

    A stochastic algorithm for simulation of fluctuation-induced kinetics of H2 formation on grain surfaces is suggested as a generalization of the technique developed in our recent studies [1], where it was used to describe the annihilation of spatially separate electrons and holes in a disordered semiconductor. The stochastic model is based on the spatially inhomogeneous, nonlinear integro-differential Smoluchowski equations with random source term. In this paper we derive the general system of Smoluchowski type equations for the formation of H2 from two hydrogen atoms on the surface of interstellar dust grains with physisorption and chemisorption sites. We focus in this study on the spatial distribution, and numerically investigate the segregation in the case of a source with a continuous generation in time and randomly distributed in space. The stochastic particle method presented is based on a probabilistic interpretation of the underlying process as a stochastic Markov process of an interacting particle system in discrete but randomly progressed time instances. The segregation is analyzed through the correlation analysis of the vector random field of concentrations, which appears to be isotropic in space and stationary in time.

  17. Extension of the fully coupled Monte Carlo/S sub N response matrix method to problems including upscatter and fission

    SciTech Connect

    Baker, R.S.; Filippone, W.F. . Dept. of Nuclear and Energy Engineering); Alcouffe, R.E. )

    1991-01-01

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new way of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S{sub N} calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.

  18. MontePython: Implementing Quantum Monte Carlo using Python

    NASA Astrophysics Data System (ADS)

    Nilsen, Jon Kristian

    2007-11-01

    We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which to apply QMC, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and we describe how to implement these methods in pure C++ and C++/Python. Furthermore we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible. Program summaryProgram title: MontePython Catalogue identifier: ADZP_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 49 519 No. of bytes in distributed program, including test data, etc.: 114 484 Distribution format: tar.gz Programming language: C++, Python Computer: PC, IBM RS6000/320, HP, ALPHA Operating system: LINUX Has the code been vectorised or parallelized?: Yes, parallelized with MPI Number of processors used: 1-96 RAM: Depends on physical system to be simulated Classification: 7.6; 16.1 Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb Solution method: Quantum Monte Carlo Running time: 225 min with 20 particles (with 4800 walkers moved in 1750 time steps) on 1 AMD Opteron TM Processor 2218 processor; Production run for, e.g., 200 particles takes around 24 hours on 32 such processors.
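The variational Monte Carlo algorithm the record mentions fits in a few lines of pure Python: Metropolis sampling of |psi|^2 plus averaging of the local energy E_L = (H psi)/psi. A minimal 1D harmonic-oscillator sketch (the Gaussian trial function and the parameter alpha = 0.4 are assumptions; this is not MontePython itself):

```python
import numpy as np

rng = np.random.default_rng(6)

alpha = 0.4   # trial-wavefunction parameter (exact ground state is alpha = 0.5)

def local_energy(x):
    """E_L = (H psi)/psi for psi = exp(-alpha x^2), H = -(1/2) d^2/dx^2 + x^2/2."""
    return alpha + (0.5 - 2.0 * alpha**2) * x**2

def vmc(n_steps=200_000, step=1.0):
    x, energies = 0.0, []
    for _ in range(n_steps):
        trial = x + rng.uniform(-step, step)
        # Metropolis accept/reject on |psi|^2 = exp(-2 alpha x^2)
        if rng.random() < np.exp(-2.0 * alpha * (trial**2 - x**2)):
            x = trial
        energies.append(local_energy(x))
    return np.mean(energies)

e_vmc = vmc()
e_analytic = alpha / 2.0 + 1.0 / (8.0 * alpha)   # 0.5125 for alpha = 0.4
```

Minimizing the estimate over alpha recovers the exact ground-state energy 0.5 at alpha = 0.5, where the local energy becomes constant and the variance vanishes.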

  19. Markov Chain Monte-Carlo Models of Starburst Clusters

    NASA Astrophysics Data System (ADS)

    Melnick, Jorge

    2015-01-01

    There are a number of stochastic effects that must be considered when comparing models to observations of starburst clusters: the IMF is never fully populated; the stars can never be strictly coeval; stars rotate and their photometric properties depend on orientation; a significant fraction of massive stars are in interacting binaries; and the extinction varies from star to star. The probability distributions of each of these effects are not a priori known, but must be extracted from the observations. Markov Chain Monte-Carlo methods appear to provide the best statistical approach. Here I present an example of stochastic age effects upon the upper mass limit of the IMF of the Arches cluster as derived from near-IR photometry.

  20. Improved diffusion Monte Carlo and the Brownian fan

    NASA Astrophysics Data System (ADS)

    Weare, J.; Hairer, M.

    2012-12-01

    Diffusion Monte Carlo (DMC) is a workhorse of stochastic computing. It was invented forty years ago as the central component in a Monte Carlo technique for estimating various characteristics of quantum mechanical systems. Since then it has been applied in a huge number of fields, often as a central component in sequential Monte Carlo techniques (e.g. the particle filter). DMC computes averages of some underlying stochastic dynamics weighted by a functional of the path of the process. The weight functional could represent the potential term in a Feynman-Kac representation of a partial differential equation (as in quantum Monte Carlo) or it could represent the likelihood of a sequence of noisy observations of the underlying system (as in particle filtering). DMC alternates between an evolution step, in which a collection of samples of the underlying system are evolved for some short time interval, and a branching step, in which, according to the weight functional, some samples are copied and some samples are eliminated. Unfortunately, for certain choices of the weight functional, DMC fails to have a meaningful limit as one decreases the evolution time interval between branching steps. We propose a modification of the standard DMC algorithm. The new algorithm has a lower variance per workload, regardless of the regime considered. In particular, it makes it feasible to use DMC in situations where the "naive" generalization of the standard algorithm would be impractical, due to an exponential explosion of its variance. We numerically demonstrate the effectiveness of the new algorithm on a standard rare event simulation problem (probability of an unlikely transition in a Lennard-Jones cluster), as well as a high-frequency data assimilation problem. We then provide a detailed heuristic explanation of why, in the case of rare event simulation, the new algorithm is expected to converge to a limiting process as the underlying stepsize goes to 0. This is shown rigorously in the simplest possible situation of a random walk, biased by a linear potential. The resulting limiting object, which we call the "Brownian fan", is a very natural new mathematical object of independent interest.
    (Figure: reconstruction of a trajectory of stochastic Lorenz 63, solid lines, by DMC, the standard particle filter, dotted lines, and by the modified DMC algorithm.)
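The branching step described above, copying and eliminating walkers so that the expected number of copies equals the weight, is commonly implemented by stochastic rounding. A minimal sketch of that one step (standard practice, not the paper's modified algorithm):

```python
import numpy as np

rng = np.random.default_rng(7)

def branch(walkers, weights):
    """Stochastic-rounding branching: each walker is copied
    int(w + u) times with u ~ U(0, 1), so E[copies] = w exactly;
    walkers with small weights are eliminated with high probability."""
    out = []
    for x, w in zip(walkers, weights):
        n_copies = int(w + rng.random())
        out.extend([x] * n_copies)
    return out

# Check the unbiasedness of the copy count for a single weight.
w = 1.7
copies = [int(w + rng.random()) for _ in range(100_000)]
mean_copies = np.mean(copies)
```

Because the copy count is an unbiased, integer-valued version of the weight, the ensemble average is preserved while the population size fluctuates, which is exactly the behavior whose small-timestep limit the abstract analyzes.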

  1. Monte Carlo simulations of fluid vesicles

    NASA Astrophysics Data System (ADS)

    Sreeja, K. K.; Ipsen, John H.; Kumar, P. B. Sunil

    2015-07-01

    Lipid vesicles are closed two dimensional fluid surfaces that are studied extensively as model systems for understanding the physical properties of biological membranes. Here we review the recent developments in the Monte Carlo techniques for simulating fluid vesicles and discuss some of their applications. The technique, which treats the membrane as an elastic sheet, is most suitable for the study of large scale conformations of membranes. The model can be used to study vesicles with fixed and varying topologies. Here we focus on the case of multi-component membranes with the local lipid and protein composition coupled to the membrane curvature leading to a variety of shapes. The phase diagram is more intriguing in the case of fluid vesicles having an in-plane orientational order that induce anisotropic directional curvatures. Methods to explore the steady state morphological structures due to active flux of materials have also been described in the context of Monte Carlo simulations.

  2. Ex Post Facto Monte Carlo Variance Reduction

    SciTech Connect

    Booth, Thomas E.

    2004-11-15

    The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.

  3. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.

  4. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

  5. Monte Carlo Simulation of THz Multipliers

    NASA Technical Reports Server (NTRS)

    East, J.; Blakey, P.

    1997-01-01

    Schottky barrier diode frequency multipliers are critical components in submillimeter and THz space-based Earth observation systems. As the operating frequency of these multipliers has increased, the agreement between design predictions and experimental results has become poorer. The multiplier design is usually based on a nonlinear model using a form of harmonic balance and a model for the Schottky barrier diode. Conventional voltage dependent lumped element models do a poor job of predicting THz frequency performance. This paper will describe a large signal Monte Carlo simulation of Schottky barrier multipliers. The simulation is a time dependent particle field Monte Carlo simulation with ohmic and Schottky barrier boundary conditions included that has been combined with a fixed point solution for the nonlinear circuit interaction. The results in the paper will point out some important time constants in varactor operation and will describe the effects of current saturation and nonlinear resistances on multiplier operation.

  6. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the value of validation by Kovats retention indices and the root mean square error of prediction decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc. PMID:26226927
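The Monte Carlo cross-prediction idea, repeated random train/test splits with per-sample held-out errors, can be sketched with an ordinary least-squares model and one injected outlier (the linear data, split sizes, and 500 repetitions are all illustrative assumptions, not the paper's chromatographic datasets):

```python
import numpy as np

rng = np.random.default_rng(8)

# Linear data y = 1 + 2x + noise, with one corrupted response at index 10.
n = 60
x = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
y = x @ np.array([1.0, 2.0]) + rng.normal(0, 0.05, n)
y[10] += 3.0   # injected outlier

err_sum = np.zeros(n)
err_cnt = np.zeros(n)
for _ in range(500):                       # Monte Carlo resampling of splits
    train = rng.choice(n, size=40, replace=False)
    test = np.setdiff1d(np.arange(n), train)
    beta, *_ = np.linalg.lstsq(x[train], y[train], rcond=None)
    resid = np.abs(y[test] - x[test] @ beta)
    err_sum[test] += resid
    err_cnt[test] += 1

mean_err = err_sum / err_cnt               # per-sample held-out error profile
suspect = int(np.argmax(mean_err))
```

Samples whose held-out error stays large across many random splits are flagged as outliers; the enhancement described in the abstract additionally separates determinate normal samples from dubious ones before this analysis.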

  7. Monte Carlo simulations on SIMD computer architectures

    SciTech Connect

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-03-01

    Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
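The lattice-partitioning idea maps naturally onto a checkerboard decomposition: sites of one color have no neighbors of the same color, so a whole sublattice can be updated simultaneously, data-parallel style. A vectorized numpy sketch of that decomposition for the nearest-neighbor Ising model (numpy broadcasting stands in for the SIMD processor array; the 32x32 lattice and T = 1.5 are assumptions):

```python
import numpy as np

rng = np.random.default_rng(9)

n, beta = 32, 1.0 / 1.5          # below the critical temperature T_c ~ 2.27
spins = rng.choice([-1, 1], size=(n, n))
yy, xx = np.indices((n, n))
masks = [((yy + xx) % 2 == p) for p in (0, 1)]   # checkerboard sublattices

def energy(s):
    """Nearest-neighbor Ising energy with periodic boundaries."""
    return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

def sweep(s):
    """One Metropolis sweep: each sublattice is updated in parallel,
    which is valid because same-color sites are never neighbors."""
    for mask in masks:
        nbr = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
               np.roll(s, 1, 1) + np.roll(s, -1, 1))
        d_e = 2.0 * s * nbr                      # energy change of flipping each spin
        flip = mask & (rng.random((n, n)) < np.exp(-beta * d_e))
        s[flip] *= -1
    return s

e0 = energy(spins)
for _ in range(300):
    sweep(spins)
e1 = energy(spins)                # ordering: energy drops well below e0
```

On an actual SIMD machine each processor owns a block of the lattice and only boundary spins need inter-processor communication, which is the optimization target the abstract describes.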

  8. Monte Carlo modeling of eye iris color

    NASA Astrophysics Data System (ADS)

    Koblova, Ekaterina V.; Bashkatov, Alexey N.; Dolotov, Leonid E.; Sinichkin, Yuri P.; Kamenskikh, Tatyana G.; Genina, Elina A.; Tuchin, Valery V.

    2007-05-01

    Based on the presented two-layer eye iris model, the iris diffuse reflectance has been calculated by Monte Carlo technique in the spectral range 400-800 nm. The diffuse reflectance spectra have been recalculated in L*a*b* color coordinate system. Obtained results demonstrated that the iris color coordinates (hue and chroma) can be used for estimation of melanin content in the range of small melanin concentrations, i.e. for estimation of melanin content in blue and green eyes.

  9. The CCFM Monte Carlo generator CASCADE

    NASA Astrophysics Data System (ADS)

    Jung, H.

    2002-02-01

    CASCADE is a full hadron level Monte Carlo event generator for ep, γp and p p¯ processes, which uses the CCFM evolution equation for the initial state cascade in a backward evolution approach supplemented with off-shell matrix elements for the hard scattering. A detailed program description is given, with emphasis on parameters the user wants to change and common block variables which completely specify the generated events.

  10. Accelerated Monte Carlo by Embedded Cluster Dynamics

    NASA Astrophysics Data System (ADS)

    Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.

    1991-07-01

    We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and Wolff are summarized and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model and numerical simulations presented for d=2, d=3 and mean field theory lattices.

  11. Monte Carlo simulation of Alaska wolf survival

    NASA Astrophysics Data System (ADS)

    Feingold, S. J.

    1996-02-01

    Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.
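
    The Penna model referenced above represents each animal as a bit-string genome whose set bits are deleterious mutations that switch on at a given age. A minimal sketch under assumed parameters (the hunting probability, mutation threshold, reproduction age, and carrying capacity below are our illustrative choices, not the paper's):

```python
import random

def penna_step(pop, genome_bits=32, threshold=3, mut_rate=1,
               birth_age=8, hunt_prob=0.05, capacity=10_000, rng=random):
    """One time step of a minimal Penna bit-string aging model.

    Each individual is (age, genome); bit i set means a deleterious
    mutation switching on at age i. Death occurs when the number of
    active mutations reaches `threshold`, by Verhulst crowding, or by
    hunting (probability `hunt_prob`, our stand-in for hunting pressure).
    """
    new_pop = []
    for age, genome in pop:
        age += 1
        if age >= genome_bits:
            continue                      # maximum age reached
        active = bin(genome & ((1 << age) - 1)).count("1")
        if active >= threshold:
            continue                      # genetic death
        if rng.random() < len(pop) / capacity:
            continue                      # Verhulst (crowding) death
        if rng.random() < hunt_prob:
            continue                      # hunting death
        new_pop.append((age, genome))
        if age >= birth_age:              # reproduction with one new mutation
            child = genome
            for _ in range(mut_rate):
                child |= 1 << rng.randrange(genome_bits)
            new_pop.append((0, child))
    return new_pop

rng = random.Random(1)
pop = [(0, 0) for _ in range(1000)]
for _ in range(200):
    pop = penna_step(pop, rng=rng)
print(len(pop))
```

Raising `hunt_prob` (or adding an extra mortality term for social disruption, as the abstract describes) pushes the population toward extinction.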

  12. Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy

    NASA Astrophysics Data System (ADS)

    Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James

    2012-03-01

    Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis, and dermis) as well as laterally asymmetric features (e.g., melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.

  13. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.; Jarrell, M.

    1990-01-01

    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

  14. An introduction to Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Walter, J.-C.; Barkema, G. T.

    2015-01-01

    Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest-neighbor interactions that is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous-time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins with the so-called Worm algorithm. We conclude with a discussion of dynamical effects such as thermalization and correlation time.
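
    The Metropolis algorithm mentioned above can be sketched for the 2D Ising model in a few lines (J = 1, periodic boundaries; the lattice size and the two temperatures are illustrative choices):

```python
import math
import random

def metropolis_ising(n=16, beta=0.3, sweeps=200, seed=0):
    """Single-spin-flip Metropolis for the 2D Ising model (J = 1, periodic
    boundaries). A flip with energy change dE is accepted with probability
    min(1, exp(-beta*dE)), which satisfies detailed balance."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps * n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nb = (s[(i + 1) % n][j] + s[(i - 1) % n][j]
              + s[i][(j + 1) % n] + s[i][(j - 1) % n])
        dE = 2 * s[i][j] * nb           # energy change of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i][j] = -s[i][j]
    m = sum(map(sum, s)) / (n * n)      # instantaneous magnetization per spin
    return s, m

_, m_hot = metropolis_ising(beta=0.1)   # well above the critical temperature
_, m_cold = metropolis_ising(beta=1.0)  # well below it
print(m_hot, m_cold)
```

At high temperature the magnetization fluctuates around zero; near the critical point this local update suffers critical slowing down, which is exactly where the cluster algorithms of the abstract pay off.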

  15. Numerical reproducibility for implicit Monte Carlo simulations

    SciTech Connect

    Cleveland, M.; Brunner, T.; Gentile, N.

    2013-07-01

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. In [1], a way of eliminating this roundoff error using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches required a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double-precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time step. (authors)
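
    The core difficulty and the integer-tally remedy can be illustrated directly: floating-point sums depend on operand order, while sums of fixed-point integer tallies do not. The scale factor below is an assumed illustrative choice, not the one used in the paper:

```python
import random

def float_sum(values):
    """Naive left-to-right double-precision sum (order-dependent)."""
    s = 0.0
    for v in values:
        s += v
    return s

def integer_tally_sum(values, scale=2**32):
    """Order-independent sum: round each term to a fixed-point integer.

    Integer addition is associative, so any summation order (e.g. across
    parallel ranks) gives bit-identical results, at the cost of the
    per-term rounding done by int(round(...)).
    """
    total = sum(int(round(v * scale)) for v in values)
    return total / scale

rng = random.Random(42)
vals = [rng.uniform(0, 1) * 10.0 ** rng.randint(-8, 8) for _ in range(10_000)]
shuffled = vals[:]
rng.shuffle(shuffled)

print(float_sum(vals) == float_sum(shuffled))                  # often False
print(integer_tally_sum(vals) == integer_tally_sum(shuffled))  # True
```

The integer tally trades a bounded, deterministic rounding error for exact reproducibility under any summation order.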

  16. Monte Carlo dose mapping on deforming anatomy

    PubMed Central

    Zhong, Hualiang; Siebers, Jeffrey V

    2010-01-01

    This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Different from dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom and compared for clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM mapped the dose exactly the same as its analytic solution, while EBDI and TDI had average dose errors of 2.5% and 6.0%. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3% in the lung patient's entire dose region, respectively. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA. PMID:19741278

  17. Monte Carlo dose mapping on deforming anatomy.

    PubMed

    Zhong, Hualiang; Siebers, Jeffrey V

    2009-10-01

    This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Different from dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom and compared for clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM mapped the dose exactly the same as its analytic solution, while EBDI and TDI had average dose errors of 2.5% and 6.0%. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3% in the lung patient's entire dose region, respectively. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA. PMID:19741278

  18. Monte Carlo dose mapping on deforming anatomy

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Siebers, Jeffrey V.

    2009-10-01

    This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Different from dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom and compared for clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM mapped the dose exactly the same as its analytic solution, while EBDI and TDI had average dose errors of 2.5% and 6.0%. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3% in the lung patient's entire dose region, respectively. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA.
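
    The key idea of EMCM, mapping energy and mass separately and forming dose only afterwards, can be shown in a deliberately simplified toy: a 1-D voxel grid with a nearest-voxel index map standing in for a real DVF. This is our illustration of the principle, not the paper's subvoxel congruent mapping:

```python
import numpy as np

def emcm_map(energy, mass, dvf_index):
    """Toy energy/mass congruent mapping on a 1-D voxel grid.

    `dvf_index[i]` gives the reference-image voxel that source voxel i
    deforms into (a drastic simplification of a real displacement vector
    field). Energy and mass are accumulated separately and the dose is
    formed only afterwards, so mass rearrangement is handled correctly --
    unlike direct interpolation of dose values.
    """
    e_ref = np.zeros(energy.shape)
    m_ref = np.zeros(mass.shape)
    np.add.at(e_ref, dvf_index, energy)   # deposit energy into reference voxels
    np.add.at(m_ref, dvf_index, mass)     # deposit mass the same way
    with np.errstate(invalid="ignore", divide="ignore"):
        dose = np.where(m_ref > 0, e_ref / m_ref, 0.0)
    return dose

# two source voxels compress into one reference voxel
energy = np.array([1.0, 1.0, 4.0])
mass = np.array([1.0, 1.0, 2.0])
dvf = np.array([0, 0, 1])
print(emcm_map(energy, mass, dvf))   # dose: [1.0, 2.0, 0.0]
```

Because dose = energy/mass is formed after both quantities are mapped, voxels that merge under the deformation receive the mass-weighted dose rather than a naively interpolated one.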

  19. Quantum Monte Carlo study of first-row atoms using transcorrelated variational Monte Carlo trial functions.

    PubMed

    Prasad, Rajendra; Umezawa, Naoto; Domin, Dominik; Salomon-Ferrer, Romelia; Lester, William A

    2007-04-28

    The effect of using the transcorrelated variational Monte Carlo (TC-VMC) approach to construct a trial function for fixed node diffusion Monte Carlo (DMC) energy calculations has been investigated for the first-row atoms, Li to Ne. The computed energies are compared with fixed node DMC energies obtained using trial functions constructed from Hartree-Fock and density functional levels of theory. Despite major VMC energy improvement with TC-VMC trial functions, no improvement in DMC energy was observed using these trial functions for the first-row atoms studied. The implications of these results on the nodes of the trial wave functions are discussed. PMID:17477591

  20. A fully coupled Monte Carlo/discrete ordinates solution to the neutron transport equation. Final report

    SciTech Connect

    Filippone, W.L.; Baker, R.S.

    1990-12-31

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S{sub N} region. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S{sub N} calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.

  1. Status of Monte-Carlo Event Generators

    SciTech Connect

    Hoeche, Stefan (SLAC)

    2011-08-11

    Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.

  2. Exascale Monte Carlo R&D

    SciTech Connect

    Marcus, Ryan C.

    2012-07-24

    Overview of this presentation is (1) Exascale computing - different technologies, getting there; (2) high-performance proof-of-concept MCMini - features and results; and (3) OpenCL toolkit - Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware agnostic tool. MCMini demonstrates the possibility for GPGPU-based Monte Carlo methods - it shows great scaling for HPC application and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  3. Monte Carlo simulation of the enantioseparation process

    NASA Astrophysics Data System (ADS)

    Bustos, V. A.; Acosta, G.; Gomez, M. R.; Pereyra, V. D.

    2012-09-01

    By means of Monte Carlo simulation, a study of enantioseparation by capillary electrophoresis has been carried out. A simplified system consisting of two enantiomers R and S and a chiral selector C, which reacts with the enantiomers to form complexes RC (SC), has been considered. The dependence of Δμ (enantioseparation) on the concentration of the chiral selector and on temperature has been analyzed by simulation. The effects of the binding constant and the charge of the complexes are also analyzed. The results are qualitatively satisfactory, despite the simplicity of the model.

  4. A Monte Carlo algorithm for degenerate plasmas

    SciTech Connect

    Turrell, A.E.; Sherlock, M.; Rose, S.J.

    2013-09-15

    A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi-Dirac distribution function, and scattering is via a Pauli-blocked binary collision approximation. The algorithm is tested against degenerate electron-ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
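
    Two ingredients of the abstract, initialisation from the Fermi-Dirac distribution and Pauli-blocked acceptance of collisions, can be sketched as follows. Units with k_B = 1 are used, and the chemical potential, temperature, and energy cutoff are assumed illustrative values, not the paper's:

```python
import math
import random

def sample_fermi_dirac(n, mu=1.0, T=0.1, emax=5.0, rng=random):
    """Draw particle energies ~ g(E) f(E) with g(E) ∝ sqrt(E) (3-D gas)
    and f(E) = 1/(exp((E-mu)/T)+1), by simple rejection sampling."""
    def target(E):
        return math.sqrt(E) / (math.exp((E - mu) / T) + 1.0)
    # crude constant envelope over [0, emax], with a small safety margin
    bound = 1.05 * max(target(emax * k / 200.0) for k in range(1, 201))
    out = []
    while len(out) < n:
        E = rng.uniform(0.0, emax)
        if rng.uniform(0.0, bound) < target(E):
            out.append(E)
    return out

def pauli_blocked(E_final, mu=1.0, T=0.1, rng=random):
    """Pauli blocking: accept a collision whose outgoing energy is E_final
    with probability 1 - f(E_final); deep inside the Fermi sea nearly all
    collisions are rejected."""
    f = 1.0 / (math.exp((E_final - mu) / T) + 1.0)
    return rng.random() < 1.0 - f

rng = random.Random(7)
energies = sample_fermi_dirac(2000, rng=rng)
frac_below_mu = sum(e < 1.0 for e in energies) / len(energies)
print(frac_below_mu)
```

At T well below mu, most sampled particles sit below the Fermi level, and collisions scattering into those occupied states are almost always blocked.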

  5. Reverse Monte Carlo simulation of liquid water

    NASA Astrophysics Data System (ADS)

    Jedlovszky, P.; Bakó, I.; Pálinkás, G.

    1994-04-01

    Reverse Monte Carlo simulation of liquid water has been carried out on the basis of partial pair correlation functions determined by Soper and Phillips. The configurations obtained from this simulation were analyzed in detail. The results were compared with those obtained from molecular dynamics (MD) simulation in order to interpret the differences between the experimental and the MD partial pair correlation function sets. By evaluating the experimental data we found a more distorted geometry of the hydrogen bonds, and also that a significant fraction of the nearest-neighbour molecules distributes randomly rather than tetrahedrally around a central water molecule.
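
    The reverse Monte Carlo acceptance rule itself is simple: moves are accepted whenever they reduce the χ² misfit to the measured data, and otherwise with a Metropolis-like probability. A toy version fitting a particle histogram to a synthetic "experimental" one (all parameters and the 1-D setting are our illustrative choices, far simpler than fitting pair correlation functions):

```python
import math
import random

def rmc_fit(target_hist, n_particles=500, steps=20_000, sigma=1.0, seed=0):
    """Toy reverse Monte Carlo: move particles on [0, 1) so that their
    histogram matches a 'measured' one, accepting moves with the usual
    RMC rule exp(-(chi2_new - chi2_old) / (2*sigma**2))."""
    rng = random.Random(seed)
    nbins = len(target_hist)
    xs = [rng.random() for _ in range(n_particles)]
    hist = [0] * nbins
    for x in xs:
        hist[int(x * nbins)] += 1

    def chi2(h):
        return sum((a - b) ** 2 for a, b in zip(h, target_hist))

    c = chi2(hist)
    for _ in range(steps):
        i = rng.randrange(n_particles)
        old, new = xs[i], rng.random()
        bo, bn = int(old * nbins), int(new * nbins)
        if bo == bn:
            continue
        hist[bo] -= 1; hist[bn] += 1          # trial move
        c_new = chi2(hist)
        if c_new <= c or rng.random() < math.exp(-(c_new - c) / (2 * sigma**2)):
            xs[i], c = new, c_new             # accept
        else:
            hist[bo] += 1; hist[bn] -= 1      # undo rejected move
    return hist, c

target = [100, 150, 50, 120, 80]   # synthetic 'experimental' histogram (sums to 500)
hist, c_final = rmc_fit(target)
print(hist, c_final)
```

In a real RMC simulation of water the histogram is replaced by partial pair correlation functions and the moves by particle displacements in 3-D, but the acceptance logic is the same.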

  6. Monte Carlo simulation of reentry plasmas

    NASA Technical Reports Server (NTRS)

    Carlson, Ann B.; Moss, James N.; Hassan, H. A.

    1989-01-01

    Attention is given to the treatment of ionization and plasma effects in the direct simulation Monte Carlo method. The requirements for accurate modeling of reentry plasmas are discussed along with the difficulties these requirements present. The current method for modeling such plasmas is reviewed and an alternative method is presented. Both methods are applied to the flow of a 10 km/s shock wave in air at 0.1 torr; a flowfield directly relevant to the projected aeroassisted orbital transfer vehicle. The results are compared and the differences between the methods are discussed.

  7. Monte Carlo simulation in Fourier space

    NASA Astrophysics Data System (ADS)

    Tröster, Andreas

    2008-07-01

    In the context of solving the long-standing problem of computing Landau-Ginzburg free energies including gradient corrections for the φ4 model, we recently introduced a new Monte Carlo algorithm for lattice spin systems based exclusively on Fourier amplitudes of the underlying spin configurations [A. Tröster, Phys. Rev. B 76 (2007) 012402]. In the present paper we provide additional information on the motivation, main ideas and constructions underlying the algorithm. We also discuss important details of its construction, with emphasis on an analysis of its scaling behavior with system size.

  8. Quantum Monte Carlo calculations for light nuclei

    SciTech Connect

    Wiringa, R.B.

    1997-10-01

    Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A {le} 8 have been made using a realistic Hamiltonian that fits NN scattering data. Results for more than two dozen different (J{sup {pi}}, T) p-shell states, not counting isobaric analogs, have been obtained. The known excitation spectra of all the nuclei are reproduced reasonably well. Density and momentum distributions and various electromagnetic moments and form factors have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.

  9. Monte Carlo simulation for the transport beamline

    SciTech Connect

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.

    2013-07-26

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning in order to optimize shot number and dose delivery.

  10. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. The first is a numerical averaging of the Wetherill formula; the second is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward infinity, while the Hill sphere method results in a severely underestimated probability. We provide a discussion of the reasons for these differences, and we finally present the results of the MOID method in the form of probability maps for the Earth and Mars on their current orbits. These maps show a relatively flat probability distribution, except for the occurrence of two ridges found at small inclinations and for coinciding projectile/target perihelion distances. Conclusions: Our results verify the standard formulae in the general case, away from the singularities. In fact, severe shortcomings are limited to the immediate vicinity of those extreme orbits. On the other hand, the new Monte Carlo methods can be used without excessive consumption of computer time, and the MOID method avoids the problems associated with the other methods. Appendices are available in electronic form at http://www.aanda.org

  11. Monte Carlo simulation for the transport beamline

    NASA Astrophysics Data System (ADS)

    Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.

    2013-07-01

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning in order to optimize shot number and dose delivery.

  12. State-of-the-art Monte Carlo 1988

    SciTech Connect

    Soran, P.D.

    1988-06-28

    Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.

  13. Monte Carlo modeling of spatial coherence: free-space diffraction

    PubMed Central

    Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.

    2008-01-01

    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335
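
    One standard way to synthesize random source realizations with a prescribed spatial coherence function, shown here instead of (and not equivalent to) the paper's Gaussian-copula construction, is to color white complex Gaussian noise with a Cholesky factor of the mutual-intensity matrix:

```python
import numpy as np

def coherent_field_realizations(J, n_real, rng):
    """Draw complex Gaussian field realizations u with <u u^H> = J.

    J must be a Hermitian positive semi-definite mutual-intensity matrix;
    the Cholesky factor L of J = L L^H colors unit-variance white noise.
    """
    npts = J.shape[0]
    # tiny diagonal jitter keeps the Cholesky numerically positive definite
    L = np.linalg.cholesky(J + 1e-8 * np.eye(npts))
    w = (rng.standard_normal((npts, n_real))
         + 1j * rng.standard_normal((npts, n_real))) / np.sqrt(2.0)
    return L @ w

# Gaussian Schell-model-like source: |J_ij| falls off with |x_i - x_j|
x = np.linspace(-1, 1, 20)
Lc = 0.5                                     # assumed coherence length
J = np.exp(-(x[:, None] - x[None, :])**2 / (2 * Lc**2))
u = coherent_field_realizations(J, 20_000, np.random.default_rng(3))
J_est = (u @ u.conj().T) / u.shape[1]        # second-order statistics estimate
print(np.max(np.abs(J_est - J)))             # shrinks as realizations grow
```

Averaging intensity and correlations over many such realizations recovers the prescribed second-order statistics, which is the Monte Carlo side of the comparison in the abstract.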

  14. Importance sampling based direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Vedula, Prakash; Otten, Dustin

    2010-11-01

    We propose a novel and efficient approach, termed importance sampling based direct simulation Monte Carlo (ISDSMC), for prediction of nonequilibrium flows via solution of the Boltzmann equation. Besides leading to a reduction in computational cost, ISDSMC also results in a reduction in statistical scatter compared to conventional direct simulation Monte Carlo (DSMC) and hence appears to be potentially useful for prediction of a variety of flows, especially where the signal-to-noise ratio is small (e.g., microflows). In this particle-in-cell approach, the computational particles are initially assigned weights (or importance) based on constraints on generalized moments of velocity. Solution of the Boltzmann equation is achieved by use of (i) a streaming operator which streams the computational particles and (ii) a collision operator where the representative collision pairs are selected stochastically based on particle weights via an acceptance-rejection algorithm. Performance of the ISDSMC approach is evaluated using analysis of three canonical microflows, namely (i) thermal Couette flow, (ii) velocity-slip Couette flow and (iii) Poiseuille flow. Our results based on ISDSMC indicate good agreement with those obtained from conventional DSMC methods. The potential advantages of this (ISDSMC) approach to granular flows will also be demonstrated using simulations of homogeneous relaxation of a granular gas.
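
    The stochastic selection of representative collision pairs based on particle weights via acceptance-rejection might be sketched as follows (a generic illustration of the selection step only, not the authors' implementation):

```python
import random

def select_collision_pairs(weights, n_pairs, rng=random):
    """Pick collision pairs with probability proportional to the product
    of the two particles' importance weights, by acceptance-rejection
    against the maximum weight."""
    wmax = max(weights)
    n = len(weights)
    pairs = []
    while len(pairs) < n_pairs:
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        # accept the candidate pair with probability w_i * w_j / wmax^2
        if rng.random() < (weights[i] * weights[j]) / (wmax * wmax):
            pairs.append((i, j))
    return pairs

rng = random.Random(5)
w = [0.1] * 50 + [1.0] * 50            # heavily weighted particles should dominate
pairs = select_collision_pairs(w, 5000, rng=rng)
heavy = sum(1 for i, j in pairs if w[i] == 1.0 and w[j] == 1.0)
print(heavy / len(pairs))
```

Pairs of high-weight particles are selected far more often, which is how the weighted representation concentrates collision statistics where the importance lies.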

  15. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E.; Wiringa, Robert B.

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  16. Quantum Monte Carlo methods for nuclear physics

    DOE PAGESBeta

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E.; Wiringa, Robert B.

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  17. Quantum Monte Carlo methods for nuclear physics

    DOE PAGESBeta

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  18. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics; particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  19. Discrete range clustering using Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Chatterji, G. B.; Sridhar, B.

    1993-01-01

    For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
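    The basic-Monte-Carlo-versus-simulated-annealing comparison can be illustrated with a toy relabeling scheme for clustering sparse range points. The cost function (within-cluster scatter), the single-point relabeling move, and the geometric cooling schedule below are illustrative assumptions, not the algorithms compared in the paper:

    ```python
    import math
    import random

    def cluster_cost(points, labels, k):
        """Sum of squared distances of each point to its cluster centroid."""
        cost = 0.0
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if not members:
                continue
            cx = sum(p[0] for p in members) / len(members)
            cy = sum(p[1] for p in members) / len(members)
            cost += sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in members)
        return cost

    def anneal_cluster(points, k, steps=20000, t0=1.0, cooling=0.9995, seed=0):
        """Simulated-annealing clustering: propose relabeling one point,
        accept with the Metropolis rule at a slowly decreasing temperature."""
        rng = random.Random(seed)
        labels = [rng.randrange(k) for _ in points]
        cost = cluster_cost(points, labels, k)
        t = t0
        for _ in range(steps):
            i = rng.randrange(len(points))
            old = labels[i]
            labels[i] = rng.randrange(k)
            new_cost = cluster_cost(points, labels, k)
            if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
                cost = new_cost          # accept the relabeling
            else:
                labels[i] = old          # reject and restore
            t *= cooling
        return labels, cost
    ```

    At high temperature the chain explores freely (plain Monte Carlo behavior); as the temperature decays the acceptance rule becomes greedy and the labels settle into compact groups.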

  20. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.

    2014-10-01

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau-Fokker-Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler-Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  1. Multilevel Monte Carlo simulation of Coulomb collisions

    NASA Astrophysics Data System (ADS)

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-10-01

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau-Fokker-Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler-Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
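    The multilevel idea of combining solutions with varying numbers of timesteps can be illustrated on a scalar SDE. The sketch below estimates E[X_T] for geometric Brownian motion with an Euler-Maruyama discretization, coupling each fine path to a coarse path driven by the same Brownian increments; the model, level count, and per-level sample sizes are illustrative assumptions, not the collision algorithm of the paper:

    ```python
    import math
    import random

    def mlmc_gbm_mean(mu=0.05, sigma=0.2, x0=1.0, T=1.0,
                      levels=4, n0=4000, seed=1):
        """Multilevel Monte Carlo estimate of E[X_T] for geometric Brownian
        motion dX = mu*X dt + sigma*X dW.  Level l uses 2**l Euler-Maruyama
        steps; the estimator is the telescoping sum of level corrections."""
        rng = random.Random(seed)

        def level_sample(l):
            """One coupled sample: a fine path with 2**l steps and, for l > 0,
            a coarse path with half as many steps driven by the same noise."""
            nf = 2 ** l
            dt = T / nf
            xf = x0
            xc = x0
            dw_pair = 0.0
            for n in range(nf):
                dw = rng.gauss(0.0, math.sqrt(dt))
                xf += mu * xf * dt + sigma * xf * dw
                dw_pair += dw
                if l > 0 and n % 2 == 1:     # coarse step uses the summed noise
                    xc += mu * xc * (2 * dt) + sigma * xc * dw_pair
                    dw_pair = 0.0
            return xf - (xc if l > 0 else 0.0)

        est = 0.0
        for l in range(levels + 1):
            n_l = max(n0 // 4 ** l, 100)     # fewer samples on finer levels
            est += sum(level_sample(l) for _ in range(n_l)) / n_l
        return est
    ```

    Because most samples are drawn on the cheap coarse levels while only a few correct the discretization on fine levels, the total cost for a given accuracy is far below that of single-level sampling at the finest timestep. The exact answer here is x0·exp(mu·T) ≈ 1.0513.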

  2. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
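    The k-eigenvalue computation mentioned above is conventionally done by Monte Carlo power iteration over fission generations. The following toy sketch, far simpler than MC++, shows the idea for an infinite homogeneous medium where the analytic answer is k = ν·p_fission; the physics and parameters are assumptions for illustration only:

    ```python
    import random

    def k_power_iteration(nu=2.5, p_fission=0.3,
                          generations=50, n_start=10000, seed=2):
        """Toy Monte Carlo power iteration for k-eff: each neutron causes
        fission with probability p_fission, emitting nu neutrons on average.
        The generation ratio of populations estimates the eigenvalue."""
        rng = random.Random(seed)
        n = n_start
        k_estimates = []
        for _ in range(generations):
            children = 0
            for _ in range(n):
                if rng.random() < p_fission:
                    # sample an integer number of fission neutrons with mean nu
                    base = int(nu)
                    children += base + (1 if rng.random() < nu - base else 0)
            k_estimates.append(children / n)
            n = n_start                 # renormalize to a stable sample size
        active = k_estimates[10:]       # discard early "inactive" generations
        return sum(active) / len(active)
    ```

    Real codes additionally track positions and energies and renormalize the spatial fission-source distribution between generations; only the generation-ratio structure is shown here.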

  3. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  4. Monte Carlo radiative transfer in protoplanetary disks

    NASA Astrophysics Data System (ADS)

    Pinte, C.; Ménard, F.; Duchêne, G.; Bastien, P.

    2006-12-01

    Aims. We present a new continuum 3D radiative transfer code, MCFOST, based on a Monte Carlo method. MCFOST can be used to calculate (i) monochromatic images in scattered light and/or thermal emission; (ii) polarisation maps; (iii) interferometric visibilities; (iv) spectral energy distributions; and (v) dust temperature distributions of protoplanetary disks. Methods: Several improvements to the standard Monte Carlo method are implemented in MCFOST to increase efficiency and reduce convergence time, including wavelength distribution adjustments, mean intensity calculations, and an adaptive sampling of the radiation field. The reliability and efficiency of the code are tested against a previously-defined benchmark, using a 2D disk configuration. No significant difference (no more than 10% and usually much less) is found between the temperatures and SEDs calculated by MCFOST and by other codes included in the benchmark. Results: A study of the lowest disk mass detectable by Spitzer, around young stars, is presented and the colours of "representative" parametric disks compared to recent IRAC and MIPS Spitzer colours of solar-like young stars located in nearby star-forming regions.

  5. Quantum Monte Carlo methods for nuclear physics

    NASA Astrophysics Data System (ADS)

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-07-01

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  6. Metallic lithium by quantum Monte Carlo

    SciTech Connect

    Sugiyama, G.; Zerah, G.; Alder, B.J.

    1986-12-01

    Lithium was chosen as the simplest known metal for the first application of quantum Monte Carlo methods in order to evaluate the accuracy of conventional one-electron band theories. Lithium has been extensively studied using such techniques. Band theory calculations have certain limitations in general and specifically in their application to lithium. Results depend on such factors as charge shape approximations (muffin tins), pseudopotentials (a special problem for lithium where the lack of p core states requires a strong pseudopotential), and the form and parameters chosen for the exchange potential. The calculations are all one-electron methods in which the correlation effects are included in an ad hoc manner. This approximation may be particularly poor in the high compression regime, where the core states become delocalized. Furthermore, band theory provides only self-consistent results rather than strict limits on the energies. The quantum Monte Carlo method is a totally different technique using a many-body rather than a mean field approach which yields an upper bound on the energies. 18 refs., 4 figs., 1 tab.
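    The upper-bound property mentioned above can be demonstrated with a minimal variational Monte Carlo example for the hydrogen atom (not lithium): the Metropolis average of the local energy for a trial wavefunction ψ = exp(-αr) lies above the exact ground-state energy of -0.5 hartree unless α = 1. The trial form, move size, and sampling parameters are illustrative assumptions:

    ```python
    import math
    import random

    def vmc_hydrogen(alpha, steps=40000, delta=1.0, seed=3):
        """Variational Monte Carlo for hydrogen with psi = exp(-alpha*r).
        Metropolis sampling of |psi|^2; the mean local energy
        E_L = -alpha^2/2 + (alpha - 1)/r is an upper bound on -0.5."""
        rng = random.Random(seed)
        x, y, z, r = 1.0, 0.0, 0.0, 1.0
        energies = []
        for step in range(steps):
            xn = x + rng.uniform(-delta, delta)
            yn = y + rng.uniform(-delta, delta)
            zn = z + rng.uniform(-delta, delta)
            rn = math.sqrt(xn * xn + yn * yn + zn * zn)
            # Metropolis acceptance on |psi|^2 = exp(-2*alpha*r)
            if rng.random() < math.exp(-2.0 * alpha * (rn - r)):
                x, y, z, r = xn, yn, zn, rn
            if step > 1000:                       # discard equilibration
                energies.append(-0.5 * alpha ** 2 + (alpha - 1.0) / r)
        return sum(energies) / len(energies)
    ```

    At α = 1 the local energy is constant and the estimate is exactly -0.5; any other α gives a strictly higher energy, which is the variational bound band theory lacks.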

  7. Simple Monte Carlo model for crowd dynamics

    NASA Astrophysics Data System (ADS)

    Piazza, Francesco

    2010-08-01

    In this paper, we introduce a simple Monte Carlo method for simulating the dynamics of a crowd. Within our model a collection of hard-disk agents is subjected to a series of two-stage steps, implying (i) the displacement of one specific agent followed by (ii) a rearrangement of the rest of the group through a Monte Carlo dynamics. The rules for the combined steps are determined by the specific setting of the granular flow, so that our scheme should be easily adapted to describe crowd dynamics issues of many sorts, from stampedes in panic scenarios to organized flow around obstacles or through bottlenecks. We validate our scheme by computing the serving times statistics of a group of agents crowding to be served around a desk. In the case of a size homogeneous crowd, we recover intuitive results prompted by physical sense. However, as a further illustration of our theoretical framework, we show that heterogeneous systems display a less obvious behavior, as smaller agents feature shorter serving times. Finally, we analyze our results in the light of known properties of nonequilibrium hard-disk fluids and discuss general implications of our model.
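    The rearrangement stage of such a scheme can be sketched as plain Metropolis dynamics for hard disks: a random disk is given a small trial displacement, and any move that overlaps another disk or leaves the box is rejected. This is a generic hard-disk sketch under assumed parameters, not the paper's two-stage crowd model, which adds a directed displacement step on top of sweeps like these:

    ```python
    import random

    def metropolis_hard_disks(positions, radius, box, sweeps, seed=8):
        """Metropolis sweeps for hard disks in a square box [0, box]^2:
        propose a small displacement for a random disk and reject moves
        that overlap another disk or cross the walls."""
        rng = random.Random(seed)
        pos = [list(p) for p in positions]
        n = len(pos)
        for _ in range(sweeps * n):
            i = rng.randrange(n)
            x = pos[i][0] + rng.uniform(-radius, radius)
            y = pos[i][1] + rng.uniform(-radius, radius)
            if not (radius <= x <= box - radius and radius <= y <= box - radius):
                continue                              # leaves the box: reject
            if any(j != i and (x - pos[j][0]) ** 2 + (y - pos[j][1]) ** 2
                   < (2 * radius) ** 2 for j in range(n)):
                continue                              # disk overlap: reject
            pos[i][0], pos[i][1] = x, y               # accept the move
        return pos
    ```

    Because every accepted move preserves the non-overlap constraint, the configuration remains a valid hard-disk packing after any number of sweeps.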

  8. Quantum Monte Carlo for atoms and molecules

    SciTech Connect

    Barnett, R.N.

    1989-11-01

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy-eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

  9. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. Load Balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. Global Particle Find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain. Other supporting algorithms include visualizing the constructive solid geometry, sourcing particles, determining when particle streaming communication is complete, and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.

  10. Calculating Pi Using the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Williamson, Timothy

    2013-11-01

    During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at the University of Notre Dame (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10²¹ antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia2 where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations2 or purely mathematical.3 It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
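    The rice-sprinkling activity has a direct computational analogue: sample points uniformly in the unit square and count the fraction that land inside the quarter circle, whose area is π/4. A minimal sketch:

    ```python
    import random

    def estimate_pi(n, seed=0):
        """Estimate pi by uniform sampling in the unit square: the
        fraction of points with x^2 + y^2 <= 1 approximates pi/4."""
        rng = random.Random(seed)
        inside = sum(1 for _ in range(n)
                     if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
        return 4.0 * inside / n
    ```

    The statistical error shrinks as 1/sqrt(n), which mirrors the classroom observation that more rice grains give a better estimate.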

  11. Anomalous Scaling in Passive Scalar Advection: Monte Carlo Lagrangian Trajectories

    NASA Astrophysics Data System (ADS)

    Gat, Omri; Procaccia, Itamar; Zeitak, Reuven

    1998-06-01

    We present a numerical method which is used to calculate anomalous scaling exponents of structure functions in the Kraichnan passive scalar advection model [R. H. Kraichnan, Phys. Fluids 11, 945 (1968)]. This Monte Carlo method, which is applicable in any space dimension, is based on the Lagrangian path interpretation of passive scalar dynamics, and uses the recently discovered equivalence between scaling exponents of structure functions and relaxation rates in the stochastic shape dynamics of groups of Lagrangian particles. We calculate third and fourth order anomalous exponents for several dimensions, comparing with the predictions of perturbative calculations in large dimensions. We find that Kraichnan's closure appears to give results in close agreement with the numerics. The third order exponents are compatible with our own previous nonperturbative calculations.

  12. Normality of Monte Carlo criticality eigenfunction decomposition coefficients

    SciTech Connect

    Toth, B. E.; Martin, W. R.; Griesheimer, D. P.

    2013-07-01

    A proof is presented, which shows that after a single Monte Carlo (MC) neutron transport power method iteration without normalization, the coefficients of an eigenfunction decomposition of the fission source density are normally distributed when using analog or implicit capture MC. Using a Pearson correlation coefficient test, the proof is corroborated by results from a uniform slab reactor problem, and those results also suggest that the coefficients are normally distributed with normalization. The proof and numerical test results support the application of earlier work on the convergence of eigenfunctions under stochastic operators. Knowledge of the Gaussian shape of decomposition coefficients allows researchers to determine an appropriate level of confidence in the distribution of fission sites taken from a MC simulation. This knowledge of the shape of the probability distributions of decomposition coefficients encourages the creation of new predictive convergence diagnostics. (authors)

  13. Estimating rock mass properties using Monte Carlo simulation: Ankara andesites

    NASA Astrophysics Data System (ADS)

    Sari, Mehmet; Karpuz, Celal; Ayday, Can

    2010-07-01

    In the paper, a previously introduced method (Sari, 2009) is applied to the problem of estimating the rock mass properties of Ankara andesites. For this purpose, appropriate closed form (parametric) distributions are described for intact rock and discontinuity parameters of the Ankara andesites at three distinct weathering grades. Then, these distributions are included as inputs in the Rock Mass Rating (RMR) classification system prepared in a spreadsheet model. A stochastic analysis is carried out to evaluate the influence of correlations between relevant distributions on the simulated RMR values. The model is also used in Monte Carlo simulations to estimate the possible ranges of the Hoek-Brown strength parameters of the rock under investigation. The proposed approach provides a straightforward and effective assessment of the variability of the rock mass properties. Hence, a wide array of mechanical characteristics can be adequately represented in any preliminary design consideration for a given rock mass.
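    The spreadsheet approach described above amounts to sampling each input parameter from an assumed distribution and propagating the samples through the rating sum. The sketch below shows that pattern with illustrative placeholder distributions and ranges, not the actual RMR tables or the paper's fitted distributions:

    ```python
    import random

    def simulate_rmr(n=20000, seed=4):
        """Monte Carlo propagation of uncertain inputs through a
        simplified RMR-style sum of component ratings; returns the mean
        and the 5th/95th percentiles of the simulated total rating."""
        rng = random.Random(seed)
        totals = []
        for _ in range(n):
            strength = rng.triangular(4, 12, 7)      # intact-strength rating
            rqd = rng.triangular(8, 20, 17)          # RQD rating
            spacing = rng.triangular(5, 20, 10)      # discontinuity spacing
            condition = rng.triangular(10, 30, 22)   # discontinuity condition
            water = rng.choice([7, 10, 15])          # groundwater rating
            totals.append(strength + rqd + spacing + condition + water)
        totals.sort()
        return (sum(totals) / n,                     # mean rating
                totals[int(0.05 * n)],               # 5th percentile
                totals[int(0.95 * n)])               # 95th percentile
    ```

    Instead of a single deterministic rating, the output is a distribution, from which percentile ranges of derived quantities (such as Hoek-Brown parameters) can be read off.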

  14. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    Lubos Mitas

    2011-01-26

    The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining, and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks, such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from conventional electronic structure codes such as Gamess, Gaussian, Crystal, and others.
This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the grant duration; it has resulted in 13 published papers and 15 invited talks and lectures nationally and internationally. My former graduate student and postdoc Dr. Michal Bajdich, who was supported by this grant, is currently a postdoc at ORNL in the group of Dr. F. Reboredo and Dr. P. Kent and is using the developed tools in a number of DOE projects. The QWalk package has become a truly important research tool used by the electronic structure community and has attracted several new developers in other research groups. Our tools use several types of correlated wavefunction approaches (variational, diffusion, and reptation methods) and large-scale optimization methods for wavefunctions, and enable calculation of energy differences such as cohesion and electronic gaps, as well as densities and other properties; using multiple runs, one can obtain equations of state for given structures and beyond. Our codes use efficient numerical and Monte Carlo strategies (high-accuracy numerical orbitals, multi-reference wave functions, highly accurate correlation factors, pairing orbitals, force-biased and correlated sampling Monte Carlo), are robustly parallelized, and run on tens of thousands of cores very efficiently. Our demonstration applications were focused on challenging research problems in several fields of materials science, such as transition metal solids. We note that our study of FeO solid was the first QMC calculation of transition metal oxides at high pressures.

  15. Monte Carlo study of alloy nanostructures

    NASA Astrophysics Data System (ADS)

    Yang, Bo

    2006-04-01

    Alloy materials with nanostructures are receiving growing interest for magnetic storage applications. Substantial experimental efforts are being devoted to synthesis of alloy nanostructures for magnetic reading-writing heads and as storage media. Further developments in this area would benefit from a detailed understanding of the thermodynamic factors underlying structural formation and transformation in relevant nanoscale geometries. This thesis is devoted to the development and application of lattice-model-based Monte Carlo simulations for investigating the phase diagrams and thermodynamic properties of alloys in two nanostructure geometries: epitaxial ultrathin films and faceted nanoparticles. Recently, ultrathin alloy films composed of size-mismatched, bulk-immiscible metals have been observed to form self-assembled lateral multilayer (SALM) structures, which provide a novel method to fabricate magnetic readers based on the giant magnetoresistance (GMR) effect. We investigate the energetic factors leading to the formation of SALM structures, focusing specifically on Fe-Ag/Mo(110), where experiments observed compositionally modulated stripe patterns with 2 nm periodicities. A lattice model framework is developed to simulate the thermodynamic stability of these films. We find that the competition between the chemical and elastic interactions leads to a minimum energy for stripes with a specific periodicity. Monte Carlo simulations lead to predictions for the periodicity of the stripes and order-disorder transition temperatures consistent with experimental observations. Novel methods have been developed to synthesize nanoscale L10 structures with large magnetocrystalline anisotropy, but a general problem is that, as synthesized, these particles form in the disordered, nonmagnetic phase, and an annealing step is required to induce transformation into the desired L10 structure.
    Since annealing can also lead to undesirable particle coalescence, optimization of processing methods can benefit from a detailed understanding of the thermodynamic and kinetic factors underlying the ordering process. We have adapted a lattice-model framework previously applied to studies of surface segregation and surface alloying to Monte Carlo studies of nanoparticle ordering as a function of size, composition, and shape. We find that both decreasing size and increasing surface segregation can reduce the ordering temperature. The connection between the results and properties is discussed, and directions for future computational research on this topic are suggested.

  16. Theory and Applications of Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Deible, Michael John

    With the development of peta-scale computers, and with exa-scale only a few years away, the quantum Monte Carlo (QMC) method, with favorable scaling and inherent parallelizability, is poised to increase its impact on the electronic structure community. The most widely used variation of QMC is the diffusion Monte Carlo (DMC) method. The accuracy of the DMC method is only limited by the trial wave function that it employs. The effect of the trial wave function is studied here by initially developing correlation-consistent Gaussian basis sets for use in DMC calculations. These basis sets give a low variance in variational Monte Carlo calculations and improved convergence in DMC. The orbital type used in the trial wave function is then investigated, and it is shown that Brueckner orbitals result in a DMC energy comparable to a DMC energy with orbitals from density functional theory and significantly lower than orbitals from Hartree-Fock theory. Three large weakly interacting systems are then studied: a water-16 isomer, a methane clathrate, and a carbon dioxide clathrate. The DMC method is seen to be in good agreement with MP2 calculations and provides reliable benchmarks. Several strongly correlated systems are then studied. An H4 model system that allows for a fine tuning of the multi-configurational character of the wave function shows when the accuracy of the DMC method with a single Slater-determinant trial function begins to deviate from multi-reference benchmarks. The weakly interacting face-to-face ethylene dimer is studied with and without a rotation around the pi bond, which is used to increase the multi-configurational nature of the wave function. This test shows that the effect of a multi-configurational wave function in weakly interacting systems causes DMC with a single Slater-determinant to be unable to achieve sub-chemical accuracy. 
The beryllium dimer is studied, and it is shown that a very large determinant expansion is required for DMC to predict a binding energy that is in close agreement with experiment. Finally, water interacting with increasingly large acenes is studied, as is the benzene and anthracene dimer. Deviations from benchmarks are discussed.

  17. A Primer in Monte Carlo Integration Using Mathcad

    ERIC Educational Resources Information Center

    Hoyer, Chad E.; Kegerreis, Jeb S.

    2013-01-01

    The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
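    The essentials mentioned above reduce to the mean-value theorem: the integral of f over [a, b] equals (b - a) times the average of f at uniformly sampled points. A minimal language-agnostic sketch (shown in Python rather than Mathcad):

    ```python
    import math
    import random

    def mc_integrate(f, a, b, n, seed=0):
        """Mean-value Monte Carlo integration: (b - a) times the average
        of f evaluated at n uniform random points in [a, b]."""
        rng = random.Random(seed)
        total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
        return (b - a) * total / n
    ```

    For example, mc_integrate(math.exp, 0.0, 1.0, 100000) converges to e - 1 ≈ 1.71828 with an error that decreases as 1/sqrt(n).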

  18. Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.

    2002-01-01

    Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…

  20. abcpmc: Approximate Bayesian Computation for Population Monte-Carlo code

    NASA Astrophysics Data System (ADS)

    Akeret, Joel

    2015-04-01

    abcpmc is a Python Approximate Bayesian Computing (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with Particle Filtering techniques. It is extendable with k-nearest neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.
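    The core idea that abcpmc builds on is plain ABC rejection sampling: draw a parameter from the prior, simulate data, and keep the draw when a summary statistic of the simulation falls within a tolerance of the observed one. The sketch below shows that basic idea for the mean of a unit-variance Gaussian; the prior, summary, and tolerance are illustrative assumptions, and abcpmc itself layers sequential population steps and adaptive kernels on top of this:

    ```python
    import random

    def abc_rejection(observed_mean, n_data=100, n_draws=20000,
                      epsilon=0.05, seed=6):
        """Plain ABC rejection sampling for the mean of a unit-variance
        Gaussian: accept theta when the simulated sample mean lies
        within epsilon of the observed sample mean."""
        rng = random.Random(seed)
        accepted = []
        for _ in range(n_draws):
            theta = rng.uniform(-2.0, 2.0)        # flat prior on the mean
            sim = sum(rng.gauss(theta, 1.0) for _ in range(n_data)) / n_data
            if abs(sim - observed_mean) < epsilon:
                accepted.append(theta)
        return accepted
    ```

    The accepted draws approximate the posterior without ever evaluating a likelihood; PMC/SMC variants improve on the low acceptance rate by evolving a weighted population through a shrinking tolerance schedule.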

  2. Monte Carlo modeling and meteor showers

    NASA Technical Reports Server (NTRS)

    Kulikova, N. V.

    1987-01-01

    Prediction of short lived increases in the cosmic dust influx, the concentration in lower thermosphere of atoms and ions of meteor origin and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.

  3. Exploring theory space with Monte Carlo reweighting

    DOE PAGESBeta

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. In particular, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
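    The core reweighting idea, re-using a fixed event sample by attaching per-event weights equal to the ratio of target to generating densities, can be sketched in a toy one-dimensional setting (hypothetical densities, not the authors' collider-level procedure):

```python
import math
import random

def reweight(events, w_old, w_new):
    """Attach per-event weights w_new(x)/w_old(x) so that a sample
    generated under density w_old can be re-used to estimate
    expectations under density w_new (self-normalized)."""
    weights = [w_new(x) / w_old(x) for x in events]
    norm = sum(weights)
    return lambda f: sum(w * f(x) for w, x in zip(weights, events)) / norm

# Toy check: events drawn from Exp(rate=1) reweighted to Exp(rate=2);
# the reweighted mean should approach 1/2.
rng = random.Random(42)
events = [rng.expovariate(1.0) for _ in range(200_000)]
expect = reweight(events, lambda x: math.exp(-x),
                  lambda x: 2.0 * math.exp(-2.0 * x))
mean_new = expect(lambda x: x)
```

    In the collider application the densities would be squared matrix elements of the old and new benchmark models evaluated on each fully simulated event; the expensive detector simulation is never redone.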

  4. Parallel tempering Monte Carlo in LAMMPS.

    SciTech Connect

    Rintoul, Mark Daniel; Plimpton, Steven James; Sears, Mark P.

    2003-11-01

    We present here the details of the implementation of the parallel tempering Monte Carlo technique into LAMMPS, a heavily used massively parallel molecular dynamics code at Sandia. This technique allows for many replicas of a system to be run at different simulation temperatures. At various points in the simulation, configurations can be swapped between different temperature environments and then continued. This allows for large regions of energy space to be sampled very quickly, and allows for minimum energy configurations to emerge in very complex systems, such as large biomolecular systems. By incorporating this algorithm into an existing code, we immediately gain all of the previous work that had been put into LAMMPS, and allow this technique to quickly be available to the entire Sandia and international LAMMPS community. Finally, we present an example of this code applied to folding a small protein.
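    The replica-exchange step described above hinges on a Metropolis swap criterion. A minimal sketch (with k_B = 1, and not the actual LAMMPS implementation) is:

```python
import math
import random

def accept_swap(E_cold, E_hot, T_cold, T_hot, rng):
    """Metropolis criterion for exchanging configurations between two
    temperature replicas: accept with probability
    min(1, exp((1/T_cold - 1/T_hot) * (E_cold - E_hot))), with k_B = 1."""
    delta = (1.0 / T_cold - 1.0 / T_hot) * (E_cold - E_hot)
    return delta >= 0.0 or rng.random() < math.exp(delta)

rng = random.Random(7)
# The hot replica found a lower-energy configuration: swapping it down
# to the cold temperature is always accepted (delta >= 0).
always = accept_swap(E_cold=5.0, E_hot=1.0, T_cold=0.5, T_hot=2.0, rng=rng)
```

    Because the criterion satisfies detailed balance across the pair of replicas, each replica still samples its own Boltzmann distribution while configurations migrate between temperatures.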

  5. Monte Carlo simulations of medical imaging modalities

    SciTech Connect

    Estes, G.P.

    1998-09-01

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  6. Monte Carlo Simulation of Endlinking Oligomers

    NASA Technical Reports Server (NTRS)

    Hinkley, Jeffrey A.; Young, Jennifer A.

    1998-01-01

    This report describes initial efforts to model the endlinking reaction of phenylethynyl-terminated oligomers. Several different molecular weights were simulated using the Bond Fluctuation Monte Carlo technique on a 20 x 20 x 20 unit lattice with periodic boundary conditions. After a monodisperse "melt" was equilibrated, chain ends were linked whenever they came within the allowed bond distance. Ends remained reactive throughout, so that multiple links were permitted. Even under these very liberal crosslinking assumptions, geometrical factors limited the degree of crosslinking. Average crosslink functionalities were 2.3 to 2.6; surprisingly, they did not depend strongly on the chain length. These results agreed well with the degrees of crosslinking inferred from experiment in a cured phenylethynyl-terminated polyimide oligomer.

  7. Monte-Carlo Simulation Balancing in Practice

    NASA Astrophysics Data System (ADS)

    Huang, Shih-Chieh; Coulom, Rémi; Lin, Shun-Shii

    Simulation balancing is a new technique to tune parameters of a playout policy for a Monte-Carlo game-playing program. So far, this algorithm had only been tested in a very artificial setting: it was limited to 5×5 and 6×6 Go, and required a stronger external program that served as a supervisor. In this paper, the effectiveness of simulation balancing is demonstrated in a more realistic setting. A state-of-the-art program, Erica, learned an improved playout policy on the 9×9 board, without requiring any external expert to provide position evaluations. The evaluations were collected by letting the program analyze positions by itself. The previous version of Erica learned pattern weights with the minorization-maximization algorithm. Thanks to simulation balancing, its playing strength was improved from a winning rate of 69% to 78% against Fuego 0.4.

  8. Monte Carlo applications to acoustical field solutions

    NASA Technical Reports Server (NTRS)

    Haviland, J. K.; Thanedar, B. D.

    1973-01-01

    The Monte Carlo technique is proposed for the determination of the acoustical pressure-time history at chosen points in a partial enclosure, the central idea of this technique being the tracing of acoustical rays. A statistical model is formulated and an algorithm for pressure is developed, the conformity of which is examined by two approaches and is shown to give the known results. The concepts that are developed are applied to the determination of the transient field due to a sound source in a homogeneous medium in a rectangular enclosure with perfect reflecting walls, and the results are compared with those presented by Mintzer based on the Laplace transform approach, as well as with a normal mode solution.

  9. Exploring theory space with Monte Carlo reweighting

    SciTech Connect

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  10. Monte Carlo simulation of ferroelectric domain growth

    NASA Astrophysics Data System (ADS)

    Li, B. L.; Liu, X. P.; Fang, F.; Zhu, J. L.; Liu, J.-M.

    2006-01-01

    The kinetics of two-dimensional isothermal domain growth in a quenched ferroelectric system is investigated using Monte Carlo simulation based on a realistic Ginzburg-Landau ferroelectric model with cubic-tetragonal (square-rectangle) phase transitions. The evolution of the domain pattern and domain size with annealing time is simulated, and the stability of trijunctions and tetrajunctions of domain walls is analyzed. It is found that in this more realistic model with strong dipole alignment anisotropy and long-range Coulomb interaction, the power law for normal domain growth still applies. Towards the late stage of domain growth, both the average domain area and the reciprocal density of domain wall junctions increase linearly with time, and the one-parameter dynamic scaling of the domain growth is demonstrated.

  11. Nonlocal pseudopotentials and diffusion Monte Carlo

    SciTech Connect

    Mitas, L.; Shirley, E.L.; Ceperley, D.M. (University of Illinois at Urbana-Champaign)

    1991-09-01

    We have applied the technique of evaluating a nonlocal pseudopotential with a trial function to give an approximate, local many-body pseudopotential which was used in a valence-only diffusion Monte Carlo (DMC) calculation. The pair and triple correlation terms in the trial function have been carefully optimized to minimize the effect of the locality approximation. We discuss the accuracy and computational demands of the nonlocal pseudopotential evaluation for the DMC method. Calculations of Si, Sc, and Cu ionic and atomic states and the Si₂ dimer are reported. In most cases ~90% of the correlation energy was recovered at the variational level and excellent estimates of the ground state energies were obtained by the DMC simulations. The small statistical error allowed us to determine the quality of the assumed pseudopotentials by comparison of the DMC results with experimental values.

  12. Monte Carlo simulation of radiating reentry flows

    NASA Technical Reports Server (NTRS)

    Taylor, Jeff C.; Carlson, Ann B.; Hassan, H. A.

    1993-01-01

    The Direct Simulation Monte Carlo (DSMC) method is applied to a radiating, hypersonic, axisymmetric flow over a blunt body in the near continuum regime. The ability of the method to predict the flowfield radiation and the radiative heating is investigated for flow over the Project Fire II configuration at 11.36 kilometers per second at an altitude of 76.42 kilometers. Two methods that differ in the manner in which they treat ionization and estimate electronic excitation are employed. The calculated results are presented and compared with both experimental data and solutions where radiation effects were not included. Differences in the results are discussed. Both methods ignore self absorption and, as a result, overpredict measured radiative heating.

  13. Methods for Monte Carlo simulations of biomacromolecules

    PubMed Central

    Vitalis, Andreas; Pappu, Rohit V.

    2010-01-01

    The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse graining strategies. PMID:20428473

  14. Exploring theory space with Monte Carlo reweighting

    DOE PAGESBeta

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  15. Exploring theory space with Monte Carlo reweighting

    SciTech Connect

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. In particular, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  16. Configurational temperature: Verification of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Butler, B. D.; Ayton, Gary; Jepps, Owen G.; Evans, Denis J.

    1998-10-01

    A new diagnostic that is useful for checking the algorithmic correctness of Monte Carlo computer programs is presented. The check is made by comparing the Boltzmann temperature, which is input to the program and used to accept or reject moves, with a configurational temperature k_B T_config = ⟨|∇_q Φ|²⟩ / ⟨∇_q² Φ⟩. Here, Φ is the potential energy of the system and ∇_q represents the dimensionless gradient operator with respect to the particle positions q. We show, using a simulation of Lennard-Jones particles, that the configurational temperature rapidly and accurately tracks changes made to the input temperature even when the system is not in global thermodynamic equilibrium. Coding and/or algorithm errors can be detected by checking that the input temperature and T_config agree. The effects of system size and continuity of Φ and its first derivative on T_config are also discussed.
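    The diagnostic itself takes only a few lines. Below is a sketch of the estimator, checked against exact Boltzmann samples of a harmonic potential (an assumption made for testability, not the paper's Lennard-Jones system):

```python
import random

def config_temperature(samples, grad, laplacian):
    """Configurational temperature estimate with k_B = 1:
    T_config = <|grad_q Phi|^2> / <laplacian_q Phi>, averaged over
    sampled configurations q."""
    num = sum(sum(g * g for g in grad(q)) for q in samples)
    den = sum(laplacian(q) for q in samples)
    return num / den

# Check on an N-dimensional harmonic potential Phi = sum(q_i^2)/2, whose
# Boltzmann distribution at temperature T is Gaussian with variance T;
# the estimator should track the input temperature.
rng = random.Random(1)
T, N = 1.5, 3
samples = [[rng.gauss(0.0, T ** 0.5) for _ in range(N)] for _ in range(20_000)]
T_est = config_temperature(samples, grad=lambda q: q,
                           laplacian=lambda q: float(N))
```

    In a real verification run one would accumulate the two averages over configurations produced by the Monte Carlo program under test and compare T_est against the input Boltzmann temperature.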

  17. Entanglement Spectroscopy using Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Chung, Chia-Min; Bonnes, Lars; Chen, Pochung; Läuchli, Andreas

    2014-03-01

    We present a numerical scheme to reconstruct a subset of the entanglement spectrum of quantum many-body systems using quantum Monte Carlo. The approach builds on the replica trick to evaluate particle-number-resolved traces of the first n powers of a reduced density matrix. From this information we reconstruct n entanglement spectrum levels using a polynomial root solver. We illustrate the power and limitations of the method by an application to the extended Bose-Hubbard model in one dimension, where we are able to resolve the quasi-degeneracy of the entanglement spectrum in the Haldane-Insulator phase. In general the method is able to reconstruct the largest few eigenvalues in each symmetry sector and typically performs better when the eigenvalues are not too different.
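    The reconstruction step, going from measured traces Tr(ρⁿ) to spectrum levels, can be sketched via Newton's identities; a hypothetical three-level spectrum stands in for actual QMC data:

```python
def charpoly_from_traces(power_traces):
    """Newton's identities: turn power sums p_n = Tr(rho^n), n = 1..N,
    into the coefficients (highest degree first) of the monic degree-N
    polynomial whose roots are the reconstructed spectrum levels; any
    standard polynomial root solver then finishes the job."""
    p = list(power_traces)
    n = len(p)
    e = [1.0]  # elementary symmetric polynomials e_0 .. e_n
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    return [(-1) ** k * e[k] for k in range(n + 1)]

def poly_eval(coeffs, x):
    """Horner evaluation, used below to confirm the known levels are roots."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# Hypothetical three-level spectrum {0.6, 0.3, 0.1}: its first three
# power sums determine it uniquely.
levels = [0.6, 0.3, 0.1]
traces = [sum(x ** n for x in levels) for n in (1, 2, 3)]
coeffs = charpoly_from_traces(traces)
```

    With noisy Monte Carlo traces the root-finding step amplifies statistical errors, which is consistent with the abstract's remark that the method performs better when the eigenvalues are not too close.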

  18. Entanglement spectroscopy using quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Chung, Chia-Min; Bonnes, Lars; Chen, Pochung; Läuchli, Andreas M.

    2014-05-01

    We present a numerical scheme to reconstruct a subset of the entanglement spectrum of quantum many-body systems using quantum Monte Carlo. The approach builds on the replica trick to evaluate particle-number-resolved traces of the first n powers of a reduced density matrix. From this information we reconstruct n entanglement spectrum levels using a polynomial root solver. We illustrate the power and limitations of the method by an application to the extended Bose-Hubbard model in one dimension, where we are able to resolve the quasidegeneracy of the entanglement spectrum in the Haldane-insulator phase. In general, the method is able to reconstruct the largest few eigenvalues in each symmetry sector and typically performs better when the eigenvalues are not too different.

  19. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes). In fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered through the discussion, similar results can be derived for the absolute error.

  20. Monte Carlo approaches to effective field theories

    SciTech Connect

    Carlson, J.; Schmidt, K.E.

    1991-01-01

    In this paper, we explore the application of continuum Monte Carlo methods to effective field theory models. Effective field theories, in this context, are those in which a Fock space decomposition of the state is useful. These problems arise both in nuclear and condensed matter physics. In nuclear physics, much work has been done on effective field theories of mesons and baryons. While the theories are not fundamental, they should be able to describe nuclear properties at low energy and momentum scales. After describing the methods, we solve two simple scalar field theory problems: the polaron and two nucleons interacting through scalar meson exchange. The methods presented here are rather straightforward extensions of methods used to solve quantum mechanics problems. Monte Carlo methods are used to avoid the truncation inherent in a Tamm-Dancoff approach and its associated difficulties. Nevertheless, the methods will be most valuable when the Fock space decomposition of the states is useful. Hence, while they are not intended for ab initio studies of QCD, they may prove valuable in studies of light nuclei, or for systems of interacting electrons and phonons. In these problems a Fock space decomposition can be used to reduce the number of degrees of freedom and to retain the rotational symmetries exactly. The problems we address here are comparatively simple, but offer useful initial tests of the method. We present results for the polaron and two non-relativistic nucleons interacting through scalar meson exchange. In each case, it is possible to integrate out the boson degrees of freedom exactly, and obtain a retarded form of the action that depends only upon the fermion paths. Here we keep the explicit bosons, though, since we would like to retain information about the boson components of the states and it will be necessary to keep these components in order to treat non-scalar or interacting bosonic fields.

  1. Monte Carlo modelling of TRIGA research reactor

    NASA Astrophysics Data System (ADS)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Mamora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous energy cross-section data from recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  2. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  3. Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.

    SciTech Connect

    Garcia Cardona, Cristina; Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander; Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan

    2009-10-01

    The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
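    The event-selection loop at the heart of kinetic Monte Carlo codes such as SPPARKS can be sketched as follows (a generic rejection-free step, not SPPARKS's actual API):

```python
import math
import random

def kmc_step(rates, rng):
    """One step of the rejection-free (BKL/Gillespie-style) kinetic
    Monte Carlo loop: pick an event with probability proportional to
    its rate, and advance time by an exponentially distributed
    increment -ln(u) / R_total."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

# Three competing events with rates 1, 2, and 7; the third should be
# chosen about 70% of the time.
rng = random.Random(3)
event, dt = kmc_step([1.0, 2.0, 7.0], rng)
```

    Parallelizing this loop is what makes the mesoscale regime hard: events on different processors are causally coupled through the shared lattice, which is the problem the SPPARKS work addresses.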

  4. A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation

    SciTech Connect

    Li, Z.; Wang, K.

    2012-07-01

    Full core calculations are very useful and important in reactor physics analysis, especially in computing the full core power distributions, optimizing the refueling strategies and analyzing the depletion of fuels. To reduce the computing time and accelerate the convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. Then a new hybrid RMMC and MC (RMMC+MC) method is put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries rather than only repeated structures. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and variance in the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)

  5. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    SciTech Connect

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
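    The fission-matrix idea can be illustrated by a toy power iteration on a hypothetical 2×2 matrix (the thesis's actual implementation lives inside MCNP):

```python
def fission_matrix_keff(F, guess, iters=200):
    """Power iteration on a fission matrix F (a toy sketch of the
    low-order operator used to accelerate fission-source convergence):
    returns the dominant eigenvalue (k-effective) and the normalized
    fission source (dominant eigenvector)."""
    s = list(guess)
    k = 0.0
    for _ in range(iters):
        new = [sum(row[j] * s[j] for j in range(len(s))) for row in F]
        k = sum(new) / sum(s)          # eigenvalue estimate
        total = sum(new)
        s = [x / total for x in new]   # renormalize the source
    return k, s

# Hypothetical 2x2 fission matrix with dominant eigenvalue 1.1 and
# dominant eigenvector proportional to (0.6, 0.4).
F = [[0.9, 0.3],
     [0.2, 0.8]]
k_eff, source = fission_matrix_keff(F, [1.0, 1.0])
```

    The acceleration methods in the thesis estimate the matrix elements of F statistically during the Monte Carlo run and use its dominant eigenvector to guide the fission source, rather than iterating on a known matrix as in this sketch.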

  6. Recent advances and future prospects for Monte Carlo

    SciTech Connect

    Brown, Forrest B

    2010-01-01

    The history of Monte Carlo methods is closely linked to that of computers: The first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

  7. Continuous-time quantum Monte Carlo impurity solvers

    NASA Astrophysics Data System (ADS)

    Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias

    2011-04-01

    Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states. Program summary: Program title: dmft Catalogue identifier: AEIL_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: ALPS LIBRARY LICENSE version 1.1 No. of lines in distributed program, including test data, etc.: 899 806 No. of bytes in distributed program, including test data, etc.: 32 153 916 Distribution format: tar.gz Programming language: C++ Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher), and Intel C++ Compiler (icc version 7.0 and higher) MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0) IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers Compaq Tru64 UNIX with Compaq C++ Compiler (cxx) SGI IRIX with MIPSpro C++ Compiler (CC) HP-UX with HP C++ Compiler (aCC) Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher) RAM: 10 MB-1 GB Classification: 7.3 External routines: ALPS [1], BLAS/LAPACK, HDF5 Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. 
They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.

  8. Quantum Monte Carlo Algorithms for Diagrammatic Vibrational Structure Calculations

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew; Hirata, So

    2015-06-01

    Convergent hierarchies of theories for calculating many-body vibrational ground and excited-state wave functions, such as Møller-Plesset perturbation theory or coupled cluster theory, tend to rely on matrix-algebraic manipulations of large, high-dimensional arrays of anharmonic force constants, tasks which require large amounts of computer storage space and which are very difficult to implement in a parallel-scalable fashion. On the other hand, existing quantum Monte Carlo (QMC) methods for vibrational wave functions tend to lack robust techniques for obtaining excited-state energies, especially for large systems. By exploiting analytical identities for matrix elements of position operators in a harmonic oscillator basis, we have developed stochastic implementations of the size-extensive vibrational self-consistent field (MC-XVSCF) and size-extensive vibrational Møller-Plesset second-order perturbation (MC-XVMP2) theories which do not require storing the potential energy surface (PES). The programmable equations of MC-XVSCF and MC-XVMP2 take the form of a small number of high-dimensional integrals evaluated using Metropolis Monte Carlo techniques. The associated integrands require independent evaluations of only the value, not the derivatives, of the PES at many points, a task which is trivial to parallelize. However, unlike existing vibrational QMC methods, MC-XVSCF and MC-XVMP2 can calculate anharmonic frequencies directly, rather than as a small difference between two noisy total energies, and do not require user-selected coordinates or nodal surfaces. MC-XVSCF and MC-XVMP2 can also directly sample the PES in a given approximation without analytical or grid-based approximations, enabling us to quantify the errors induced by such approximations.

  9. A radiating shock evaluated using Implicit Monte Carlo Diffusion

    SciTech Connect

    Cleveland, M.; Gentile, N.

    2013-07-01

    Implicit Monte Carlo [1] (IMC) has been shown to be very expensive when used to evaluate a radiation field in opaque media. Implicit Monte Carlo Diffusion (IMD) [2], which evaluates a spatial discretized diffusion equation using a Monte Carlo algorithm, can be used to reduce the cost of evaluating the radiation field in opaque media [2]. This work couples IMD to the hydrodynamics equations to evaluate opaque diffusive radiating shocks. The Lowrie semi-analytic diffusive radiating shock benchmark[a] is used to verify our implementation of the coupled system of equations. (authors)

  10. Discrete Diffusion Monte Carlo for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory

    2014-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.

  11. Comparison of effects of copropagated and precomputed atmosphere profiles on Monte Carlo trajectory simulation

    NASA Technical Reports Server (NTRS)

    Queen, Eric M.; Omara, Thomas M.

    1990-01-01

    A realization of a stochastic atmosphere model for use in simulations is presented. The model provides pressure, density, temperature, and wind velocity as a function of latitude, longitude, and altitude, and is implemented in a three degree of freedom simulation package. This implementation is used in the Monte Carlo simulation of an aeroassisted orbital transfer maneuver and results are compared to those of a more traditional approach.

  12. Noise-induced instability in self-consistent Monte Carlo calculations

    SciTech Connect

    Lemons, D.S.; Lackman, J.; Jones, M.E.; Winske, D.

    1995-12-01

    We identify, analyze, and propose remedies for a numerical instability responsible for the growth or decay of sums that should be conserved in Monte Carlo simulations of stochastically interacting particles. "Noisy" sums with fluctuations proportional to 1/√n, where n is the number of particles in the simulation, provide feedback that drives the instability. Numerical illustrations of an energy loss or "cooling" instability in an Ornstein-Uhlenbeck process support our analysis. (c) 1995 The American Physical Society
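The 1/√n scaling at the heart of the instability can be seen in a toy ensemble. The sketch below (an illustration, not the authors' simulation) evolves independent Ornstein-Uhlenbeck velocities and measures how the fluctuation of the ensemble mean energy, the "noisy sum" that feeds back into self-consistent schemes, shrinks as the particle count grows:

```python
import random, math

def ou_mean_energy_fluctuation(n, nsteps=800, dt=0.01, gamma=1.0, temp=1.0, seed=0):
    """Evolve n independent Ornstein-Uhlenbeck velocities (stationary
    start) and return the standard deviation over time of the ensemble
    mean kinetic energy; this finite-sample noise scales like 1/sqrt(n)."""
    rng = random.Random(seed)
    v = [rng.gauss(0.0, math.sqrt(temp)) for _ in range(n)]
    sigma = math.sqrt(2.0 * gamma * temp * dt)    # fluctuation-dissipation
    energies = []
    for _ in range(nsteps):
        v = [vi - gamma * vi * dt + sigma * rng.gauss(0.0, 1.0) for vi in v]
        energies.append(sum(vi * vi for vi in v) / (2 * n))
    mean = sum(energies) / len(energies)
    var = sum((e - mean) ** 2 for e in energies) / len(energies)
    return math.sqrt(var)

noise_small = ou_mean_energy_fluctuation(50)     # few particles: noisy sum
noise_large = ou_mean_energy_fluctuation(2000)   # many particles: quieter
```

In a self-consistent scheme this residual noise would be fed back into the field acting on the particles, which is the feedback loop the paper identifies as the driver of the instability.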

  13. Monte Carlo simulations of Protein Adsorption

    NASA Astrophysics Data System (ADS)

    Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges

    2008-03-01

    Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix to sheet or random coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of adsorption of a four helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.

  14. A Monte Carlo model for `jet quenching'

    NASA Astrophysics Data System (ADS)

    Zapp, Korinna; Ingelman, Gunnar; Rathsman, Johan; Stachel, Johanna; Wiedemann, Urs Achim

    2009-04-01

    We have developed the Monte Carlo simulation program Jewel 1.0 (Jet Evolution With Energy Loss), which interfaces a perturbative final-state parton shower with medium effects occurring in ultra-relativistic heavy-ion collisions. This is done by comparing for each jet fragment the probability of further perturbative splitting with the density-dependent probability of scattering with the medium. A simple hadronisation mechanism is included. In the absence of medium effects, we validate Jewel against a set of benchmark jet measurements. For elastic interactions with the medium, we characterise not only the medium-induced modification of the jet, but also the jet-induced modification of the medium. Our main physical result is the observation that collisional and radiative medium modifications lead to characteristic differences in the jet fragmentation pattern, which persist above a soft background cut. We argue that this should allow one to disentangle collisional and radiative parton energy loss mechanisms by measuring the n-jet fraction or a class of jet shape observables.

  15. Measuring Berry curvature with quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kolodrubetz, Michael

    2014-01-01

    The Berry curvature and its descendant, the Berry phase, play an important role in quantum mechanics. They can be used to understand the Aharonov-Bohm effect, define topological Chern numbers, and generally to investigate the geometric properties of a quantum ground state manifold. While Berry curvature has been well studied in the regimes of few-body physics and noninteracting particles, its use in the regime of strong interactions is hindered by the lack of numerical methods to solve for it. In this paper I fill this gap by implementing a quantum Monte Carlo method to solve for the Berry curvature, based on interpreting Berry curvature as a leading correction to imaginary time ramps. I demonstrate my algorithm using the transverse-field Ising model in one and two dimensions, the latter of which is nonintegrable. Despite the fact that the Berry curvature gives information about the phase of the wave function, I show that the algorithm has no sign or phase problem for standard sign-problem-free Hamiltonians. My algorithm scales similarly to conventional methods as a function of system size and energy gap, and therefore should prove a valuable tool in investigating the quantum geometry of many-body systems.

  16. Monte Carlo simulation of chromatin stretching

    NASA Astrophysics Data System (ADS)

    Aumann, Frank; Lankas, Filip; Caudron, Maïwen; Langowski, Jörg

    2006-04-01

    We present Monte Carlo (MC) simulations of the stretching of a single 30 nm chromatin fiber. The model approximates the DNA by a flexible polymer chain with Debye-Hückel electrostatics and uses a two-angle zigzag model for the geometry of the linker DNA connecting the nucleosomes. The latter are represented by flat disks interacting via an attractive Gay-Berne potential. Our results show that the stiffness of the chromatin fiber strongly depends on the linker DNA length. Furthermore, changing the twisting angle between nucleosomes from 90° to 130° increases the stiffness significantly. An increase in the opening angle from 22° to 34° leads to softer fibers for small linker lengths. We observe that fibers containing a linker histone at each nucleosome are stiffer compared to those without the linker histone. The simulated persistence lengths and elastic moduli agree with experimental data. Finally, we show that the chromatin fiber does not behave as an isotropic elastic rod, but its rigidity depends on the direction of deformation: Chromatin is much more resistant to stretching than to bending.

  17. Algorithmic differentiation of diffusion Monte Carlo

    NASA Astrophysics Data System (ADS)

    Poole, Tom; Foulkes, Matthew; Spencer, James; Haynes, Peter

    2014-03-01

    Algorithmic differentiation (AD) is a programming technique for the efficient evaluation of the derivatives of a computed function. This approach proceeds via the application of the chain rule to the lines of source code that constitute the mathematical operation of a computer program, allowing access to the derivatives of functions that lack an algebraic representation. Another important element of the AD method is that the ``reverse mode'' of operation yields the derivative of a function output with respect to all inputs, simultaneously, in a small multiple of the computational cost of evaluating the underlying function in isolation. These features make this method particularly applicable to the diffusion Monte Carlo (DMC) algorithm where, despite a number of recent advances in the area, total energy derivatives have remained problematic. Here we present results illustrating accurate DMC energy derivatives with respect to both the input wave function parameters and the nuclear positions, with the former enabling DMC wave function optimization and the latter facilitating DMC molecular dynamics simulations.
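A minimal reverse-mode AD sketch may make the "chain rule applied to the lines of source code" idea concrete. The toy class below is illustrative only: it records a computation graph for addition and multiplication and back-propagates derivatives, yielding all input gradients from one backward pass, exactly the property the abstract exploits (a production AD tool is far more general):

```python
class Var:
    """Minimal reverse-mode AD node: records local partial derivatives
    and back-propagates via the chain rule. Suitable for tree-like
    graphs; a real tool would topologically sort shared subgraphs."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(uv)/du = v, d(uv)/dv = u
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# d/dx and d/dy of f(x, y) = x*y + x*x from a single backward sweep:
x, y = Var(3.0), Var(4.0)
f = x * y + x * x
f.backward()
# x.grad = y + 2x = 10, y.grad = x = 3
```

The single backward sweep delivering every input derivative at once is what makes the "reverse mode" cost a small multiple of the function evaluation itself, regardless of the number of wave function parameters or nuclear coordinates.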

  18. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  19. Monte Carlo Production Management at CMS

    NASA Astrophysics Data System (ADS)

    Boudoul, G.; Franzoni, G.; Norkus, A.; Pol, A.; Srimanobhas, P.; Vlimant, J.-R.

    2015-12-01

    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the events production, ensure the book-keeping of all the processing requests placed by the physics analysis groups, and interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put in production in 2013. McM is based on recent server infrastructure technology (CherryPy + AngularJS) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities, and the extension of its capability to monitor the status and advancement of the events production.

  20. Realistic Monte Carlo Simulation of PEN Apparatus

    NASA Astrophysics Data System (ADS)

    Glaser, Charles; PEN Collaboration

    2015-04-01

    The PEN collaboration undertook to measure the π+ → e+νe(γ) branching ratio with a relative uncertainty of 5 × 10⁻⁴ or less at the Paul Scherrer Institute. This observable is highly susceptible to small non V-A contributions, i.e., non-Standard Model physics. The detector system included a beam counter, a mini TPC for beam tracking, an active degrader and stopping target, MWPCs and a plastic scintillator hodoscope for particle tracking and identification, and a spherical CsI EM calorimeter. GEANT 4 Monte Carlo simulation is integral to the analysis as it is used to generate fully realistic events for all pion and muon decay channels. The simulated events are constructed so as to match the pion beam profiles, divergence, and momentum distribution. Ensuring the placement of individual detector components at the sub-millimeter level and proper construction of active target waveforms and associated noise enables us to more fully understand temporal and geometrical acceptances as well as energy, time, and positional resolutions and calibrations in the detector system. This ultimately leads to reliable discrimination of background events, thereby improving cut-based or multivariate branching ratio extraction. Work supported by NSF Grants PHY-0970013, 1307328, and others.

  1. Accelerated Monte Carlo Methods for Coulomb Collisions

    NASA Astrophysics Data System (ADS)

    Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce

    2014-03-01

    We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ɛ in the numerical solution to collisional plasma problems from O(ɛ⁻³) - for the standard state-of-the-art Langevin and binary collision algorithms - to a theoretically optimal O(ɛ⁻²) scaling, when used in conjunction with an underlying Milstein discretization to the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
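The MLMC idea of coupling coarse and fine paths through shared Brownian increments can be sketched for a scalar Langevin-type SDE. The equation, parameters, and level schedule below are illustrative assumptions, not the paper's collisional model; the Milstein correction also vanishes here because the noise is additive, so Euler-Maruyama suffices:

```python
import random, math

def mlmc_estimate(levels=4, n0=4000, x0=1.0, T=1.0, a=-1.0, b=0.3, seed=0):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = a*X dt + b dW.
    Level l uses 2**(l+1) Euler steps; each coarse step reuses the sum of
    two fine Brownian increments, so fine-minus-coarse corrections have
    small variance and finer levels need fewer samples."""
    rng = random.Random(seed)
    total = 0.0
    for l in range(levels + 1):
        n = max(n0 >> l, 100)                    # fewer samples per finer level
        steps_f = 2 ** (l + 1)
        dt_f = T / steps_f
        acc = 0.0
        for _ in range(n):
            xf = xc = x0
            dw_pair = [0.0, 0.0]
            for k in range(steps_f):
                dw = rng.gauss(0.0, math.sqrt(dt_f))
                xf += a * xf * dt_f + b * dw     # fine path
                dw_pair[k % 2] = dw
                if k % 2 == 1 and l > 0:         # coarse step = two fine steps
                    xc += a * xc * (2 * dt_f) + b * (dw_pair[0] + dw_pair[1])
            acc += xf if l == 0 else xf - xc     # telescoping correction
        total += acc / n
    return total

est = mlmc_estimate()
# exact mean is x0 * exp(a*T) = e^-1 for these illustrative parameters
```

The telescoping sum reproduces the finest-level estimator in expectation while concentrating most samples on the cheap coarse level, which is the source of the O(ɛ⁻³) → O(ɛ⁻²) cost reduction the abstract quotes.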

  2. Fast Monte Carlo full spectrum scene simulation

    NASA Astrophysics Data System (ADS)

    Richtsmeier, Steven; Sundberg, Robert; Haren, Raymond; Clark, Frank O.

    2006-05-01

    This paper discusses the formulation and implementation of an acceleration approach for the MCScene code, a high fidelity model for full optical spectrum (UV to LWIR) hyperspectral image (HSI) simulation. The MCScene simulation is based on a Direct Simulation Monte Carlo approach for modeling 3D atmospheric radiative transport, as well as spatially inhomogeneous surfaces including surface BRDF effects. The model includes treatment of land and ocean surfaces, 3D terrain, 3D surface objects, and effects of finite clouds with surface shadowing. This paper will review an acceleration algorithm that exploits spectral redundancies in hyperspectral images. In this algorithm, the full scene is determined for a subset of spectral channels, and then this multispectral scene is unmixed into spectral end members and end member abundance maps. Next, pure end member pixels are determined at their full hyperspectral resolution, and the full hyperspectral scene is reconstructed from the hyperspectral end member spectra and the multispectral abundance maps. This algorithm effectively performs a hyperspectral simulation while requiring only the computational time of a multispectral simulation. The acceleration algorithm will be demonstrated, and errors associated with the algorithm will be analyzed.
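The unmix-and-reconstruct acceleration can be sketched in a few lines of linear algebra. The function below is a simplified stand-in for the MCScene algorithm, assuming end member spectra are already known and using a plain least-squares abundance solve (the real code determines end members from the multispectral scene itself):

```python
import numpy as np

def accelerate_hyperspectral(ms_scene, endmembers_ms, endmembers_hs):
    """Solve per-pixel abundances from the multispectral scene, then
    reconstruct the full hyperspectral cube as abundance-weighted
    end member spectra.
    ms_scene: (npix, n_ms); endmembers_ms: (n_end, n_ms);
    endmembers_hs: (n_end, n_hs)."""
    # Least-squares abundances: ms_scene ~ abundances @ endmembers_ms
    A, *_ = np.linalg.lstsq(endmembers_ms.T, ms_scene.T, rcond=None)
    abundances = A.T                       # (npix, n_end)
    return abundances @ endmembers_hs      # (npix, n_hs)

# Toy demonstration with two synthetic end members over 50 channels
rng = np.random.default_rng(0)
e_hs = rng.random((2, 50))                 # "hyperspectral" end member spectra
keep = np.arange(0, 50, 5)                 # channel subset = "multispectral"
e_ms = e_hs[:, keep]
true_ab = rng.random((100, 2))             # per-pixel abundance maps
scene_ms = true_ab @ e_ms                  # simulated multispectral scene
recon = accelerate_hyperspectral(scene_ms, e_ms, e_hs)
err = np.abs(recon - true_ab @ e_hs).max()
```

Only the channel-subset scene ever has to be simulated in full, so the cost is that of a multispectral run while the output has hyperspectral resolution; the paper's error analysis concerns exactly how far real scenes depart from this ideal linear-mixture case.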

  3. Fast Monte Carlo Full Spectrum Scene Simulation

    NASA Astrophysics Data System (ADS)

    Richtsmeier, Steven; Sundberg, Robert; Clark, Frank O.

    2009-03-01

    This paper discusses the formulation and implementation of an acceleration approach for the MCScene code, a high fidelity model for full optical spectrum (UV to LWIR) hyperspectral image (HSI) simulation. The MCScene simulation is based on a Direct Simulation Monte Carlo approach for modeling 3D atmospheric radiative transport, as well as spatially inhomogeneous surfaces including surface BRDF effects. The model includes treatment of land and ocean surfaces, 3D terrain, 3D surface objects, and effects of finite clouds with surface shadowing. This paper will review an acceleration algorithm that exploits spectral redundancies in hyperspectral images. In this algorithm, the full scene is determined for a subset of spectral channels, and then this multispectral scene is unmixed into spectral end members and end member abundance maps. Next, pure end member pixels are determined at their full hyperspectral resolution, and the full hyperspectral scene is reconstructed from the hyperspectral end member spectra and the multispectral abundance maps. This algorithm effectively performs a hyperspectral simulation while requiring only the computational time of a multispectral simulation. The acceleration algorithm will be demonstrated, and errors associated with the algorithm will be analyzed.

  4. Multideterminant Wave Functions in Quantum Monte Carlo.

    PubMed

    Morales, Miguel A; McMinis, Jeremy; Clark, Bryan K; Kim, Jeongnim; Scuseria, Gustavo E

    2012-07-10

    Quantum Monte Carlo (QMC) methods have received considerable attention over past decades due to their great promise for providing a direct solution to the many-body Schrödinger equation in electronic systems. Thanks to their low scaling with the number of particles, QMC methods present a compelling competitive alternative for the accurate study of large molecular systems and solid state calculations. In spite of such promise, the method has not permeated the quantum chemistry community broadly, mainly because of the fixed-node error, which can be large and whose control is difficult. In this Perspective, we present a systematic application of large scale multideterminant expansions in QMC and report on its impressive performance with first row dimers and the 55 molecules of the G1 test set. We demonstrate the potential of this strategy for systematically reducing the fixed-node error in the wave function and for achieving chemical accuracy in energy predictions. When compared to traditional quantum chemistry methods like MP2, CCSD(T), and various DFT approximations, the QMC results show a marked improvement over all of them. In fact, only the explicitly correlated CCSD(T) method with a large basis set produces more accurate results. Further developments in trial wave functions and algorithmic improvements appear promising for rendering QMC as the benchmark standard in large electronic systems. PMID:26588949

  5. Finding Planet Nine: a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-03-01

    Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30°, and an argument of perihelion of 150°. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters and focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one out of four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node and two of them are compatible with an apsidal antialignment scenario. In addition and after studying the current statistics of ETNOs, a cautionary note on the robustness of the perihelia clustering is presented.

  6. DETERMINING UNCERTAINTY IN PHYSICAL PARAMETER MEASUREMENTS BY MONTE CARLO SIMULATION

    EPA Science Inventory

    A statistical approach, often called Monte Carlo Simulation, has been used to examine propagation of error with measurement of several parameters important in predicting environmental transport of chemicals. These parameters are vapor pressure, water solubility, octanol-water par...
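A generic version of this error-propagation approach can be sketched as follows; the parameter values and the p/c relation are hypothetical stand-ins for the vapor pressure and water solubility inputs mentioned above:

```python
import random, math

def propagate(f, means, sds, n=20000, seed=0):
    """Monte Carlo propagation of measurement error: sample each input
    from a normal distribution about its measured value and collect the
    mean and standard deviation of the derived quantity."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n):
        xs = [rng.gauss(m, s) for m, s in zip(means, sds)]
        outs.append(f(*xs))
    mean = sum(outs) / n
    sd = math.sqrt(sum((o - mean) ** 2 for o in outs) / (n - 1))
    return mean, sd

# Hypothetical derived quantity: a Henry's-law-like ratio H = p / c from
# a vapor pressure p = 100 +/- 5 and a water solubility c = 2.0 +/- 0.1
# (5% relative uncertainty each), so H ~ 50 with ~7% relative spread.
mean_h, sd_h = propagate(lambda p, c: p / c, [100.0, 2.0], [5.0, 0.1])
```

Unlike first-order (Taylor-series) propagation, the sampled distribution also captures nonlinearity and skew in the derived quantity, which is why the approach is useful for transport-model parameters.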

  7. Monte Carlo computations of the hadronic mass spectrum

    SciTech Connect

    Rebbi, C.

    1982-01-01

    This paper summarizes two talks presented at the Orbis Scientiae Meeting, 1982. Monte Carlo results on the mass gap (or glueball mass) and on the masses of the lightest quark-model hadrons are illustrated.

  8. OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2014-03-31

    The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.

  9. A modified Monte Carlo model for the ionospheric heating rates

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.

    1972-01-01

    A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced at distances apart corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
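The gap-filling step can be illustrated with a simple stand-in: the paper uses a nonlinear interpolation procedure, but the sketch below uses plain linear interpolation between Monte Carlo "anchor" heating-rate profiles to show the structure of the idea (anchor spacing, profiles, and values are invented; queries are assumed to lie within the anchor range):

```python
def synthetic_profiles(anchor_alts, anchor_profiles, query_alts):
    """Fill gaps between Monte Carlo-generated heating-rate profiles by
    interpolating between the two bracketing anchors. Linear blending
    stands in for the paper's nonlinear procedure."""
    out = []
    for q in query_alts:
        # locate the bracketing pair of Monte Carlo anchors
        i = max(j for j, a in enumerate(anchor_alts) if a <= q)
        i = min(i, len(anchor_alts) - 2)
        a0, a1 = anchor_alts[i], anchor_alts[i + 1]
        t = (q - a0) / (a1 - a0)
        p0, p1 = anchor_profiles[i], anchor_profiles[i + 1]
        out.append([(1 - t) * u + t * v for u, v in zip(p0, p1)])
    return out

anchors = [100.0, 200.0, 300.0]                  # altitudes with full MC runs
profiles = [[1.0, 2.0], [2.0, 4.0], [4.0, 8.0]]  # toy heating-rate profiles
filled = synthetic_profiles(anchors, profiles, [150.0, 250.0])
```

The saving is the same as in the paper: expensive Monte Carlo runs are needed only at the anchors, and the accuracy can be checked by shrinking the gaps and comparing against direct runs.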

  10. Monte Carlo simulations in X-ray imaging

    NASA Astrophysics Data System (ADS)

    Giersch, Jürgen; Durst, Jürgen

    2008-06-01

    Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.

  11. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Refined analysis and modeling of nuclear reactors can overload the memory of a single-core processor. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  12. Development of Monte Carlo Capability for Orion Parachute Simulations

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. 
The Monte Carlo tool described in this paper has proven useful in planning several Crew Exploration Vehicle parachute tests.
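The perturb-execute-collect loop described above can be sketched generically. Here `core_sim`, the dispersion-rule format, and the parameter names are illustrative assumptions, not the actual interfaces of DSS, DSSA, or DTVSim:

```python
import random

def run_monte_carlo(core_sim, nominal, dispersion_rules, n_runs=500, seed=0):
    """Perturb the nominal input set according to analyst-defined
    dispersion rules, execute the core simulation on each dispersed
    input set, and store the results for analysis."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        dispersed = dict(nominal)
        for name, (dist, a, b) in dispersion_rules.items():
            if dist == "normal":
                dispersed[name] = rng.gauss(a, b)      # a = mean, b = std
            elif dist == "uniform":
                dispersed[name] = rng.uniform(a, b)    # a = min, b = max
        results.append(core_sim(dispersed))
    return results

# Hypothetical core simulation: equilibrium rate of descent
# v = sqrt(2 W / (rho * Cd * S)) under a parachute.
def descent_rate(p):
    return (2.0 * p["weight"] / (p["rho"] * p["cd"] * p["area"])) ** 0.5

nominal = {"weight": 9000.0, "rho": 1.225, "cd": 0.8, "area": 1000.0}
rules = {"cd": ("normal", 0.8, 0.05), "rho": ("uniform", 1.1, 1.3)}
rates = run_monte_carlo(descent_rate, nominal, rules)
```

Separating the rule set from the core simulation is what lets one dispersion driver serve several simulations with different architectures, as the shared output structure in the paper does for the three NASA tools.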

  13. Neutron spectral unfolding using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    O'Brien, Keran; Sanna, Robert

    A solution to the neutron unfolding problem, without approximation or a priori assumptions as to spectral shape, has been devised, based on the Monte Carlo method, and its rate of convergence derived. By application to synthesized measurements with controlled and varying levels of error, the effect of measurement error has been investigated. This Monte Carlo method has also been applied to experimental stray neutron data from measurements inside a reactor containment vessel.

  14. Multiscale Monte Carlo equilibration: Pure Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Endres, Michael G.; Brower, Richard C.; Detmold, William; Orginos, Kostas; Pochinsky, Andrew V.

    2015-12-01

    We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.

  15. Study of the Transition Flow Regime using Monte Carlo Methods

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1999-01-01

    This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  16. Shift: A Massively Parallel Monte Carlo Radiation Transport Package

    SciTech Connect

    Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P

    2015-01-01

    This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.

  17. Correlated sampling in quantum Monte Carlo: A route to forces

    SciTech Connect

    Filippi, Claudia; Umrigar, C. J.

    2000-06-15

    In order to find the equilibrium geometries of molecules and solids and to perform ab initio molecular dynamics, it is necessary to calculate the forces on the nuclei. We present a correlated sampling method to efficiently calculate numerical forces and potential energy surfaces in diffusion Monte Carlo. This method employs a coordinate transformation, earlier used in variational Monte Carlo, to greatly reduce the statistical error. Results are presented for first-row diatomic molecules. (c) 2000 The American Physical Society.

  18. Monte Carlo methods and applications in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  19. Multiscale Monte Carlo equilibration: Pure Yang-Mills theory

    DOE PAGES Beta

    Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.

    2015-12-29

    In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.

  20. DPEMC: A Monte Carlo for double diffraction

    NASA Astrophysics Data System (ADS)

    Boonekamp, M.; Kúcs, T.

    2005-05-01

    We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations.
    Program summary
    Title of the program: DPEMC, version 2.4
    Catalogue identifier: ADVF
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVF
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems
    Operating system: UNIX; Linux
    Programming language used: FORTRAN 77
    High speed storage required: <25 MB
    No. of lines in distributed program, including test data, etc.: 71 399
    No. of bytes in distributed program, including test data, etc.: 639 950
    Distribution format: tar.gz
    Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. C 20 (2001) 599; Eur. Phys. J. C 23 (2002) 311]. This program implements some of the more significant ones, enabling the simulation of central particle production through color singlet exchange between interacting protons or antiprotons.
    Method of solution: The Monte Carlo method is used to simulate all elementary 2 → 2 and 2 → 1 processes available in HERWIG. The color singlet exchanges in DPEMC are implemented as functions reweighting the photon flux already present in HERWIG.
    Restriction on the complexity of the problem: Since the program relies extensively on HERWIG, the limitations are the same as in [G. Marchesini, B.R. Webber, G. Abbiendi, I.G. Knowles, M.H. Seymour, L. Stanco, Comput. Phys. Comm. 67 (1992) 465; G. Corcella, I.G. Knowles, G. Marchesini, S. Moretti, K. Odagiri, P. Richardson, M. Seymour, B. Webber, JHEP 0101 (2001) 010].
    Typical running time: approximate times on an 800 MHz Pentium III are 5-20 min per 10 000 unweighted events, depending on the process under consideration.

  1. Monte Carlo study of microdosimetric diamond detectors

    NASA Astrophysics Data System (ADS)

    Solevi, Paola; Magrin, Giulio; Moro, Davide; Mayer, Ramona

    2015-09-01

    Ion-beam therapy provides a high dose conformity and increased radiobiological effectiveness with respect to conventional radiation-therapy. Strict constraints on the maximum uncertainty on the biological weighted dose and consequently on the biological weighting factor require the determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However the experimental determination of radiation quality, in particular for an internal target, is not simple and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently chemical vapor deposition (CVD) diamond detectors have been suggested as ion-beam therapy microdosimeters. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However the sensitive volume of solid state detectors significantly deviates from conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a deep study of the secondary to primary radiation, of the wall effects and of the impact of the orientation of the detector with respect to the radiation field. The present work intends to study through Monte Carlo simulations the impact of the detector geometry on the determination of radiation quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy L and the lineal energy y are investigated for different detector cross sections, by varying the particle type (carbon ions and protons) and its energy.
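    The geometry dependence discussed above can be made concrete with Cauchy's mean-chord-length theorem for a convex body, l_bar = 4V/S; a sketch for a right-cylindrical sensitive volume (the dimensions below are illustrative, not taken from the study):

```python
import math

def mean_chord_length(diameter, height):
    # Cauchy's theorem for a convex body in an isotropic field: l_bar = 4V/S.
    r = diameter / 2.0
    volume = math.pi * r * r * height
    surface = 2.0 * math.pi * r * height + 2.0 * math.pi * r * r
    return 4.0 * volume / surface

def lineal_energy(energy_keV, diameter, height):
    # Microdosimetric lineal energy y = energy imparted / mean chord length.
    return energy_keV / mean_chord_length(diameter, height)

# A flat solid-state volume whose diameter is 1000x its height (in um):
# the mean chord approaches twice the height, so the "sensitive thickness"
# rather than the diameter controls y.
print(mean_chord_length(300.0, 0.3))
```

    For such extreme aspect ratios the mean chord is essentially set by the thickness alone, which is why the detector's orientation with respect to the radiation field becomes so important.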

  2. Monte Carlo study of microdosimetric diamond detectors.

    PubMed

    Solevi, Paola; Magrin, Giulio; Moro, Davide; Mayer, Ramona

    2015-09-21

    Ion-beam therapy provides a high dose conformity and increased radiobiological effectiveness with respect to conventional radiation-therapy. Strict constraints on the maximum uncertainty on the biological weighted dose and consequently on the biological weighting factor require the determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However the experimental determination of radiation quality, in particular for an internal target, is not simple and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently chemical vapor deposition (CVD) diamond detectors have been suggested as ion-beam therapy microdosimeters. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However the sensitive volume of solid state detectors significantly deviates from conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a deep study of the secondary to primary radiation, of the wall effects and of the impact of the orientation of the detector with respect to the radiation field. The present work intends to study through Monte Carlo simulations the impact of the detector geometry on the determination of radiation quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy L and the lineal energy y are investigated for different detector cross sections, by varying the particle type (carbon ions and protons) and its energy. PMID:26309235

  3. Lattice Monte Carlo simulations of polymer melts.

    PubMed

    Hsu, Hsiao-Ping

    2014-12-21

    We use Monte Carlo simulations to study polymer melts consisting of fully flexible and moderately stiff chains in the bond fluctuation model at a volume fraction of 0.5. In order to reduce the local density fluctuations, we test a pre-packing process for the preparation of the initial configurations of the polymer melts, before the excluded volume interaction is switched on completely. This process leads to a significantly faster decrease of the number of overlapping monomers on the lattice. This is useful for simulating very large systems, where the statistical properties of the model with a marginally incomplete elimination of excluded volume violations are the same as those of the model with strictly excluded volume. We find that the internal mean square end-to-end distance for moderately stiff chains in a melt can be very well described by a freely rotating chain model with a precise estimate of the bond-bond orientational correlation between two successive bond vectors in equilibrium. The plot of the probability distributions of the reduced end-to-end distance of chains of different stiffness also shows that the data collapse is excellent and is described very well by the Gaussian distribution for ideal chains. However, while our results confirm the systematic deviations from Gaussian statistics in the chain structure factor Sc(q) [minimum in the Kratky plot] found by Wittmer et al. [EPL 77, 56003 (2007)] for fully flexible chains in a melt, we show that for the available chain lengths these deviations are no longer visible when chain stiffness is included. The mean square bond length and the compressibility estimated from collective structure factors depend slightly on the stiffness of the chains. PMID:25527957
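    The freely rotating chain description invoked above has a closed form for the mean-square end-to-end distance in terms of the bond-bond orientational correlation c = <cos(theta)>; a short numerical check (parameter values are illustrative):

```python
def frc_mean_square_r(n_bonds, bond_length, c):
    # Freely rotating chain:
    #   <R^2> = n b^2 [ (1+c)/(1-c) - 2c (1 - c^n) / (n (1-c)^2) ],
    # where c = <cos(theta)> is the bond-bond orientational correlation.
    b2 = bond_length ** 2
    return n_bonds * b2 * ((1.0 + c) / (1.0 - c)
                           - 2.0 * c * (1.0 - c ** n_bonds)
                           / (n_bonds * (1.0 - c) ** 2))

print(frc_mean_square_r(100, 1.0, 0.0))  # -> 100.0, the ideal-chain limit
print(frc_mean_square_r(100, 1.0, 0.4))  # stiffer chains swell beyond n b^2
```

    Fitting c to the measured correlation between successive bond vectors is what turns this one-parameter formula into the "very good" description of the melt data reported in the abstract.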

  4. kmos: A lattice kinetic Monte Carlo framework

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten

    2014-07-01

    Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
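    The efficiency-determining step that kmos generates optimized code for, selecting the next event with probability proportional to its rate and advancing the clock by an exponential waiting time, can be sketched in a few lines. This is a generic rejection-free (BKL/Gillespie-type) lattice kMC on a 1D adsorption/desorption toy model, not the kmos API:

```python
import math
import random

def kmc_run(n_sites, k_ads, k_des, n_steps, seed=0):
    """Rejection-free lattice kMC: adsorption on empty sites,
    desorption from occupied sites, on a 1D lattice."""
    rng = random.Random(seed)
    lattice = [0] * n_sites
    t = 0.0
    for _ in range(n_steps):
        # Rates of all currently available events.
        rates = [k_ads if occ == 0 else k_des for occ in lattice]
        total = sum(rates)
        # Select one event with probability proportional to its rate.
        x = rng.random() * total
        for site, rate in enumerate(rates):
            x -= rate
            if x <= 0.0:
                lattice[site] ^= 1  # execute the event: toggle occupation
                break
        # Advance time by an exponentially distributed waiting time.
        t += -math.log(1.0 - rng.random()) / total
    return sum(lattice) / n_sites, t

coverage, t_final = kmc_run(50, k_ads=2.0, k_des=1.0, n_steps=5000)
print(coverage)  # fluctuates around the Langmuir value k_ads/(k_ads+k_des) = 2/3
```

    A production code like kmos avoids the O(N) rate scan above by updating only the events affected by the last executed event, which is what makes its runtime essentially independent of lattice size.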

  5. Monte Carlo simulation of large electron fields

    PubMed Central

    Faddegon, Bruce A; Perl, Joseph; Asai, Makoto

    2010-01-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different physics lists, were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the 6 electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the buildup region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy. PMID:18296775

  6. Monte-Carlo simulation of Callisto's exosphere

    NASA Astrophysics Data System (ADS)

    Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.

    2015-12-01

    We model Callisto's exosphere based on its ice as well as non-ice surface via the use of a Monte-Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sun-lit side, their density profiles (with the exception of H and H2) decrease much more rapidly than the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.

  7. Monte Carlo simulations of the photospheric process

    NASA Astrophysics Data System (ADS)

    Santana, Rodolfo; Crumley, Patrick; Hernández, Roberto A.; Kumar, Pawan

    2016-02-01

    We present a Monte Carlo (MC) code we wrote to simulate the photospheric process and to study the photospheric spectrum above the peak energy. Our simulations were performed with a photon-to-electron ratio Nγ/Ne = 10^5, as determined by observations of the Gamma-ray Burst prompt emission. We searched an exhaustive parameter space to determine if the photospheric process can match the observed high-energy spectrum of the prompt emission. If we do not consider electron re-heating, we determined that the best conditions to produce the observed high-energy spectrum are low photon temperatures and high optical depths. However, for these simulations, the spectrum peaks at an energy below 300 keV by a factor of ˜10. For the cases we consider with higher photon temperatures and lower optical depths, we demonstrate that additional energy in the electrons is required to produce a power-law spectrum above the peak energy. By considering electron re-heating near the photosphere, the spectra for these simulations have a peak energy ˜300 keV and a power-law spectrum extending to at least 10 MeV with a spectral index consistent with the prompt emission observations. We also performed simulations for different values of Nγ/Ne and determined that the simulation results are very sensitive to Nγ/Ne. Lastly, in addition to Comptonizing a blackbody spectrum, we also simulate the Comptonization of a fν ∝ ν^(-1/2) fast-cooled synchrotron spectrum. The spectrum for these simulations peaks at ˜10^4 keV, with a flat spectrum fν ∝ ν^0 below the peak energy.
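    The elementary ingredient of such a Comptonization Monte Carlo is the single-scattering energy exchange. For an electron at rest it is the standard Compton formula; in the photospheric problem it is the electrons' thermal and re-heated motion that upscatters photons, which this rest-frame sketch deliberately omits:

```python
ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_energy(e_keV, cos_theta):
    # Photon energy after scattering through angle theta off an
    # electron at rest: E' = E / (1 + (E / m_e c^2)(1 - cos(theta))).
    return e_keV / (1.0 + (e_keV / ME_C2_KEV) * (1.0 - cos_theta))

print(compton_energy(300.0, 1.0))   # forward scattering leaves E unchanged
print(compton_energy(300.0, -1.0))  # backscatter transfers the most energy
```

    A full simulation repeats draws like this over electron velocity distributions and optical-depth-many scatterings, which is how the power law above the peak is built up.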

  8. Monte Carlo Simulations of Feldspar Dissolution Kinetics

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Luttge, A.

    2003-12-01

    We present a kinetic model based on Monte Carlo simulation to better understand the fundamental behavior of mineral dissolution. This model is applied to the dissolution of crystallographically defined Ca-Na feldspar surfaces. Adsorption energies of H2O and H3O+ molecules to Al-O-Si and Si-O-Si bonds and activation energies to break these bonds were calculated using ab initio and DFT techniques (Xiao and Luttge, 2002). These data were obtained by taking into account both solvation effects and the immersion of molecular clusters in a dielectric continuum, i.e., water. The role of cations in the dissolution of the feldspar structure is assumed trivial and most attention is focused on the behavior of Si/Al tetrahedra. The role of the crystal structure, order/disorder phenomena, bonding characteristics of Si and Al atoms, and possible surface defects like screw dislocations and point defects are discussed. Our model allows detailed investigation of the endmembers of the plagioclase series, albite and anorthite. We analyzed the movement of steps, congruency of dissolution, inhibition, anisotropy effects, and surface composition as a function of both saturation state of the solution and crystallographic orientation of the crystal surface. Additionally, we compared the dissolution rates of albite and anorthite in the context of their ratio of Si-O-Si to Al-O-Si bonds. These results will be extended to the whole feldspar series so as to predict the fundamental behavior of feldspar dissolution and evaluate the general role of Si-O-Si and Al-O-Si bond behavior.

  9. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    David Ceperley

    2011-03-02

    The major achievements enabled by the QMC Endstation grant include:
    * Performance improvement on clusters of x86 multi-core systems, especially on Cray XT systems
    * New and improved methods for wavefunction optimization
    * New forms of trial wavefunctions
    * Implementation of the full application on NVIDIA GPUs using CUDA
    The scaling studies of QMCPACK on large-scale systems show excellent parallel efficiency up to 216K cores on Jaguarpf (Cray XT5). The GPU implementation shows speedups of 10-15x over the CPU implementation on an older generation of x86 processors. We have implemented a hybrid OpenMP/MPI scheme in QMC to take advantage of the multi-core shared-memory processors of petascale systems. Our hybrid scheme has several advantages over the standard MPI-only scheme:
    * Memory optimized: large read-only data used to store one-body orbitals and other shared properties representing the trial wave function and many-body Hamiltonian can be shared among threads, which reduces the memory footprint of a large-scale problem.
    * Cache optimized: the data associated with an active walker are in cache during the compute-intensive drift-diffusion process, and the operations on a walker are optimized for cache reuse. Thread-local objects are used to ensure data affinity to a thread.
    * Load balanced: walkers in an ensemble are evenly distributed among threads and MPI tasks. The two-level parallelism reduces the population imbalance among MPI tasks and reduces the number of point-to-point communications of large messages (serialized objects) for the walker exchange.
    * Communication optimized: the communication overhead, especially for the collective operations necessary to determine the trial energy ET and measure the properties of an ensemble, is significantly lowered by using fewer MPI tasks.
    The multiple forms of parallelism afforded by QMC algorithms make them ideal candidates for acceleration in the many-core paradigm.
We presented the results of our effort to port the QMCPACK simulation code to the NVIDIA CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector finding an accurate measure of the renormalization factor that we compared with quantum-Monte-Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. 
Our calculated results are in good agreement with the experiment. We have been studying the heat of formation for various Kubas complexes of molecular hydrogen on Ti(1,2)ethylene-nH2 using Diffusion Monte Carlo. This work has been started and is ongoing. We are studying systems involving 1 and 2 Ti bonding sites with up to 10 hydrogen molecules in numerous configurations. This work will establish a benchmark that will test the accuracy of density functional calculations and establish the feasibility of our methods for similar systems.

  10. Monte Carlo simulations of intensity profiles for energetic particle propagation

    NASA Astrophysics Data System (ADS)

    Tautz, R. C.; Bolte, J.; Shalchi, A.

    2016-02-01

    Aims: Numerical test-particle simulations are a reliable and frequently used tool for testing analytical transport theories and predicting mean-free paths. The comparison between solutions of the diffusion equation and the particle flux is used to critically judge the applicability of diffusion to the stochastic transport of energetic particles in magnetized turbulence. Methods: A Monte Carlo simulation code is extended to allow for the generation of intensity profiles and anisotropy-time profiles. Because of the relatively low number density of computational particles, a kernel function has to be used to describe the spatial extent of each particle. Results: The obtained intensity profiles are interpreted as solutions of the diffusion equation by inserting the diffusion coefficients that have been directly determined from the mean-square displacements. The comparison shows that the time dependence of the diffusion coefficients needs to be considered, in particular the initial ballistic phase and the often subdiffusive perpendicular coefficient. Conclusions: It is argued that the perpendicular component of the distribution function is essential if agreement between the diffusion solution and the simulated flux is to be obtained. In addition, time-dependent diffusion can provide a better description than the classic diffusion equation only after the initial ballistic phase.
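    The kernel idea described in the Methods can be sketched directly: each computational particle is smeared with a normalized Gaussian kernel, so that a modest number of particles still yields a smooth intensity profile (the grid, kernel width, and particle positions below are arbitrary illustrations):

```python
import math

def intensity_profile(positions, grid, width):
    # Kernel estimate of the particle intensity: each particle
    # contributes a normalized Gaussian of the given width.
    norm = 1.0 / (width * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((x - p) / width) ** 2)
                       for p in positions)
            for x in grid]

particles = [-0.2, 0.0, 0.1, 0.3]
grid = [0.1 * i - 1.0 for i in range(21)]
profile = intensity_profile(particles, grid, width=0.25)
# The integral of the profile recovers the number of particles.
print(sum(profile) * 0.1)
```

    The kernel width trades smoothness against spatial resolution, exactly the compromise forced by the relatively low number density of computational particles noted in the abstract.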

  11. Probability Forecasting Using Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Duncan, M.; Frisbee, J.; Wysack, J.

    2014-09-01

    Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft. Thus new, more sophisticated collision threat characterization methods must be implemented. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high-risk and potentially high-risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholder's course of action. Constructing a method for estimating how the collision probability will evolve therefore improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as the time to the event is reduced. Collision probability forecasting is a predictive process where the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set of possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trial.
This yields a collision probability distribution given known, predicted uncertainty. This paper presents the details of the collision probability forecasting method. We examine various conjunction event scenarios and numerically demonstrate the utility of this approach in typical event scenarios. We explore the utility of a probability-based track scenario simulation that models expected tracking data frequency as the tasking levels are increased. The resulting orbital uncertainty is subsequently used in the forecasting algorithm.
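    A stripped-down version of the underlying Monte Carlo collision probability computation (a 2D isotropic Gaussian in the encounter plane with hypothetical numbers; the operational method propagates full state covariances for both objects):

```python
import random

def collision_probability(mean_miss, sigma, hard_body_radius,
                          n_trials, seed=0):
    """Sample the relative position at closest approach from its
    uncertainty and count trials inside the combined hard-body radius."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        dx = rng.gauss(mean_miss[0], sigma)
        dy = rng.gauss(mean_miss[1], sigma)
        if dx * dx + dy * dy <= hard_body_radius ** 2:
            hits += 1
    return hits / n_trials

# 200 m nominal miss distance, 100 m position uncertainty, 20 m combined radius.
pc = collision_probability((200.0, 0.0), sigma=100.0,
                           hard_body_radius=20.0, n_trials=200_000)
print(pc)
```

    Repeating this calculation with the covariances expected at successively later tracking epochs is what turns a single probability into the forecast distribution the paper describes.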

  12. Coherent Scattering Imaging Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Hassan, Laila Abdulgalil Rafik

    Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low-dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal-to-noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source-to-sample distance was determined to be such that the source should be located at the focal distance of the grid. A carcinoma lump of 0.5 x 0.5 x 0.5 cm^3 was detectable, which is reasonable considering the high noise due to the use of a relatively small number of incident photons for computational reasons.
    Further study is needed to assess the effects of breast density and breast thickness on detectability. Coherent scatter analysis using a wide slot setup is promising as an enhancement for screening mammography. Unlike conventional mammography, which depends on attenuation differences, coherent scatter imaging gives new information based on tissue typing. A combination of the two methods would yield high spatial resolution from the conventional mammography and high contrast from coherent scatter imaging.

  13. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods.

    PubMed

    Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C

    2010-12-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex, data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276

  14. Coupling Deterministic and Monte Carlo Transport Methods for the Simulation of Gamma-Ray Spectroscopy Scenarios

    SciTech Connect

    Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

    2008-10-31

    Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

  15. A novel Kinetic Monte Carlo algorithm for Non-Equilibrium Simulations

    NASA Astrophysics Data System (ADS)

    Jha, Prateek; Kuzovkov, Vladimir; Grzybowski, Bartosz; Olvera de La Cruz, Monica

    2012-02-01

    We have developed an off-lattice kinetic Monte Carlo simulation scheme for reaction-diffusion problems in soft matter systems. The transition probabilities in the Monte Carlo scheme are taken to be identical to the transition rates in a renormalized master equation of the diffusion process, and match those of the Glauber dynamics of the Ising model. Our scheme provides several advantages over the Brownian dynamics technique for non-equilibrium simulations. Since particle displacements are accepted/rejected in a Monte Carlo fashion, as opposed to moving particles along a stochastic equation of motion, nonphysical movements (e.g., violation of a hard-core assumption) are not possible (these moves have zero acceptance). Further, the absence of a stochastic ``noise'' term resolves the computational difficulties associated with generating statistically independent trajectories with definitive mean properties. Finally, since the time step is independent of the magnitude of the interaction forces, much longer time steps can be employed than in Brownian dynamics. We discuss the applications of this scheme for dynamic self-assembly of photo-switchable nanoparticles and dynamical problems in polymeric systems.
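The acceptance rule described above can be sketched as follows. This is a minimal toy implementation, not the authors' code; the energy function, overlap test, and maximum displacement are placeholder assumptions supplied by the caller.

```python
import math
import random

def glauber_accept(dE, beta, rng=random.random):
    """Glauber acceptance: probability 1 / (1 + exp(beta * dE))."""
    return rng() < 1.0 / (1.0 + math.exp(beta * dE))

def kmc_step(positions, energy_fn, beta, max_disp, overlaps):
    """Attempt one off-lattice displacement move on a random particle;
    hard-core overlaps are rejected outright (zero acceptance)."""
    i = random.randrange(len(positions))
    old = positions[i]
    trial = tuple(x + random.uniform(-max_disp, max_disp) for x in old)
    if any(j != i and overlaps(trial, p) for j, p in enumerate(positions)):
        return False
    dE = energy_fn(trial, i, positions) - energy_fn(old, i, positions)
    if glauber_accept(dE, beta):
        positions[i] = trial
        return True
    return False
```

At dE = 0 the Glauber rule accepts with probability 1/2, and strongly uphill moves are almost never accepted, which reproduces the detailed-balance behavior the abstract relies on.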

  16. Calibration-constrained Monte Carlo analysis of highly parameterized models using subspace techniques

    NASA Astrophysics Data System (ADS)

    Tonkin, Matthew; Doherty, John

    2009-12-01

    We describe a subspace Monte Carlo (SSMC) technique that reduces the burden of calibration-constrained Monte Carlo when undertaken with highly parameterized models. When Monte Carlo methods are used to evaluate the uncertainty in model outputs, ensuring that parameter realizations reproduce the calibration data requires many model runs to condition each realization. In the new SSMC approach, the model is first calibrated using a subspace regularization method, ideally the hybrid Tikhonov-TSVD "superparameter" approach described by Tonkin and Doherty (2005). Sensitivities calculated with the calibrated model are used to define the calibration null-space, which is spanned by parameter combinations that have no effect on simulated equivalents to available observations. Next, a stochastic parameter generator is used to produce parameter realizations, and for each a difference is formed between the stochastic parameters and the calibrated parameters. This difference is projected onto the calibration null-space and added to the calibrated parameters. If the model is no longer calibrated, parameter combinations that span the calibration solution space are reestimated while retaining the null-space projected parameter differences as additive values. The recalibration can often be undertaken using existing sensitivities, so that conditioning requires only a small number of model runs. Using synthetic and real-world model applications we demonstrate that the SSMC approach is general (it is not limited to any particular model or any particular parameterization scheme) and that it can rapidly produce a large number of conditioned parameter sets.
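The null-space projection step can be sketched with a toy linear model (the matrix sizes and values below are illustrative assumptions, not taken from the paper): sensitivities define the Jacobian, its SVD yields a null-space basis, and the difference between a stochastic realization and the calibrated parameters is projected onto that basis before being added back.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear model obs = J @ p with 6 parameters and only 2
# observations, so a 4-dimensional calibration null-space exists.
J = rng.normal(size=(2, 6))        # sensitivities at the calibrated point
p_cal = rng.normal(size=6)         # calibrated parameter set

# Null-space basis from the SVD of the sensitivity (Jacobian) matrix.
_, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10))
V_null = Vt[rank:].T               # columns span the calibration null-space

# Stochastic realization: project its difference from the calibrated
# parameters onto the null-space and add it back.
p_stoch = rng.normal(size=6)
diff_null = V_null @ (V_null.T @ (p_stoch - p_cal))
p_cond = p_cal + diff_null

print(np.allclose(J @ p_cond, J @ p_cal))  # True: simulated outputs unchanged
```

For a linear model the projected realization reproduces the calibrated outputs exactly; the recalibration step in the paper handles the nonlinear case, where the projection alone is not sufficient.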

  17. Frequency domain optical tomography using a Monte Carlo perturbation method

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshihiro; Sakamoto, Hiroki

    2016-04-01

    A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. Even for ill-posed inverse problems, the frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions by reducing the cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.
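The idea of complex-valued particle weights can be illustrated with a toy one-dimensional walk. The geometry, coefficients, and modulation frequency below are invented for illustration and far simpler than the paper's 2-D heterogeneous model: each path segment attenuates the weight by exp(-mu_a*l) and rotates its phase by omega*l/c.

```python
import cmath
import math
import random

def freq_domain_walk(mu_s, mu_a, omega, c, slab_depth, n_photons, seed=0):
    """Toy 1-D frequency-domain MC: each path segment multiplies the
    complex photon weight by attenuation exp(-mu_a * l) and phase
    exp(-1j * omega * l / c); transmitted weights are accumulated."""
    random.seed(seed)
    total = 0j
    for _ in range(n_photons):
        z, direction, w = 0.0, 1.0, 1 + 0j
        while 0.0 <= z < slab_depth:
            l = -math.log(random.random()) / mu_s      # free path length
            w *= cmath.exp(-(mu_a + 1j * omega / c) * l)
            z += direction * l
            direction = random.choice((-1.0, 1.0))     # isotropic 1-D scatter
        if z >= slab_depth:                            # transmitted photon
            total += w
    return total / n_photons

T = freq_domain_walk(mu_s=10.0, mu_a=0.1, omega=2 * math.pi * 100e6,
                     c=3e10 / 1.4, slab_depth=1.0, n_photons=2000)
print(abs(T))
```

The magnitude of the complex tally gives the modulation amplitude and its argument the phase delay, which are the quantities a frequency-domain reconstruction works from.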

  18. A modified Monte Carlo model for the ionospheric heating rates.

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.

    1973-01-01

    A Monte Carlo method is adopted as a basis for the derivation of the photoelectron-heat input into the ionospheric plasma. Since a great number of Monte Carlo runs are required normally for the computation of the heating rates, this approach is modified in an attempt to minimize the computation time. The heat-input distributions are computed for arbitrarily small source elements that are spaced apart at distances corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure their individual heating-rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by as much as an order of magnitude, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.

  19. Monte Carlo dose calculations in advanced radiotherapy

    NASA Astrophysics Data System (ADS)

    Bush, Karl Kenneth

    The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity-modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations requires a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. Performing such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity.
In a second component of this dissertation, the design, implementation and evaluation of a technique for reducing the latent variance inherent in the recycling of phase space particle tracks in a simulation are presented. In this technique a random azimuthal rotation about the beam's central axis is applied to each recycled particle, achieving a significant reduction of the latent variance. In a third component, the dissertation presents the first MC modeling of Varian's new RapidArc delivery system and a comparison of dose calculations with the Eclipse treatment planning system. A total of four arc plans are compared, including an oropharynx patient phantom containing tissue inhomogeneities. Finally, in a step toward introducing MC dose calculation into the planning of treatments such as RapidArc, a technique is presented to feasibly generate and store a large set of MC calculated dose distributions. A novel 3-D dyadic multi-resolution (MR) decomposition algorithm is presented and the compressibility of the dose data using this algorithm is investigated. The presented MC beamlet generation method, in conjunction with the presented 3-D data MR decomposition, represents a viable means to introduce MC dose calculation in the planning and optimization stages of advanced radiotherapy.
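The simulated-annealing idea used for the source fine-tuning can be sketched generically. This is a textbook annealer applied to a toy one-parameter mismatch function, not the dissertation's actual objective; all constants below are illustrative.

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.95, iters=500, seed=0):
    """Generic annealer: propose a random step, always accept downhill
    moves, accept uphill moves with probability exp(-dC/T), and cool T
    geometrically. Returns the best point seen."""
    random.seed(seed)
    x = x0
    c = cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(iters):
        trial = x + random.uniform(-step, step)
        ct = cost(trial)
        if ct < c or random.random() < math.exp(-(ct - c) / t):
            x, c = trial, ct
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling
    return best_x, best_c

# Toy mismatch between "measured" and "modeled" output, minimized at 1.7.
x_best, c_best = simulated_annealing(lambda x: (x - 1.7) ** 2, x0=0.0, step=0.5)
print(x_best, c_best)
```

Early high-temperature iterations escape local minima; as the temperature cools, the search settles into the best-fitting intensity parameters.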

  20. Monte Carlo Simulations for Mine Detection

    SciTech Connect

    Toor, A.; Marchetti, A.A.

    2000-03-14

    In January 1998, a collaboration between LLNL, UCI and Exdet, Ltd. arranged for the testing and evaluation of a Russian-developed antitank mine detection system at the Buried Objects Detection Facility (BODF) located at the Nevada Test Site. BODF is a secured 30-acre facility with approximately 300 live antitank mines that were buried in 1993 and 1994. The burial depths range from a few cm to 15 cm, and the various metal- and plastic-case antitank mines each contain 6-12 kg of high explosive. Contractors who have tested their mine detection equipment at BODF include SAIC, SRI, ERIM, MIT/Lincoln Laboratory and Loral Defense Systems. In addition, LLNL researchers have used BODF to test antitank mine detection systems based on dual-band infrared imaging, hyper-spectral imaging, synthetic aperture impulse radar and micro-impulse radar. In a blind test, the Russian-operated system obtained the highest score of any technology tested to date at BODF. The system is based on combining information from two separate sensors: one to detect anomalous concentrations of hydrogen, and the other to detect whether such anomalies also have the correct nitrogen-to-carbon ratio for high explosives. The detection sensitivity is set by the geometry and type of neutron moderator and filters surrounding the neutron source and detectors. Detection of hydrogen anomalies is a rapid process based on neutron scattering. The handheld instrument on the end of a wand could scan a large area at a rate of 4-5 square meters per minute. Once the hydrogen anomalies were located, a second sensor was used to measure the thermal-neutron-excited gamma-ray spectrum at each hydrogen anomaly to determine whether that location also contained high concentrations of nitrogen. The second process was slower, taking up to 5 minutes for each location. The information from both sensors was then examined by the operator and a declaration was made as to whether or not the anomaly was a buried antitank mine.
Although the system worked extremely well on all classes of antitank mines, the Russian hardware components were inferior to those that are commercially available in the United States; i.e., the NaI(Tl) crystals had significantly higher background levels and poorer resolution than their U.S. counterparts, the electronics appeared to be decades old, and the photomultiplier tubes were noisy and lacked gain stabilization circuitry. During the evaluation of this technology, the question that came to mind was: could state-of-the-art sensors and electronics and improved software algorithms lead to a neutron-based system that could reliably detect much smaller buried mines, namely antipersonnel mines containing 30-40 grams of high explosive? Our goal in this study was to conduct Monte Carlo simulations to gain a better understanding of both phases of the mine detection system and to develop an understanding of the system's overall capabilities and limitations. In addition, we examined possible extensions of this technology to see whether or not state-of-the-art improvements could lead to a reliable antipersonnel mine detection system.

  1. Residual Monte Carlo high-order solver for Moment-Based Accelerated Thermal Radiative Transfer equations

    SciTech Connect

    Willert, Jeffrey Park, H.

    2014-11-01

    In this article we explore the possibility of replacing Standard Monte Carlo (SMC) transport sweeps within a Moment-Based Accelerated Thermal Radiative Transfer (TRT) algorithm with a Residual Monte Carlo (RMC) formulation. Previous Moment-Based Accelerated TRT implementations have encountered trouble when stochastic noise from SMC transport sweeps accumulates over several iterations and pollutes the low-order system. With RMC we hope to significantly reduce the build-up of statistical error at a much lower cost. First, we display encouraging results for a zero-dimensional test problem. Then, we demonstrate that we can achieve a lower degree of error in two one-dimensional test problems by employing an RMC transport sweep with multiple orders of magnitude fewer particles per sweep. We find that by reformulating the high-order problem, we can compute more accurate solutions at a fraction of the cost.

  2. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  3. SPQR: a Monte Carlo reactor kinetics code. [LMFBR

    SciTech Connect

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.

  4. Backward and Forward Monte Carlo Method in Polarized Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Yong, Huang; Guo-Dong, Shi; Ke-Yong, Zhu

    2016-03-01

    In general, the Stokes vector cannot be calculated in reverse in vector radiative transfer. This paper presents a novel backward and forward Monte Carlo simulation strategy to study vector radiative transfer in a participating medium. A backward Monte Carlo process is used to calculate the ray trajectory and the endpoint of the ray. The Stokes vector is then obtained by a forward Monte Carlo process. A one-dimensional graded-index semi-transparent medium was taken as the physical model, and thermal emission with consideration of polarization was studied in the medium. The solution processes for non-scattering, isotropic scattering, and anisotropic scattering media, respectively, are discussed. The influence of the optical thickness and albedo on the Stokes vector is studied. The results show that the U and V components of the apparent Stokes vector are very small, but the Q component of the apparent Stokes vector is relatively larger and cannot be ignored.

  5. Quantum Monte Carlo simulations of tunneling in quantum adiabatic optimization

    NASA Astrophysics Data System (ADS)

    Brady, Lucas T.; van Dam, Wim

    2016-03-01

    We explore to what extent path-integral quantum Monte Carlo methods can efficiently simulate quantum adiabatic optimization algorithms during a quantum tunneling process. Specifically we look at symmetric cost functions defined over n bits with a single potential barrier that a successful quantum adiabatic optimization algorithm will have to tunnel through. The height and width of this barrier depend on n, and by tuning these dependencies, we can make the optimization algorithm succeed or fail in polynomial time. In this article we compare the strength of quantum adiabatic tunneling with that of path-integral quantum Monte Carlo methods. We find numerical evidence that quantum Monte Carlo algorithms will succeed in the same regimes where quantum adiabatic optimization succeeds.

  6. A Quantum Monte Carlo investigation of dispersion interactions in graphite

    NASA Astrophysics Data System (ADS)

    Spanu, Leonardo; Galli, Giulia; Sorella, Sandro

    2009-03-01

    We present a series of Quantum Monte Carlo (QMC) calculations of graphite, aimed at describing on the same footing the strong C-C covalent bonds and the weaker interlayer interactions. In particular, we carried out calculations of binding energies, bond lengths and compressibility by using the Variational Monte Carlo and Lattice Regularized Diffusion Monte Carlo techniques [1]. We use as a variational ansatz the Jastrow Antisymmetrical Wave function, including a pairing determinant and a Jastrow correlation factor [2]. Our results allow for a detailed analysis of dispersion forces between graphite layers, including their behavior at long distances, and yield a quantitative estimate of the layer binding energy. [1] Casula M. et al., Phys. Rev. Lett. 95, 100201 (2005). [2] Casula M. et al., J. Chem. Phys. 119, 6500 (2003).

  7. Monte Carlo simulation in statistical physics: an introduction

    NASA Astrophysics Data System (ADS)

    Binder, K.; Heermann, D. W.

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.

  8. Skin image reconstruction using Monte Carlo based color generation

    NASA Astrophysics Data System (ADS)

    Aizu, Yoshihisa; Maeda, Takaaki; Kuwahara, Tomohiro; Hirao, Tetsuji

    2010-11-01

    We propose a novel method of skin image reconstruction based on color generation using Monte Carlo simulation of spectral reflectance in the nine-layered skin tissue model. The RGB image and spectral reflectance of human skin are obtained by RGB camera and spectrophotometer, respectively. The skin image is separated into the color component and texture component. The measured spectral reflectance is used to evaluate scattering and absorption coefficients in each of the nine layers which are necessary for Monte Carlo simulation. Various skin colors are generated by Monte Carlo simulation of spectral reflectance in given conditions for the nine-layered skin tissue model. The new color component is synthesized to the original texture component to reconstruct the skin image. The method is promising for applications in the fields of dermatology and cosmetics.

  9. Vectorizing and macrotasking Monte Carlo neutral particle algorithms

    SciTech Connect

    Heifetz, D.B.

    1987-04-01

    Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches to as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, macrotasking may be taken to its limit, with each test flight, and each split test flight, being a separate task.
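The "gaps in the arrays" problem has a simple modern analogue: keeping the flight arrays dense by compacting out completed histories each step. A toy NumPy sketch (1-D, forward-only flights with invented cross sections, not the paper's plasma model) is:

```python
import numpy as np

def vectorized_flights(n, sigma_t, absorb_prob, slab, seed=0):
    """Track a batch of 1-D particle flights in dense arrays; completed
    flights are compacted out each step, so the vector operations never
    see gaps left by finished histories."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n)                     # positions of in-flight particles
    leaked = absorbed = 0
    while z.size:
        z = z + rng.exponential(1.0 / sigma_t, size=z.size)
        out = z >= slab                 # flights that escaped the slab
        leaked += int(out.sum())
        z = z[~out]                     # compact: drop escaped flights
        absorb = rng.random(z.size) < absorb_prob
        absorbed += int(absorb.sum())
        z = z[~absorb]                  # compact: drop absorbed flights
    return leaked, absorbed

leaked, absorbed = vectorized_flights(10_000, sigma_t=1.0,
                                      absorb_prob=0.3, slab=5.0)
print(leaked + absorbed)  # conservation: every started flight terminates, 10000
```

The boolean-mask compaction plays the role of the gap-removal solutions the abstract mentions: every inner-loop operation stays a dense vector operation over live flights only.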

  10. A non-Monte Carlo approach to analyzing 1D Anderson localization in dispersive metamaterials

    NASA Astrophysics Data System (ADS)

    Kissel, Glen J.

    2015-09-01

    Monte Carlo simulations have long been used to study Anderson localization in models of one-dimensional random stacks. Because such simulations use substantial computational resources and because the randomness of random number generators for such simulations has been called into question, a non-Monte Carlo approach is of interest. This paper uses a non-Monte Carlo methodology, limited to discrete random variables, to determine the Lyapunov exponent, or its reciprocal, known as the localization length, for a one-dimensional random stack model, proposed by Asatryan, et al., consisting of various combinations of negative, imaginary and positive index materials that include the effects of dispersion and absorption, as well as off-axis incidence and polarization effects. Dielectric permittivity and magnetic permeability are the two variables randomized in the models. In the paper, Furstenberg's integral formula is used to calculate the Lyapunov exponent of an infinite product of random matrices modeling the one-dimensional stack. The integral formula requires integration with respect to the probability distribution of the randomized layer parameters, as well as integration with respect to the so-called invariant probability measure of the direction of the vector propagated by the long chain of random matrices. The non-Monte Carlo approach uses a numerical procedure of Froyland and Aihara which calculates the invariant measure as the left eigenvector of a certain sparse row-stochastic matrix, thus avoiding the use of any random number generator. The results show excellent agreement with the Monte Carlo generated simulations which make use of continuous random variables, while frequently providing reductions in computation time.
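The key numerical step, obtaining the invariant measure as the left eigenvector of a row-stochastic matrix, can be sketched for a tiny illustrative matrix (the 3x3 values below are invented; the matrices arising from the discretization in the paper are large and sparse):

```python
import numpy as np

# A small row-stochastic matrix standing in for the discretized operator
# that propagates the direction distribution (values are illustrative).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# The invariant measure mu satisfies mu @ P = mu: it is the left
# eigenvector of P for eigenvalue 1, normalized to sum to one.
vals, vecs = np.linalg.eig(P.T)
mu = np.real(vecs[:, np.argmax(np.real(vals))])
mu = mu / mu.sum()

print(np.allclose(mu @ P, mu))  # True: mu is the stationary measure
```

Because the measure comes from an eigenproblem rather than from sampling, no random number generator is involved, which is exactly the point of the non-Monte Carlo approach.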

  11. Enhancements for Multi-Player Monte-Carlo Tree Search

    NASA Astrophysics Data System (ADS)

    Nijssen, J. (Pim) A. M.; Winands, Mark H. M.

    Monte-Carlo Tree Search (MCTS) is becoming increasingly popular for playing multi-player games. In this paper we propose two enhancements for MCTS in multi-player games: (1) Progressive History and (2) Multi-Player Monte-Carlo Tree Search Solver (MP-MCTS-Solver). We analyze the performance of these enhancements in two different multi-player games: Focus and Chinese Checkers. Based on the experimental results we conclude that Progressive History is a considerable improvement in both games and MP-MCTS-Solver, using the standard update rule, is a genuine improvement in Focus.
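A selection value combining standard UCT with a progressive-history-style bias can be sketched as follows; the constants and the exact decay form are illustrative assumptions, not necessarily the formula used in the paper (child values are assumed to lie in [0, 1]):

```python
import math

def uct_ph(child_visits, child_value_sum, parent_visits,
           hist_score, hist_visits, c=1.4, w=5.0):
    """Selection value for one child: standard UCT plus a progressive
    history bias that fades as the child accumulates visits. The
    constants c and w and the decay form are tunable assumptions."""
    if child_visits == 0:
        return float("inf")          # always try unvisited children first
    mean = child_value_sum / child_visits
    uct = mean + c * math.sqrt(math.log(parent_visits) / child_visits)
    hist = hist_score / hist_visits if hist_visits else 0.0
    bias = w * hist / ((1.0 - mean) * child_visits + 1.0)
    return uct + bias
```

Early in the search the history term steers selection toward moves that have done well elsewhere in the tree; as visit counts grow, the bias vanishes and plain UCT statistics take over.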

  12. Beyond the Born-Oppenheimer approximation with quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Tubman, Norm; Kylanpaa, Ilkka; Hammes-Schiffer, Sharon; Ceperley, David

    2015-03-01

    We develop tools that enable the study of non-adiabatic effects with variational and diffusion Monte Carlo methods. We introduce a highly accurate wave function ansatz for electron-ion systems that can involve a combination of both clamped ions and quantum nuclei. We explicitly calculate the ground state energies of H2, LiH, H2O and FHF- using fixed-node quantum Monte Carlo with wave function nodes that explicitly depend on the ion positions. The obtained energies implicitly include the effects arising from quantum nuclei and electron-nucleus coupling. We compare our results to the best theoretical and experimental results available and find excellent agreement.

  13. Towards Fast, Scalable Hard Particle Monte Carlo Simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Irrgang, M. Eric; Glaser, Jens; Harper, Eric S.; Engel, Michael; Glotzer, Sharon C.

    2014-03-01

    Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. We discuss the implementation of Monte Carlo for arbitrary hard shapes in HOOMD-blue, a GPU-accelerated particle simulation tool, to enable million particle simulations in a field where thousands is the norm. In this talk, we discuss our progress on basic parallel algorithms, optimizations that maximize GPU performance, and communication patterns for scaling to multiple GPUs. Research applications include colloidal assembly and other uses in materials design, biological aggregation, and operations research.

  14. Overview of the MCU Monte Carlo Software Package

    NASA Astrophysics Data System (ADS)

    Kalugin, M. A.; Oleynik, D. S.; Shkarovsky, D. A.

    2014-06-01

    MCU (Monte Carlo Universal) is a project on development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides the information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented.

  15. A review of best practices for Monte Carlo criticality calculations

    SciTech Connect

    Brown, Forrest B

    2009-01-01

    Monte Carlo methods have been used to compute k_eff and the fundamental mode eigenfunction of critical systems since the 1950s. While such calculations have become routine using standard codes such as MCNP and SCALE/KENO, there still remain three concerns that must be addressed to perform calculations correctly: convergence of k_eff and the fission distribution, bias in k_eff and tally results, and bias in statistics on tally results. This paper provides a review of the fundamental problems inherent in Monte Carlo criticality calculations. To provide guidance to practitioners, suggested best practices for avoiding these problems are discussed and illustrated by examples.
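One practice widely recommended for the first concern, source convergence, is tracking the Shannon entropy of the binned fission-source distribution from cycle to cycle; a minimal sketch:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of a binned fission-source distribution,
    a standard cycle-by-cycle convergence diagnostic: the entropy trace
    flattening out over cycles signals a converged source."""
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

# A uniform source over 8 bins gives the maximum entropy log2(8) = 3 bits.
print(shannon_entropy([100] * 8))  # 3.0
```

Tallies accumulated before the entropy trace stabilizes should be discarded as inactive cycles, which addresses the bias concerns the review describes.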

  16. Mesh Optimization for Monte Carlo-Based Optical Tomography

    PubMed Central

    Edmans, Andrew; Intes, Xavier

    2015-01-01

    Mesh-based Monte Carlo techniques for optical imaging allow for accurate modeling of light propagation in complex biological tissues. Recently, they have been developed within an efficient computational framework to be used as a forward model in optical tomography. However, commonly employed adaptive mesh discretization techniques have not yet been implemented for Monte Carlo based tomography. Herein, we propose a methodology to optimize the mesh discretization and analytically rescale the associated Jacobian based on the characteristics of the forward model. We demonstrate that this method maintains the accuracy of the forward model even in the case of temporal data sets while allowing for significant coarsening or refinement of the mesh. PMID:26566523

  17. Kinetic Monte Carlo method applied to nucleic acid hairpin folding

    NASA Astrophysics Data System (ADS)

    Sauerwine, Ben; Widom, Michael

    2011-12-01

    Kinetic Monte Carlo on coarse-grained systems, such as nucleic acid secondary structure, is advantageous for being able to access behavior at long time scales, even minutes or hours. Transition rates between coarse-grained states depend upon intermediate barriers, which are not directly simulated. We propose an Arrhenius rate model and an intermediate energy model that incorporates the effects of the barrier between simulated states without enlarging the state space itself. Applying our Arrhenius rate model to DNA hairpin folding, we demonstrate improved agreement with experiment compared to the usual kinetic Monte Carlo model. Further improvement results from including rigidity of single-stranded stacking.
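An Arrhenius-rate KMC step of the kind described can be sketched as follows; the attempt frequency, barrier heights (in eV), and temperature are illustrative placeholders, and the paper's intermediate-barrier energy model is more detailed than this bare Arrhenius form:

```python
import math
import random

def kmc_select(barriers, kT, attempt_freq=1e13, seed=None):
    """One rejection-free KMC step: Arrhenius rates r_i = A*exp(-E_i/kT),
    an event drawn proportionally to its rate, and the clock advanced by
    an exponential deviate of the total rate (Gillespie-style)."""
    if seed is not None:
        random.seed(seed)
    rates = [attempt_freq * math.exp(-eb / kT) for eb in barriers]
    total = sum(rates)
    r = random.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if r <= acc:
            chosen = i
            break
    dt = -math.log(random.random()) / total
    return chosen, dt

event, dt = kmc_select([0.5, 0.7, 0.9], kT=0.025, seed=42)
print(event, dt)
```

Because the time increment comes from the total rate rather than a fixed step, the simulation clock can advance by seconds or minutes whenever all barriers are high, which is what gives coarse-grained KMC its reach to long time scales.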

  18. Monte Carlo Simulations of Phosphate Polyhedron Connectivity in Glasses

    SciTech Connect

    ALAM,TODD M.

    1999-12-21

    Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest neighbor connectivities between phosphate polyhedron for random, alternating and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments of phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.

  19. Collective translational and rotational Monte Carlo moves for attractive particles.

    PubMed

    Růžička, Štěpán; Allen, Michael P

    2014-03-01

    Virtual move Monte Carlo is a Monte Carlo (MC) cluster algorithm forming clusters via local energy gradients and approximating the collective kinetic or dynamic motion of attractive colloidal particles. We carefully describe, analyze, and test the algorithm. To formally validate the algorithm through highlighting its symmetries, we present alternative and compact ways of selecting and accepting clusters which illustrate the formal use of abstract concepts in the design of biased MC techniques: the superdetailed balance and the early rejection scheme. A brief and comprehensive summary of the algorithms is presented, which makes them accessible without needing to understand the details of the derivation. PMID:24730967

  20. Bias in Dynamic Monte Carlo Alpha Calculations

    SciTech Connect

    Sweezy, Jeremy Ed; Nolen, Steven Douglas; Adams, Terry R.; Trahan, Travis John

    2015-02-06

    A 1/N bias in the estimate of the neutron time-constant (commonly denoted as α) has been seen in dynamic neutronic calculations performed with MCATK. In this paper we show that the bias is most likely caused by taking the logarithm of a stochastic quantity. We also investigate the known bias due to the particle population control method used in MCATK. We conclude that this bias due to the particle population control method is negligible compared to other sources of bias.
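    The bias from taking the logarithm of a stochastic quantity is Jensen's inequality at work: E[ln X̄] < ln E[X̄], with a leading-order deficit of σ²/(2Nμ²), i.e. a bias that decays as 1/N. A toy demonstration with Gaussian samples (illustrative parameters, unrelated to MCATK):

```python
import math
import random

def mean_log_of_mean(mu, sigma, n, trials, seed=0):
    """Average of ln(sample mean of n draws) over many independent trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        total += math.log(xbar)
    return total / trials

mu, sigma, n = 10.0, 3.0, 10
biased = mean_log_of_mean(mu, sigma, n, trials=50000)
# ln is concave, so E[ln Xbar] < ln E[Xbar]; expanding ln about mu gives a
# deficit of sigma^2 / (2 n mu^2) to leading order -- vanishing only as 1/n.
predicted_bias = sigma**2 / (2 * n * mu**2)
```

    Here ln(μ) ≈ 2.3026 while the Monte Carlo average of ln(X̄) sits below it by about 0.0045, matching the 1/N prediction.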

  1. ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods

    SciTech Connect

    Ibrahim, A.; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Sawan, M.; Wilson, P.; Wagner, John C; Heltemes, Thad

    2011-01-01

    The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~ 275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~ 9 compared with the traditional methods of variance reduction. By providing a factor of ~ 21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.

  2. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the

  3. The Use of Monte Carlo Techniques to Teach Probability.

    ERIC Educational Resources Information Center

    Newell, G. J.; MacFarlane, J. D.

    1985-01-01

    Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of

  4. Monte Carlo Results from a Computer Program for Tailored Testing.

    ERIC Educational Resources Information Center

    Cudeck, Robert A.; And Others

    INTERTAIL, the computer program which implements an approach to tailored testing outlined by Cliff (1975), was examined with errorless data in several Monte Carlo studies. Three replications of each cell of a 3 x 3 table with 10, 20 and 40 items and persons were analyzed. Mean rank correlation coefficients between the true order, specified by

  5. Error estimations and their biases in Monte Carlo eigenvalue calculations

    SciTech Connect

    Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki

    1997-01-01

    In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of multiplication factors from cycles, which are called the cycle k_eff's. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where "real" refers to a true value that is calculated from independently repeated Monte Carlo runs and "apparent" refers to the expected value of estimates from a single Monte Carlo run. Next, iterative methods based on the foregoing relations are proposed to estimate the standard deviation of the eigenvalue. The methods work well for the cases in which the ratios of the real to apparent values of variances are between 1.4 and 3.1. Even in the case where the foregoing ratio is >5, more than 70% of the standard deviation estimates fall within 40% of the true value.
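    The gap between "apparent" and "real" variance can be reproduced with a toy model: cycle estimates given an artificial AR(1) intercycle correlation, for which the single-run (apparent) standard deviation of the mean, computed as if cycles were independent, underestimates the spread actually observed across independent runs. The correlation model and all parameters are illustrative, not taken from the paper.

```python
import random
import statistics

def correlated_cycles(n, rho, seed):
    """Generate n cycle estimates with AR(1) intercycle correlation rho."""
    rng = random.Random(seed)
    x = rng.gauss(0, 1)                       # stationary start
    out = []
    for _ in range(n):
        x = rho * x + (1 - rho**2) ** 0.5 * rng.gauss(0, 1)
        out.append(1.0 + 0.01 * x)            # cycle k_eff fluctuating about 1.0
    return out

def apparent_sd_of_mean(cycles):
    """Naive estimate assuming independent cycles (what a single run reports)."""
    return statistics.stdev(cycles) / len(cycles) ** 0.5

n, rho = 500, 0.8
# "Real" sd of the mean: spread of the mean over many independent runs.
means = [statistics.mean(correlated_cycles(n, rho, s)) for s in range(400)]
real_sd = statistics.stdev(means)
# "Apparent" sd of the mean: the within-run estimate, averaged over runs.
apparent_sd = statistics.mean(
    apparent_sd_of_mean(correlated_cycles(n, rho, s)) for s in range(400))
```

    For AR(1) correlation the real-to-apparent ratio approaches sqrt((1+rho)/(1-rho)), i.e. about 3 for rho = 0.8, so the naive error bar is roughly three times too small.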

  6. Improved geometry representations for Monte Carlo radiation transport.

    SciTech Connect

    Martin, Matthew Ryan

    2004-08-01

    ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

  7. Monitor unit calculation for Monte Carlo treatment planning

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Price, R. A., Jr.; Li, J. S.; Chen, L.; Wang, L.; Fourkal, E.; Qin, L.; Yang, J.

    2004-05-01

    In this work, we investigate a formalism for monitor unit (MU) calculation in Monte Carlo based treatment planning. By relating MU to dose measured under reference calibration conditions (central axis, depth of dose maximum in water, 10 cm × 10 cm field defined at 100 cm source-to-surface distance) our formalism determines the MU required for a treatment plan based on the prescription dose and Monte Carlo calculated dose distribution. Detailed descriptions and formulae are given for various clinical situations including conventional treatments and advanced techniques such as intensity-modulated radiotherapy (IMRT) and modulated electron radiotherapy (MERT). Analysis is made of the effects of source modelling, beam modifier simulation and patient dose calculation accuracy, all of which are important factors for absolute dose calculations using Monte Carlo simulations. We have tested the formalism through phantom measurements and the predicted MU values were consistent with measured values to within 2%. The formalism has been used for MU calculation and plan comparison for advanced treatment techniques such as MERT, extracranial stereotactic IMRT, MRI-based treatment planning and intensity-modulated laser-proton therapy studies. It is also used for absolute dose calculations using Monte Carlo simulations for treatment verification, which has become part of our comprehensive IMRT quality assurance programme.
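    The core idea, relating MU to Monte Carlo doses normalized at the calibration point, can be sketched as follows. The function and its arguments are hypothetical illustrations, not the paper's formalism, which includes further corrections for source modelling and beam modifiers; the assumed calibration of 1 cGy/MU at the reference point is a common convention, not a quote from the paper.

```python
def monitor_units(d_prescribed, d_mc_plan, d_mc_ref, d_ref_per_mu=1.0):
    """Hypothetical sketch of an MU formalism.

    d_prescribed : prescription dose at the plan's reference point (cGy)
    d_mc_plan    : MC dose at that point, per simulated source particle
    d_mc_ref     : MC dose at the calibration point under reference
                   conditions, per simulated source particle
    d_ref_per_mu : machine calibration, dose per MU at the reference
                   point (cGy/MU)
    """
    # Normalizing the plan dose by the reference dose converts "dose per
    # source particle" into "dose per MU" for this plan.
    dose_per_mu = d_ref_per_mu * d_mc_plan / d_mc_ref
    return d_prescribed / dose_per_mu

# Example with made-up numbers: the plan delivers 80% of the
# calibration-geometry dose per source particle, so 200 cGy needs 250 MU.
mu = monitor_units(200.0, 0.8e-16, 1.0e-16)
```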

  8. Does standard Monte Carlo give justice to instantons?

    NASA Astrophysics Data System (ADS)

    Fucito, F.; Solomon, S.

    1984-01-01

    The results of the standard local Monte Carlo are changed by offering instantons as candidates in the Metropolis procedure. We also define an O(3) topological charge with no contribution from planar dislocations. The RG behavior is still not recovered. Bantrell Fellow in Theoretical Physics.

  9. A Variational Monte Carlo Approach to Atomic Structure

    ERIC Educational Resources Information Center

    Davis, Stephen L.

    2007-01-01

    The practicality and usefulness of applying variational Monte Carlo calculations to atomic structure are demonstrated. The method is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, the l-dependence of the orbital energies, singlet-triplet energy splitting, and ionization energy trends in atomic structure theory.

  10. Monte Carlo simulation of entry in the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Hash, David B.; Hassan, H. A.

    1992-01-01

    The Direct Simulation Monte Carlo method of Bird is used to investigate the characteristics of low density hypersonic flowfields for typical aerobrakes during Martian atmospheric entry. The method allows for both thermal and chemical nonequilibrium. Results are presented for a sixty-degree spherically blunt cone for various nose radii and altitudes.

  11. Calculating Potential Energy Curves with Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2014-06-01

    Quantum Monte Carlo (QMC) is a computational technique that can be applied to the electronic Schrödinger equation for molecules. QMC methods such as Variational Monte Carlo (VMC) and Diffusion Monte Carlo (DMC) have demonstrated the capability of capturing large fractions of the correlation energy, thus suggesting their possible use for high-accuracy quantum chemistry calculations. QMC methods scale particularly well with respect to parallelization making them an attractive consideration in anticipation of next-generation computing architectures which will involve massive parallelization with millions of cores. Due to the statistical nature of the approach, in contrast to standard quantum chemistry methods, uncertainties (error-bars) are associated with each calculated energy. This study focuses on the cost, feasibility and practical application of calculating potential energy curves for small molecules with QMC methods. Trial wave functions were constructed with the multi-configurational self-consistent field (MCSCF) method from GAMESS-US.[1] The CASINO Monte Carlo quantum chemistry package [2] was used for all of the DMC calculations. An overview of our progress in this direction will be given. References: M. W. Schmidt et al. J. Comput. Chem. 14, 1347 (1993). R. J. Needs et al. J. Phys.: Condensed Matter 22, 023201 (2010).

  12. Monte Carlo event generators for hadron-hadron collisions

    SciTech Connect

    Knowles, I.G.; Protopopescu, S.D.

    1993-06-01

    A brief review of Monte Carlo event generators for simulating hadron-hadron collisions is presented. Particular emphasis is placed on comparisons of the approaches used to describe physics elements and identifying their relative merits and weaknesses. This review summarizes a more detailed report.

  13. Exploring Mass Perception with Markov Chain Monte Carlo

    ERIC Educational Resources Information Center

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…

  14. Monte Carlo study of TLD measurements in air cavities.

    PubMed

    Haraldsson, Pia; Knöös, Tommy; Nyström, Håkan; Engström, Per

    2003-09-21

    Thermoluminescent dosimeters (TLDs) are used for verification of the delivered dose during IMRT treatment of head and neck carcinomas. The TLDs are put into a plastic tube, which is placed in the nasal cavities through the treated volume. In this study, the dose distribution to a phantom having a cylindrical air cavity containing a tube was calculated by Monte Carlo methods and the results were compared with data from a treatment planning system (TPS) to evaluate the accuracy of the TLD measurements. The phantom was defined in the DOSXYZnrc Monte Carlo code and calculations were performed with 6 MV fields, with the TLD tube placed at different positions within the cylindrical air cavity. A similar phantom was defined in the pencil beam based TPS. Differences between the Monte Carlo and the TPS calculations of the absorbed dose to the TLD tube were found to be small for an open symmetrical field. For a half-beam field through the air cavity, there was a larger discrepancy. Furthermore, dose profiles through the cylindrical air cavity show, as expected, that the treatment planning system overestimates the absorbed dose in the air cavity. This study shows that when using an open symmetrical field, Monte Carlo calculations of absorbed doses to a TLD tube in a cylindrical air cavity give results comparable to a pencil beam based treatment planning system. PMID:14529213

  15. SABRINA: an interactive solid geometry modeling program for Monte Carlo

    SciTech Connect

    West, J.T.

    1985-01-01

    SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body geometry, or surface geometry models, and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo Analysis.

  16. Bayesian Monte Carlo Method for Nuclear Data Evaluation

    SciTech Connect

    Koning, A.J.

    2015-01-15

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment based weight.

  17. Monte Carlo simulation to study the kinetics of CO methanation

    NASA Astrophysics Data System (ADS)

    Guo, Ziang Yun; Zhong, Bing; Peng, Shao Yi

    1995-02-01

    A Monte Carlo model, based on the methanation of carbon monoxide and hydrogen on a catalytic surface represented by a square lattice, is presented. It is found that the accumulation of adsorbed species on the surface can bring about the 'poisoning' phenomenon seen in catalysts, and that surface diffusion and suitable gas composition can eliminate the phenomenon.

  18. Monte Carlo: in the beginning and some great expectations

    SciTech Connect

    Metropolis, N.

    1985-01-01

    The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest perhaps is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.

  19. Monte Carlo Approach for Reliability Estimations in Generalizability Studies.

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty

  20. Monte Carlo renormalization-group analysis of percolation.

    PubMed

    Brown, Albert; Edelman, Alexander; Rocks, Jason; Coniglio, Antonio; Swendsen, Robert H

    2013-10-01

    We describe a Monte Carlo renormalization group approach to the calculation of critical behavior for percolation models. This approach can be utilized to determine the renormalized bond probabilities and the values of the critical exponents. We illustrate the method for two-dimensional bond percolation, but the method is also applicable to other percolation models and other dimensions. PMID:24229304
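    The renormalization-group analysis itself is beyond a short sketch, but the Monte Carlo ingredient it builds on, sampling bond configurations and testing connectivity at a given bond probability, can be illustrated for 2D bond percolation (p_c = 1/2 on the square lattice) with a union-find spanning test. Lattice size, probabilities, and sample counts below are illustrative choices.

```python
import random

def spans(L, p, rng):
    """One 2D bond-percolation sample on an L x L site grid: open each
    nearest-neighbour bond with probability p, then test for a
    top-to-bottom connected path using union-find."""
    parent = list(range(L * L))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for y in range(L):
        for x in range(L):
            i = y * L + x
            if x + 1 < L and rng.random() < p:   # horizontal bond
                union(i, i + 1)
            if y + 1 < L and rng.random() < p:   # vertical bond
                union(i, i + L)
    top_roots = {find(x) for x in range(L)}
    return any(find((L - 1) * L + x) in top_roots for x in range(L))

def spanning_probability(L, p, samples=400, seed=0):
    rng = random.Random(seed)
    return sum(spans(L, p, rng) for _ in range(samples)) / samples

# Well below p_c = 1/2 spanning is rare; well above it is almost certain.
low = spanning_probability(24, 0.3)
high = spanning_probability(24, 0.7)
```

    An MCRG calculation would compare such connectivity statistics on the original lattice and on block-renormalized lattices to extract renormalized bond probabilities and critical exponents.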

  1. Ordinal Hypothesis in ANOVA Designs: A Monte Carlo Study.

    ERIC Educational Resources Information Center

    Braver, Sanford L.; Sheets, Virgil L.

    Numerous designs using analysis of variance (ANOVA) to test ordinal hypotheses were assessed using a Monte Carlo simulation. Each statistic was computed on each of over 10,000 random samples drawn from a variety of population conditions. The number of groups, population variance, and patterns of population means were varied. In the non-null

  2. Dynamic Structure Factor in BCC Helium from Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Arovas, Daniel; Gazit, Snir; Podolsky, Daniel; Auerbach, Assa; Nonne, Heloise

    2014-03-01

    An unexpected optic-like mode has been observed by inelastic neutron scattering in BCC Helium-4. We report on worm algorithm quantum Monte Carlo calculations of the dynamic structure factor in order to compare with experiment. A theoretical model based on a dynamical Landau-Ginzburg action is also analyzed. Israel Science Foundation, US-Israel Binational Science Foundation

  3. Monte Carlo radiation transport: A revolution in science

    SciTech Connect

    Hendricks, J.

    1993-04-01

    When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.

  4. Monte Carlo Capabilities of the SCALE Code System

    NASA Astrophysics Data System (ADS)

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.

    2014-06-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  5. NOTE: Monte Carlo study of TLD measurements in air cavities

    NASA Astrophysics Data System (ADS)

    Haraldsson, Pia; Knöös, Tommy; Nyström, Håkan; Engström, Per

    2003-09-01

    Thermoluminescent dosimeters (TLDs) are used for verification of the delivered dose during IMRT treatment of head and neck carcinomas. The TLDs are put into a plastic tube, which is placed in the nasal cavities through the treated volume. In this study, the dose distribution to a phantom having a cylindrical air cavity containing a tube was calculated by Monte Carlo methods and the results were compared with data from a treatment planning system (TPS) to evaluate the accuracy of the TLD measurements. The phantom was defined in the DOSXYZnrc Monte Carlo code and calculations were performed with 6 MV fields, with the TLD tube placed at different positions within the cylindrical air cavity. A similar phantom was defined in the pencil beam based TPS. Differences between the Monte Carlo and the TPS calculations of the absorbed dose to the TLD tube were found to be small for an open symmetrical field. For a half-beam field through the air cavity, there was a larger discrepancy. Furthermore, dose profiles through the cylindrical air cavity show, as expected, that the treatment planning system overestimates the absorbed dose in the air cavity. This study shows that when using an open symmetrical field, Monte Carlo calculations of absorbed doses to a TLD tube in a cylindrical air cavity give results comparable to a pencil beam based treatment planning system.

  6. Harnessing graphical structure in Markov chain Monte Carlo learning

    SciTech Connect

    Stolorz, P.E.; Chew, P.C.

    1996-12-31

    The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many datamining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.

  7. Microbial contamination in poultry chillers estimated by Monte Carlo simulations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The risk of microbial contamination during poultry processing may be reduced by the operating characteristics of the chiller. The performance of air chillers and immersion chillers were compared in terms of pre-chill and post-chill contamination using Monte Carlo simulations. Three parameters were u...

  8. Force induced melting of DNA hairpin: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Kalyan, M. Suman; Murthy, K. P. N.

    2013-02-01

    In this paper we present the thermodynamic properties of DNA hairpin studied by using non-Boltzmann Monte Carlo methods. The force-temperature phase diagram and Landau free energy near and at critical temperatures are obtained. From free energy curves it is observed that the transition from closed loop state to open state is of first order.

  9. Reagents for Electrophilic Amination: A Quantum Monte Carlo Study

    SciTech Connect

    Amador-Bedolla, Carlos; Salomon-Ferrer, Romelia; Lester Jr., William A.; Vazquez-Martinez, Jose A.; Aspuru-Guzik, Alan

    2006-11-01

    Electroamination is an appealing synthetic strategy to construct carbon-nitrogen bonds. We explore the use of the quantum Monte Carlo method and a proposed variant of the electron-pair localization function--the electron-pair localization function density--as a measure of the nucleophilicity of nitrogen lone-pairs as a possible screening procedure for electrophilic reagents.

  10. Observations on variational and projector Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Umrigar, C. J.

    2015-10-01

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
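    A minimal variational Monte Carlo example in the spirit of the methods surveyed (not taken from the paper): Metropolis sampling of |ψ|² for the hydrogen atom with trial wavefunction ψ = e^(-αr), for which the local energy in atomic units is E_L = -α²/2 + (α - 1)/r. At α = 1 the trial function is the exact ground state and E_L = -1/2 with zero variance.

```python
import math
import random

def local_energy(r, alpha):
    """E_L for the hydrogen trial wavefunction psi = exp(-alpha*r):
    E_L = -alpha^2/2 + (alpha - 1)/r (atomic units)."""
    return -0.5 * alpha**2 + (alpha - 1.0) / r

def vmc_energy(alpha, steps=200000, delta=1.0, seed=3):
    """Metropolis sampling of |psi|^2 with symmetric box moves;
    returns the variational energy estimate <E_L>."""
    rng = random.Random(seed)
    pos = [0.5, 0.5, 0.5]
    r = math.sqrt(sum(c * c for c in pos))
    acc_energy = 0.0
    for _ in range(steps):
        trial = [c + delta * (rng.random() - 0.5) for c in pos]
        r_t = math.sqrt(sum(c * c for c in trial))
        # Acceptance ratio |psi_t / psi|^2 = exp(-2*alpha*(r_t - r))
        if rng.random() < math.exp(-2.0 * alpha * (r_t - r)):
            pos, r = trial, r_t
        acc_energy += local_energy(r, alpha)
    return acc_energy / steps

# alpha = 1 is the exact ground state: E_L = -0.5 at every sample.
e_exact = vmc_energy(1.0, steps=2000)
# A non-optimal alpha gives a higher (variational) energy, about -0.48 here.
e_off = vmc_energy(0.8)
```

    The zero-variance property at α = 1 is a special case of the general principle that statistical fluctuations of the local energy vanish as the trial wavefunction approaches an exact eigenstate.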

  11. Difficulties in vector-parallel processing of Monte Carlo codes

    SciTech Connect

    Higuchi, Kenji; Asai, Kiyoshi; Hasegawa, Yukihiro

    1997-09-01

    Experiences with vectorization of production-level Monte Carlo codes such as KENO-IV, MCNP, VIM, and MORSE have shown that it is difficult to attain high speedup ratios on vector processors because of indirect addressing, nests of conditional branches, short vector length, cache misses, and operations for realization of robustness and generality. A previous work has already shown that the first, second, and third difficulties can be resolved by using special computer hardware for vector processing of Monte Carlo codes. Here, the fourth and fifth difficulties are discussed in detail using the results for a vectorized version of the MORSE code. As for the fourth difficulty, it is shown that the cache miss-hit ratio affects execution times of the vectorized Monte Carlo codes and the ratio strongly depends on the number of the particles simultaneously tracked. As for the fifth difficulty, it is shown that remarkable speedup ratios are obtained by removing operations that are not essential to the specific problem being solved. These experiences have shown that if a production-level Monte Carlo code system had a capability to selectively construct source coding that complements the input data, then the resulting code could achieve much higher performance.

  12. Impact of random numbers on parallel Monte Carlo application

    SciTech Connect

    Pandey, Ras B.

    2002-10-22

    A number of graduate students are involved at various levels of research in this project. We investigate the basic issues in materials using Monte Carlo simulations with specific interest in heterogeneous materials. Attempts have been made to seek collaborations with the DOE laboratories. Specific details are given.

  13. Applications of the Monte Carlo radiation transport toolkit at LLNL

    NASA Astrophysics Data System (ADS)

    Sale, Kenneth E.; Bergstrom, Paul M., Jr.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine

    1999-09-01

    Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful more problems can be accurately modeled. Second, as computing power becomes cheaper Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.

  14. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1 % for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
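    The 1/sqrt(N) statistical convergence behind estimates like "100 billion histories for 1%" can be illustrated with a toy analog tally (hypothetical numbers, not the benchmark itself): a small fuel zone scored by only a fraction of histories behaves like a rare Bernoulli event, and quadrupling the histories roughly halves the relative error.

```python
import random

def tally_relative_error(n_histories, hit_prob, seed=0):
    """Analog MC estimate of the relative standard error of a tally that a
    fraction hit_prob of histories score (unit score per scoring history)."""
    rng = random.Random(seed)
    hits = sum(rng.random() < hit_prob for _ in range(n_histories))
    mean = hits / n_histories
    var_of_mean = mean * (1.0 - mean) / n_histories
    return var_of_mean ** 0.5 / mean

# Relative error scales as 1/sqrt(N): quadrupling the histories roughly
# halves it, so each extra digit of accuracy costs a factor of 100.
e_small = tally_relative_error(10_000, 0.02)
e_large = tally_relative_error(40_000, 0.02)
```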

  15. Monte Carlo method for magnetic impurities in metals

    NASA Technical Reports Server (NTRS)

    Hirsch, J. E.; Fye, R. M.

    1986-01-01

    The paper discusses a Monte Carlo algorithm to study properties of dilute magnetic alloys; the method can treat a small number of magnetic impurities interacting with the conduction electrons in a metal. Results for the susceptibility of a single Anderson impurity in the symmetric case show the expected universal behavior at low temperatures. Some results for two Anderson impurities are also discussed.

  16. Exploring Mass Perception with Markov Chain Monte Carlo

    ERIC Educational Resources Information Center

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…
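
    The psychological MCMC technique reduces each chain step to a two-alternative choice between the current state and a proposal. A plausible reading (not the authors' code) uses Barker's acceptance rule on a toy stand-in for a subjective distribution over mass ratios:

    ```python
    import math
    import random

    def barker_step(x, propose, p):
        """One chain step in the style of MCMC with people: the chooser
        picks between the current state and a proposal with probability
        proportional to the (subjective) probability of each
        (Barker's acceptance rule)."""
        y = propose(x)
        if random.random() < p(y) / (p(x) + p(y)):
            return y
        return x

    # Hypothetical subjective distribution over mass ratios, peaked at 2.
    def p(r):
        return math.exp(-(r - 2.0) ** 2)

    def propose(r):
        return r + random.gauss(0.0, 0.5)

    random.seed(0)
    r, samples = 1.0, []
    for _ in range(20000):
        r = barker_step(r, propose, p)
        samples.append(r)
    mean_r = sum(samples) / len(samples)  # concentrates near the peak at 2
    ```

    Run long enough, the chain's states are samples from the target distribution, so a histogram of the participant's choices recovers their subjective distribution.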

  17. Monte Carlo Simulations of Light Propagation in Apples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports on the investigation of light propagation in fresh apples in the visible and short-wave near-infrared region using Monte Carlo simulations. Optical properties of ‘Golden Delicious’ apples were determined over the spectral range of 500-1100 nm using a hyperspectral imaging method, ...

  18. Understanding and improving the efficiency of full configuration interaction quantum Monte Carlo.

    PubMed

    Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W

    2016-03-01

    Within full configuration interaction quantum Monte Carlo, we investigate how the statistical error behaves as a function of the parameters which control the stochastic sampling. We define the inefficiency as a measure of the statistical error per particle sampling the space and per time step and show there is a sizeable parameter regime where this is minimised. We find that this inefficiency increases sublinearly with Hilbert space size and can be reduced by localising the canonical Hartree-Fock molecular orbitals, suggesting that the choice of basis impacts the method beyond that of the sign problem. PMID:26957160

  19. Understanding and improving the efficiency of full configuration interaction quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Vigor, W. A.; Spencer, J. S.; Bearpark, M. J.; Thom, A. J. W.

    2016-03-01

    Within full configuration interaction quantum Monte Carlo, we investigate how the statistical error behaves as a function of the parameters which control the stochastic sampling. We define the inefficiency as a measure of the statistical error per particle sampling the space and per time step and show there is a sizeable parameter regime where this is minimised. We find that this inefficiency increases sublinearly with Hilbert space size and can be reduced by localising the canonical Hartree-Fock molecular orbitals, suggesting that the choice of basis impacts the method beyond that of the sign problem.

  20. SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output

    PubMed Central

    Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.

    2011-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297

  1. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  2. Global Monte Carlo Simulation with High Order Polynomial Expansions

    SciTech Connect

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin

    2007-12-13

    The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high-order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as local piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi's method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence.
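
    The core of the FET, estimating expansion coefficients as sample means accumulated over the random walk, can be sketched for a global Legendre basis on [-1, 1]. The sampled density below is an illustrative choice, not from the report:

    ```python
    import math
    import random

    def legendre(n, x):
        """Legendre polynomial P_n(x) via the three-term recurrence."""
        p0, p1 = 1.0, x
        if n == 0:
            return p0
        for k in range(1, n):
            p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
        return p1

    def fet_coefficients(samples, order):
        """FET estimate of the density of `samples` on [-1, 1]:
        a_n = (2n + 1)/2 * E[P_n(x)], each expectation estimated as a
        sample mean, exactly as a tally would accumulate it."""
        m = len(samples)
        return [(2 * n + 1) / 2.0 * sum(legendre(n, x) for x in samples) / m
                for n in range(order + 1)]

    random.seed(1)
    # Samples from f(x) = (1 + x)/2 on [-1, 1] via the inverse CDF.
    xs = [2.0 * math.sqrt(random.random()) - 1.0 for _ in range(100000)]
    a = fet_coefficients(xs, 3)
    # a[0] is the "flat mode" (0.5 here); for this density the exact
    # higher coefficients are a1 = 0.5 and a2 = a3 = 0.
    ```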

  3. Reconstruction of Human Monte Carlo Geometry from Segmented Images

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican

    2014-06-01

    Human computational phantoms have been used extensively for scientific analysis and experimental simulation. This article presents a method for reconstructing human geometry from a series of segmented images of a Chinese visible human dataset. The resulting phantom geometry describes the detailed structure of each organ and can be converted into the input files of Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN and comprising about 28.8 billion voxels, has been established by the FDS Team. For convenient processing, different organs were segmented on the images with different RGB colors and the voxels were assigned positions in the dataset. For refinement, the positions were first sampled. Although the large numbers of voxels inside an organ are three-dimensionally adjacent, no thorough merging method existed to reduce the number of cells needed to describe the organ. In this study, the voxels on the organ surface were included in the merging, which produces fewer cells per organ, and an index-based sorting algorithm was introduced to speed up the merging. Finally, the Rad-HUMAN phantom, which includes a total of 46 organs and tissues, was described by cuboids as Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images, with the voxels merged exhaustively. Each organ geometry model was constructed without ambiguity or self-intersection, and its geometry information represents the accurate appearance and precise interior structure of the organ. The constructed geometry, which largely retains the original shape of the organs, can easily be written to the input files of different Monte Carlo codes such as MCNP. Its universality and high performance were demonstrated experimentally.
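
    The voxel mergence the authors describe reduces runs of identically labeled voxels to far fewer cells. A much-simplified, hypothetical 1D version of such run-length merging:

    ```python
    def merge_voxel_runs(row):
        """Greedy run-length merge of a 1D row of voxel labels into
        (label, start, length) slabs -- a drastically simplified analog
        of merging adjacent same-organ voxels into cuboid cells."""
        runs = []
        start = 0
        for i in range(1, len(row) + 1):
            if i == len(row) or row[i] != row[start]:
                runs.append((row[start], start, i - start))
                start = i
        return runs

    runs = merge_voxel_runs([1, 1, 1, 2, 2, 1])
    # -> [(1, 0, 3), (2, 3, 2), (1, 5, 1)]: six voxels become three cells
    ```

    The real method additionally sorts voxels by index and merges across the other two axes, but the payoff is the same: far fewer cells in the Monte Carlo geometry input.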

  4. Direct aperture optimization for IMRT using Monte Carlo generated beamlets

    SciTech Connect

    Bergman, Alanah M.; Bush, Karl; Milette, Marie-Pierre; Popescu, I. Antoniu; Otto, Karl; Duzenli, Cheryl

    2006-10-15

    This work introduces an EGSnrc-based Monte Carlo (MC) beamlet dose distribution matrix into a direct aperture optimization (DAO) algorithm for IMRT inverse planning. The technique is referred to as Monte Carlo-direct aperture optimization (MC-DAO). The goal is to assess whether the combination of accurate Monte Carlo tissue inhomogeneity modeling and DAO inverse planning will improve the dose accuracy and treatment efficiency of treatment planning. Several authors have shown that the presence of small fields and/or inhomogeneous materials in IMRT treatment fields can cause dose calculation errors for algorithms that are unable to accurately model electronic disequilibrium. This issue may also affect the IMRT optimization process because the dose calculation algorithm may not properly model difficult geometries such as targets close to low-density regions (lung, air, etc.). A clinical linear accelerator head is simulated using BEAMnrc (NRC, Canada). A novel in-house algorithm subdivides the resulting phase space into 2.5 × 5.0 mm² beamlets. Each beamlet is projected onto a patient-specific phantom. The beamlet dose contribution to each voxel in a structure-of-interest is calculated using DOSXYZnrc. The multileaf collimator (MLC) leaf positions are linked to the locations of the beamlet dose distributions. The MLC shapes are optimized using direct aperture optimization (DAO). A final Monte Carlo calculation with MLC modeling is used to compute the final dose distribution. Monte Carlo simulation can generate accurate beamlet dose distributions for traditionally difficult-to-calculate geometries, particularly for small fields crossing regions of tissue inhomogeneity. The introduction of DAO results in an additional improvement by increasing the treatment delivery efficiency. For the examples presented in this paper the reduction in the total number of monitor units to deliver is ~33% compared to fluence-based optimization methods.

  5. Fast Monte Carlo for radiation therapy: the PEREGRINE Project

    SciTech Connect

    Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

    1997-11-11

    The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient-specific beam delivery components as well as the patient. Most importantly, comparisons of Monte Carlo dose calculations with currently used algorithms reveal significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.

  6. Accurate characterization of Monte Carlo calculated electron beams for radiotherapy.

    PubMed

    Ma, C M; Faddegon, B A; Rogers, D W; Mackie, T R

    1997-03-01

    Monte Carlo studies of dose distributions in patients treated with radiotherapy electron beams would benefit from generalized models of clinical beams if such models introduce little error into the dose calculations. Methodology is presented for the design of beam models, including their evaluation in terms of how well they preserve the character of the clinical beam, and the effect of the beam models on the accuracy of dose distributions calculated with Monte Carlo. This methodology has been used to design beam models for electron beams from two linear accelerators, with either a scanned beam or a scattered beam. Monte Carlo simulations of the accelerator heads are done in which a record is kept of the particle phase space, including the charge, energy, direction, and position of every particle that emerges from the treatment head, along with a tag recording the details of the particle history. The character of the simulated beams is studied in detail and used to design various beam models, from a simple point source to a sophisticated multiple-source model which treats particles from different parts of a linear accelerator as coming from different sub-sources. Dose distributions calculated using both the phase-space data and the multiple-source model agree within 2%, demonstrating that the model is adequate for the purpose of Monte Carlo treatment planning for the beams studied. Benefits of the beam models over phase-space data for dose calculation are shown to include shorter computation time in the treatment head simulation and a smaller disk space requirement, both of which impact the clinical utility of Monte Carlo treatment planning. PMID:9089592

  7. The Analysis of the Patterns of Radiation-Induced DNA Damage Foci by a Stochastic Monte Carlo Model of DNA Double Strand Breaks Induction by Heavy Ions and Image Segmentation Software

    NASA Technical Reports Server (NTRS)

    Ponomarev, Artem; Cucinotta, F.

    2011-01-01

    Purpose: To create a generalized mechanistic model of DNA damage in human cells that will generate analytical and image data corresponding to experimentally observed DNA damage foci and will help to improve experimental foci yields by simulating spatial foci patterns and resolving problems with quantitative image analysis. Material and Methods: The analysis of patterns of RIFs (radiation-induced foci) produced by low- and high-LET (linear energy transfer) radiation was conducted by using a Monte Carlo model that combines the heavy ion track structure with characteristics of the human genome at the level of chromosomes. The foci patterns were also simulated in the maximum projection plane for flat nuclei. Some data analysis was done with the help of image segmentation software that identifies individual classes of RIFs and colocalized RIFs, which is of importance to some experimental assays that assign DNA damage a dual phosphorescent signal. Results: The model predicts the spatial and genomic distributions of DNA DSBs (double strand breaks) and associated RIFs in a human cell nucleus for a particular dose of either low- or high-LET radiation. We used the model to do analyses for different irradiation scenarios. In the beam-parallel-to-the-disk-of-a-flattened-nucleus scenario we found that the foci appeared to be merged due to their high density, while, in the perpendicular-beam scenario, the foci appeared as one bright spot per hit. The statistics and spatial distribution of regions of densely arranged foci, termed DNA foci chains, were predicted numerically using this model. Another analysis was done to evaluate the number of ion hits per nucleus, which were visible from streaks of closely located foci. In another analysis, our image segmentation software determined foci yields directly from images with single-class or colocalized foci.
    Conclusions: We showed that DSB clustering needs to be taken into account to determine the true DNA damage foci yield, which helps to determine the DSB yield. Using the model analysis, a researcher can refine the DSB yield per nucleus per particle. We showed that purely geometric artifacts, present in the experimental images, can be analytically resolved with the model, and that the quantification of track hits and DSB yields can be provided to experimentalists who use enumeration of radiation-induced foci in immunofluorescence experiments using proteins that detect DNA damage. Automated image segmentation software can prove useful for faster and more precise object counting in colocalized foci images.

  8. A Preliminary Study of In-House Monte Carlo Simulations: An Integrated Monte Carlo Verification System

    SciTech Connect

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  9. A Fast Monte Carlo Simulation for the International Linear Collider Detector

    SciTech Connect

    Furse, D.; /Georgia Tech

    2005-12-15

    The following paper contains details concerning the motivation for, implementation of, and performance of a Java-based fast Monte Carlo simulation for a detector designed to be used in the International Linear Collider. This simulation, presently included in the SLAC ILC group's org.lcsim package, reads in standard model or SUSY events in STDHEP file format, stochastically simulates the blurring in physics measurements caused by intrinsic detector error, and writes out an LCIO format file containing a set of final particles statistically similar to those that would have been found by a full Monte Carlo simulation. In addition to the reconstructed particles themselves, descriptions of the calorimeter hit clusters and tracks that these particles would have produced are also included in the LCIO output. These output files can then be put through various analysis codes in order to characterize the effectiveness of a hypothetical detector at extracting relevant physical information about an event. Such a tool is extremely useful in preliminary detector research and development, as full simulations are extremely cumbersome and taxing on processor resources; by sacrificing what is in many cases inappropriate attention to detail, a fast, efficient Monte Carlo can facilitate, and even make possible, detector physics studies that would be very impractical with the full simulation.
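
    The central fast-simulation idea, replacing full particle tracking with a stochastic blur of the true quantities, can be sketched as Gaussian smearing. The 2% resolution below is an assumed illustrative figure, not a property of the org.lcsim package:

    ```python
    import random

    def smear(true_value, resolution):
        """Fast-simulation smearing: draw the measured value from a
        Gaussian centered on the truth, with a width modeling the
        intrinsic detector error."""
        return random.gauss(true_value, resolution * true_value)

    random.seed(2)
    # Smear a 50 GeV track momentum with an assumed 2% resolution.
    measured = [smear(50.0, 0.02) for _ in range(50000)]
    mean_p = sum(measured) / len(measured)  # unbiased: stays near 50 GeV
    ```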

  10. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    PubMed

    Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P.; de Pablo, Juan J.

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate. PMID:26233107
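
    The minimize-by-sampling strategy can be sketched for a scalar order parameter on a 1D grid. The double-well bulk term and finite-difference gradient penalty below are a toy stand-in for the Landau-de Gennes functional, not the authors' model:

    ```python
    import math
    import random

    def free_energy(q, a=-1.0, b=1.0, kappa=0.5):
        """Discretized 1D free energy: double-well bulk term a*q^2 + b*q^4
        plus a finite-difference gradient penalty."""
        bulk = sum(a * qi ** 2 + b * qi ** 4 for qi in q)
        grad = sum(kappa * (q[i + 1] - q[i]) ** 2 for i in range(len(q) - 1))
        return bulk + grad

    def metropolis_minimize(n_sites=20, steps=100000, temp=0.05):
        """Minimize the free energy by Metropolis sampling from a random
        initial configuration, as in theoretically informed Monte Carlo."""
        random.seed(3)
        q = [random.uniform(-1.0, 1.0) for _ in range(n_sites)]
        f = free_energy(q)
        for _ in range(steps):
            i = random.randrange(n_sites)
            old = q[i]
            q[i] += random.uniform(-0.1, 0.1)
            f_new = free_energy(q)
            if f_new < f or random.random() < math.exp(-(f_new - f) / temp):
                f = f_new
            else:
                q[i] = old
        return q, f

    q, f = metropolis_minimize()
    # Uniform minima sit at q = ±1/sqrt(2) per site (free energy -0.25 each).
    ```

    From a random start the chain settles near one of the degenerate minima without any educated initial guess, which is the practical advantage the abstract highlights.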

  11. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU)

    PubMed Central

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor for GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches. PMID:24298424
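
    The rescaling idea itself, run a single baseline simulation once and reweight its stored photon paths for each new absorption value, can be sketched in scalar Python (the GPU version parallelizes the reweighting). The exponential path-length model is a toy stand-in for fully tracked trajectories:

    ```python
    import math
    import random

    def baseline_path_lengths(n_photons, mean_path=2.0):
        """Single scattering-only Monte Carlo run: store the total path
        length of each detected photon (toy exponential model; a real
        code would track photons through the tissue geometry)."""
        random.seed(4)
        return [random.expovariate(1.0 / mean_path) for _ in range(n_photons)]

    def rescale_reflectance(paths, mu_a):
        """Rescale the single run to a new absorption coefficient mu_a
        via Beer-Lambert weights exp(-mu_a * L), instead of re-running
        the whole simulation."""
        return sum(math.exp(-mu_a * L) for L in paths) / len(paths)

    paths = baseline_path_lengths(200000)
    r_low = rescale_reflectance(paths, 0.1)   # weak absorber
    r_high = rescale_reflectance(paths, 1.0)  # strong absorber
    ```

    For this exponential toy model the exact values are 0.5/(0.5 + mu_a), so the two estimates can be checked analytically; stronger absorption always lowers the remitted signal.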

  12. A comparison of generalized hybrid Monte Carlo methods with and without momentum flip

    SciTech Connect

    Akhmatskaya, Elena; Bou-Rabee, Nawaf; Reich, Sebastian

    2009-04-01

    The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant-energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta be negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that the traditional GHMC/GSHMC implementations with momentum flip display more favorable sampling efficiency, i.e., a higher acceptance rate and faster decorrelation of Monte Carlo samples; the difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions, and we find it to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
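
    A minimal GHMC step with the standard momentum flip, on a 1D Gaussian target, shows where the flip enters. The step sizes and mixing angle are arbitrary illustrative choices, and the paper's modified no-flip balance condition is not implemented here:

    ```python
    import math
    import random

    def leapfrog(q, p, eps=0.2, n_steps=5):
        """Leapfrog integrator for H(q, p) = (q^2 + p^2) / 2."""
        p -= 0.5 * eps * q
        for step in range(n_steps):
            q += eps * p
            if step < n_steps - 1:
                p -= eps * q
        p -= 0.5 * eps * q
        return q, p

    def ghmc_step(q, p, phi=0.5):
        """One GHMC step: partial momentum refreshment, a leapfrog
        proposal, and a Metropolis test; the standard detailed balance
        condition negates the momentum on rejection."""
        p = math.cos(phi) * p + math.sin(phi) * random.gauss(0.0, 1.0)
        h_old = 0.5 * (q * q + p * p)
        q_new, p_new = leapfrog(q, p)
        h_new = 0.5 * (q_new * q_new + p_new * p_new)
        if random.random() < math.exp(h_old - h_new):
            return q_new, p_new
        return q, -p  # the momentum flip discussed in the abstract

    random.seed(5)
    q, p = 1.0, 0.0
    qs = []
    for _ in range(50000):
        q, p = ghmc_step(q, p)
        qs.append(q)
    var_q = sum(x * x for x in qs) / len(qs)  # ~1 for the unit Gaussian
    ```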

  13. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields

    NASA Astrophysics Data System (ADS)

    Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P.; de Pablo, Juan J.

    2015-07-01

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  14. Cluster-Event Biasing in Monte Carlo Applications to Systems Reliability

    SciTech Connect

    Khazen, Michael; Dubi, Arie

    2002-07-15

    Estimation of the probabilities of rare events with significant consequences, e.g., disasters, is one of the most difficult problems in Monte Carlo applications to systems engineering and reliability. The Bernoulli-type estimator used in analog Monte Carlo is characterized by extremely high variance when applied to the estimation of rare events. Variance reduction methods are, therefore, of importance in this field. The present work suggests a parametric nonanalog probability measure based on the superposition of transition biasing and forced-events biasing. The cluster-event model is developed, providing an effective and reliable approximation for the second moment and the benefit, along with a methodology for selecting near-optimal biasing parameters. Numerical examples show a considerable benefit when the method is applied to problems of particular difficulty for the analog Monte Carlo method. The suggested model is applicable to reliability assessment of stochastic networks of complicated topology and high redundancy with component-level repair (i.e., repair applied to an individual failed component while the system is operational).
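
    The gap between the analog Bernoulli estimator and a biased (nonanalog) measure is easy to demonstrate on a textbook rare event. The shifted-normal proposal below is generic importance sampling, not the cluster-event model itself:

    ```python
    import math
    import random

    def analog_estimate(n, threshold=4.0):
        """Analog (Bernoulli) estimator of P(X > threshold) for a
        standard normal: almost every sample scores zero, so the
        relative variance is enormous."""
        random.seed(6)
        return sum(1 for _ in range(n) if random.gauss(0, 1) > threshold) / n

    def biased_estimate(n, threshold=4.0):
        """Nonanalog estimator: sample from a normal shifted to the
        threshold (forced-event style biasing) and weight each score by
        the likelihood ratio of the analog to the biased density."""
        random.seed(6)
        total = 0.0
        for _ in range(n):
            y = random.gauss(threshold, 1.0)
            if y > threshold:
                # w(y) = phi(y) / phi(y - threshold)
                total += math.exp(-threshold * y + 0.5 * threshold ** 2)
        return total / n

    p_analog = analog_estimate(100000)  # a handful of hits at best
    p_biased = biased_estimate(100000)  # close to the exact 3.167e-5
    ```

    At equal cost the biased estimator resolves the probability to well under a percent, while the analog one would need on the order of a billion samples for comparable accuracy.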

  15. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    SciTech Connect

    Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P.; de Pablo, Juan J.

    2015-07-27

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  16. GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA

    NASA Astrophysics Data System (ADS)

    Spiechowicz, J.; Kostur, M.; Machura, L.

    2015-06-01

    This work presents an updated and extended guide to methods for properly accelerating the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well known phenomenon of noise-induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: Gaussian white noise, white Poissonian noise, and the dichotomous process, also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can reach an astonishing factor of about 3000 compared to a typical CPU. This number significantly expands the range of problems solvable by means of stochastic simulations, in some cases even allowing interactive research.
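
    The inner loop that the GPU parallelizes, one independent stochastic trajectory per thread, is an Euler-Maruyama scheme. Below is a scalar CPU sketch with Gaussian white noise and a harmonic potential, chosen so the stationary variance is checkable; a Brownian-motor model would replace the -x force with a tilted periodic one:

    ```python
    import math
    import random

    def euler_maruyama(n_paths=1000, n_steps=2000, dt=0.01, d_coeff=0.5):
        """Euler-Maruyama integration of the overdamped Langevin equation
        dx = -V'(x) dt + sqrt(2D) dW with V(x) = x^2 / 2; the CUDA code
        runs exactly this update for thousands of trajectories at once,
        one per GPU thread."""
        random.seed(7)
        noise_amp = math.sqrt(2.0 * d_coeff * dt)
        xs = [0.0] * n_paths
        for _ in range(n_steps):
            for i in range(n_paths):
                xs[i] += -xs[i] * dt + noise_amp * random.gauss(0.0, 1.0)
        return xs

    xs = euler_maruyama()
    var_x = sum(x * x for x in xs) / len(xs)  # stationary variance -> D = 0.5
    ```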

  17. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media

    SciTech Connect

    Müller, Florian Jenny, Patrick Meyer, Daniel W.

    2013-10-01

    Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
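
    The MLMC telescoping estimator, with cheap coarse levels sampled heavily and fine-level corrections sampled sparsely, can be sketched for a stochastic ordinary differential equation (geometric Brownian motion, where E[S_T] = S0*exp(rT) is known exactly). This is generic MLMC, not the streamline-based solver of the paper:

    ```python
    import math
    import random

    def euler_pair(level, m0=4, r=0.05, sigma=0.2, s0=1.0, t_end=1.0):
        """One coupled sample: a fine-level Euler path of GBM and (for
        level > 0) the coarse path driven by the summed Brownian
        increments, so the pair is strongly correlated and the level
        correction has small variance."""
        n_fine = m0 * 2 ** level
        dt = t_end / n_fine
        s_f, s_c, dw_pair = s0, s0, 0.0
        for step in range(n_fine):
            dw = random.gauss(0.0, math.sqrt(dt))
            s_f += r * s_f * dt + sigma * s_f * dw
            if level > 0:
                dw_pair += dw
                if step % 2 == 1:  # coarse step every two fine steps
                    s_c += r * s_c * (2 * dt) + sigma * s_c * dw_pair
                    dw_pair = 0.0
        return s_f, (s_c if level > 0 else None)

    def mlmc_estimate(n_per_level=(40000, 10000, 2500)):
        """Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
        with most samples spent on the cheap coarse level."""
        random.seed(8)
        total = 0.0
        for level, n in enumerate(n_per_level):
            acc = 0.0
            for _ in range(n):
                fine, coarse = euler_pair(level)
                acc += fine - (coarse if level > 0 else 0.0)
            total += acc / n
        return total

    est = mlmc_estimate()  # E[S_T] = exp(0.05) ≈ 1.0513 for this GBM
    ```

    The decreasing sample counts per level are the source of the speedup: the corrections shrink with the grid spacing, so few expensive fine-level samples are needed.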

  18. Monte Carlo simulation of particle acceleration at astrophysical shocks

    NASA Technical Reports Server (NTRS)

    Campbell, Roy K.

    1989-01-01

    A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence and to the particle and shock parameters. High-energy cosmic-ray spectra derived from Monte Carlo simulations exhibit the observed power-law behavior, just as the spectra derived from analytic calculations based on a diffusion equation do. This high-energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment. Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool; this is the triumph of the Monte Carlo approach. The question of acceleration efficiency is an important one in shock acceleration. Whether shock waves are efficient enough to account for the observed flux of high-energy galactic cosmic rays was examined. The efficiency of the acceleration process depends in detail on the thermal-particle pick-up and hence on the low-energy scattering. One of the goals is the self-consistent derivation of the accelerated particle spectra and the MHD turbulence spectra. Presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles, and the needed downstream turbulence is convected from the upstream region. The present code is to be modified to include a better description of particle scattering (pitch-angle instead of hard-sphere) and an iterative procedure for treating the self-excitation of the MHD turbulence.
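    The energy-gain mechanism can be illustrated with a deliberately simplified test-particle model (a textbook first-order-Fermi toy, not the article's code; the gain and escape parameters are made up): each shock-crossing cycle multiplies the particle energy by a fixed factor, and the particle escapes downstream with a fixed probability per cycle, which produces a power-law spectrum.

    ```python
    import numpy as np

    def toy_shock_spectrum(n_particles=200_000, gain=0.1, p_escape=0.1, seed=0):
        """Toy Fermi acceleration: a particle completes k cycles with
        probability (1-p_escape)**k * p_escape and ends with energy
        E = E0*(1+gain)**k.  The integral spectrum is then N(>E) ~ E**(-s)
        with s = -ln(1-p_escape)/ln(1+gain)."""
        rng = np.random.default_rng(seed)
        cycles = rng.geometric(p_escape, size=n_particles) - 1   # completed cycles
        energies = (1.0 + gain) ** cycles                        # E0 = 1
        # fit the power-law index from the simulated survival function
        ks = np.arange(1, 40)
        survival = np.array([(cycles >= k).mean() for k in ks])
        slope, _ = np.polyfit(ks * np.log(1.0 + gain), np.log(survival), 1)
        return energies, -slope

    energies, index = toy_shock_spectrum()
    # analytic index for these parameters: -ln(0.9)/ln(1.1), about 1.11
    ```

    The fitted index matches the analytic value, and, as the abstract notes for the real simulations, the power-law slope is set by the competition between energy gain and escape rather than by the detailed scattering law.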

  19. Monte Carlo and analytical dose calculations for ocular proton therapy

    NASA Astrophysics Data System (ADS)

    Koch, Nicholas Corey

    Uveal melanoma is a rare but life-threatening form of ocular cancer. Contemporary treatment techniques include proton therapy, which enables conservation of the eye and its useful vision. Dose to the proximal structures is widely believed to play a role in treatment side effects; therefore, reliable dose estimates are required for properly evaluating the therapeutic value and complication risk of treatment plans. Unfortunately, current simplistic dose calculation algorithms can result in errors of up to 30% in the proximal region. In addition, they lack predictive methods for absolute dose per monitor unit (D/MU) values. To facilitate more accurate dose predictions, a Monte Carlo model of an ocular proton nozzle was created and benchmarked against measured dose profiles to within +/-3% or +/-0.5 mm, and against D/MU values to within +/-3%. The benchmarked Monte Carlo model was used to develop and validate a new broad-beam dose algorithm that included the influence of edge-scattered protons on the cross-field intensity profile, the effect of energy straggling in the distal portion of poly-energetic beams, and the proton fluence loss as a function of residual range. Generally, the analytical algorithm predicted relative dose distributions within +/-3% or +/-0.5 mm and absolute D/MU values within +/-3% of Monte Carlo calculations. Slightly larger dose differences were observed at depths less than 7 mm, an effect attributed to the dose contributions of edge-scattered protons. Additional comparisons of Monte Carlo and broad-beam dose predictions were made in a detailed eye model developed in this work, with generally similar findings. Monte Carlo was shown to be an excellent predictor of the measured dose profiles and D/MU values and a valuable tool for developing and validating a broad-beam dose algorithm for ocular proton therapy. The more detailed physics modeling by the Monte Carlo and broad-beam dose algorithms represents an improvement in the accuracy of relative dose predictions over current techniques, and both provide absolute dose predictions. It is anticipated that these improvements can be used to develop treatment strategies that reduce the incidence or severity of treatment complications by sparing normal tissue.

  20. MONITOR- MONTE CARLO INVESTIGATION OF TRAJECTORY OPERATIONS AND REQUIREMENTS

    NASA Technical Reports Server (NTRS)

    Glass, A. B.

    1994-01-01

    The Monte Carlo Investigation of Trajectory Operations and Requirements (MONITOR) program was developed to perform spacecraft mission maneuver simulations for geosynchronous, single maneuver, and comet encounter type trajectories. MONITOR is a multifaceted program which enables the modeling of various orbital sequences and missions, the generation of Monte Carlo simulation statistics, and the parametric scanning of user-requested variables over specified intervals. The MONITOR program has been used primarily to study geosynchronous missions and has the capability to model Space Shuttle deployed satellite trajectories. The ability to perform a Monte Carlo error analysis of user-specified orbital parameters using predicted maneuver execution errors can make MONITOR a significant part of any mission planning and analysis system. The MONITOR program can be executed in four operational modes. In the first mode, analytic state covariance matrix propagation is performed using state transition matrices for the coasting and powered burn phases of the trajectory. A two-body central force field is assumed throughout the analysis. Histograms of the final orbital elements and other state-dependent variables may be evaluated by a Monte Carlo analysis. In the second mode, geosynchronous missions can be simulated from parking orbit injection through station acquisition. A two-body central force field is assumed throughout the simulation. Nominal mission studies can be conducted; however, the main use of this mode lies in evaluating the behavior of pertinent orbital trajectory parameters by making use of a Monte Carlo analysis. In the third mode, MONITOR performs parametric scans of user-requested variables for a nominal mission. Various orbital sequences may be specified; however, primary use is devoted to geosynchronous missions. A maximum of five variables may be scanned at a time. The fourth mode simulates a mission from orbit injection through comet encounter with optional Monte Carlo analysis. Midcourse maneuvers may be made to correct for burn errors and comet movements. The MONITOR program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 255K 8-bit bytes. The MONITOR program was developed in 1980.

  1. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    SciTech Connect

    Arampatzis, Georgios; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 ; Katsoulakis, Markos A.

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, and in particular for lattice kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite-difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by constructing a strongly correlated ("coupled") pair of the perturbed and unperturbed stochastic processes, defined on a common state space. The novelty of our construction is that the coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing the functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz–Kalos–Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the common random number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in supplementary MATLAB source code.
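    The variance advantage of coupling over independent sampling shows up even in a scalar toy problem (not the paper's KMC setting; the distribution and observable are chosen purely for illustration): below, a centered finite difference of d/dθ E[X²] for X exponential with mean θ is driven either by independent uniforms or by common random numbers.

    ```python
    import numpy as np

    def fd_sensitivity(theta=1.0, h=0.01, n=100_000, coupled=True, seed=0):
        """Finite-difference estimate of d/dtheta E[X^2] for X ~ Exp(mean=theta);
        the exact value is 4*theta.  With coupled=True, the same uniforms drive
        the perturbed and unperturbed samples (common random numbers)."""
        rng = np.random.default_rng(seed)
        u_plus = rng.random(n)
        u_minus = u_plus if coupled else rng.random(n)
        x_plus = -(theta + h) * np.log(u_plus)    # inverse-CDF sampling
        x_minus = -(theta - h) * np.log(u_minus)
        return (x_plus**2 - x_minus**2).mean() / (2.0 * h)

    coupled_est = fd_sensitivity(coupled=True)       # tightly concentrated near 4
    independent_est = fd_sensitivity(coupled=False)  # unbiased too, but far noisier
    ```

    With independent draws the two O(1)-variance terms are divided by the small step 2h, so the estimator is dominated by noise; with common random numbers the difference itself is O(h), and the division cancels. The paper's contribution is to design the coupling so this cancellation is optimized for a chosen observable.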

  2. Variance reduction for Fokker-Planck based particle Monte Carlo schemes

    NASA Astrophysics Data System (ADS)

    Gorji, M. Hossein; Andric, Nemanja; Jenny, Patrick

    2015-08-01

    Recently, Fokker-Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1-3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker-Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these methods are not appropriate for practical Fokker-Planck based rarefied gas flow simulations: the deviational schemes considered in this study lead either to instabilities, in the case of two-weight methods, or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette, and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
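    The core idea, a second process with known statistics driven by the same noise, can be sketched outside the rarefied-gas context. The following is a generic correlated-process control variate, not the authors' scheme: a nonlinear SDE is paired with a linear Ornstein-Uhlenbeck process whose mean is known exactly, and both see the same Brownian increments.

    ```python
    import numpy as np

    def control_variate_demo(n_paths=20_000, n_steps=200, T=1.0, seed=0):
        """Estimate E[X_T] for dX = -sin(X) dt + dW by pairing it with the
        OU process dY = -Y dt + dW driven by the SAME Brownian increments.
        E[Y_T] = y0*exp(-T) is known exactly, so the strongly correlated Y
        acts as a control variate for X."""
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        x = np.full(n_paths, 1.0)
        y = np.full(n_paths, 1.0)
        for _ in range(n_steps):
            dW = rng.standard_normal(n_paths) * np.sqrt(dt)
            x += -np.sin(x) * dt + dW   # main process (Euler-Maruyama)
            y += -y * dt + dW           # auxiliary process, same noise
        exact_mean_y = np.exp(-T)
        cov = np.cov(x, y)
        c = cov[0, 1] / cov[1, 1]       # estimated optimal coefficient
        plain = x.mean()
        controlled = (x - c * (y - exact_mean_y)).mean()
        variance_ratio = cov[0, 0] / (x - c * y).var(ddof=1)
        return plain, controlled, variance_ratio

    plain, controlled, ratio = control_variate_demo()
    ```

    Because the two processes share their noise, the residual x − c·y has much smaller variance than x itself, which is the same mechanism the paper exploits for low-Mach flows.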

  3. COMET-PE as an Alternative to Monte Carlo for Photon and Electron Transport

    NASA Astrophysics Data System (ADS)

    Hayward, Robert M.; Rahnema, Farzad

    2014-06-01

    Monte Carlo methods are a central component of radiotherapy treatment planning, shielding design, detector modeling, and other applications. Long calculation times, however, can limit the usefulness of these purely stochastic methods. The coarse mesh method for photon and electron transport (COMET-PE) provides an attractive alternative. By combining stochastic pre-computation with a deterministic solver, COMET-PE achieves accuracy comparable to Monte Carlo methods in only a fraction of the time. The method's implementation has been extended to 3D, and in this work, it is validated by comparison to DOSXYZnrc using a photon radiotherapy benchmark. The comparison demonstrates excellent agreement; of the voxels that received more than 10% of the maximum dose, over 97.3% pass a 2% / 2mm acceptance test and over 99.7% pass a 3% / 3mm test. Furthermore, the method is over an order of magnitude faster than DOSXYZnrc and is able to take advantage of both distributed-memory and shared-memory parallel architectures for increased performance.

  4. Computer Monte Carlo simulation in quantitative resource estimation

    USGS Publications Warehouse

    Root, D.H.; Menzie, W.D.; Scott, W.A.

    1992-01-01

    The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
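    Step (3) of this workflow can be sketched as follows; the deposit-count probabilities, the lognormal grade/tonnage parameters, and the negative grade-tonnage correlation below are purely illustrative, not the survey's values.

    ```python
    import numpy as np

    def simulate_contained_metal(n_trials=20_000, seed=0):
        """Monte Carlo combination of (a) an uncertain number of deposits with
        (b) historical-style grade and tonnage distributions, producing a
        probability distribution of total contained metal."""
        rng = np.random.default_rng(seed)
        # expert-style estimate of the number of undiscovered deposits
        counts = rng.choice([0, 1, 2, 3, 4], size=n_trials,
                            p=[0.30, 0.30, 0.20, 0.15, 0.05])
        rho = -0.3                  # larger deposits tend to have lower grades
        totals = np.zeros(n_trials)
        for i, n_dep in enumerate(counts):
            if n_dep == 0:
                continue
            z1 = rng.standard_normal(n_dep)
            z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_dep)
            tonnage = np.exp(14.0 + 1.5 * z1)   # tonnes of ore, lognormal
            grade = np.exp(-5.0 + 0.8 * z2)     # metal fraction, lognormal
            totals[i] = np.sum(tonnage * grade) # metal in this trial's deposits
        return totals

    totals = simulate_contained_metal()
    p90 = np.percentile(totals, 90)             # e.g. a 1-in-10 upside case
    ```

    The output histogram, rather than a single point estimate, is what makes the assessment usable for economic analysis; the correlated draw of z2 from z1 is one simple way to encode the grade-tonnage dependency the abstract emphasizes.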

  5. Rejection-free Monte Carlo scheme for anisotropic particles.

    PubMed

    Sinkovits, Daniel W; Barr, Stephen A; Luijten, Erik

    2012-04-14

    We extend the geometric cluster algorithm [J. Liu and E. Luijten, Phys. Rev. Lett. 92, 035504 (2004)], a highly efficient, rejection-free Monte Carlo scheme for fluids and colloidal suspensions, to the case of anisotropic particles. This is made possible by adopting hyperspherical boundary conditions. A detailed derivation of the algorithm is presented, along with extensive implementation details as well as benchmark results. We describe how the quaternion notation is particularly suitable for the four-dimensional geometric operations employed in the algorithm. We present results for asymmetric Lennard-Jones dimers and for the Yukawa one-component plasma in hyperspherical geometry. The efficiency gain that can be achieved compared to conventional, Metropolis-type Monte Carlo simulations is investigated for rod-sphere mixtures as a function of rod aspect ratio, rod-sphere diameter ratio, and rod concentration. The effect of curved geometry on physical properties is addressed. PMID:22502505

  6. Monte Carlo simulations of plutonium gamma-ray spectra

    SciTech Connect

    Koenig, Z.M.; Carlson, J.B.; Wang, Tzu-Fang; Ruhter, W.D.

    1993-07-16

    Monte Carlo calculations were investigated as a means of simulating the gamma-ray spectra of Pu. These simulated spectra will be used to develop and evaluate gamma-ray analysis techniques for various nondestructive measurements. Simulated spectra of calculational standards can be used for code intercomparisons, to understand systematic biases, and to estimate minimum detection levels of existing and proposed nondestructive analysis instruments. The capability to simulate gamma-ray spectra from HPGe detectors could significantly reduce the costs of preparing large numbers of real reference materials. MCNP was used for the Monte Carlo transport of the photons. Results from the MCNP calculations were folded in with a detector response function to produce a realistic spectrum. Plutonium spectrum peaks were produced with Lorentzian shapes for the x-rays and with Gaussian distributions. The MGA code determined the Pu isotopics and specific power from this calculated spectrum, and the result was compared to a similar analysis of a measured spectrum.

  7. Engineering local optimality in quantum Monte Carlo algorithms

    SciTech Connect

    Pollet, Lode . E-mail: pollet@itp.phys.ethz.ch; Houcke, Kris Van; Rombouts, Stefan M.A.

    2007-08-10

    Quantum Monte Carlo algorithms based on a world-line representation, such as the worm algorithm and the directed loop algorithm, are among the most powerful numerical techniques for the simulation of non-frustrated spin models and of bosonic models. Both algorithms work in the grand-canonical ensemble and can have a winding number larger than zero. However, they retain a lot of intrinsic degrees of freedom which can be used to optimize the algorithm. We take guidance from the rigorous statements on the globally optimal form of Markov chain Monte Carlo simulations to devise a locally optimal formulation of the worm algorithm, while incorporating ideas from the directed loop algorithm. We provide numerical examples for the soft-core Bose-Hubbard model and various spin-S models.

  8. Molecular physics and chemistry applications of quantum Monte Carlo

    SciTech Connect

    Reynolds, P.J.; Barnett, R.N.; Hammond, B.L.; Lester, W.A. Jr.

    1985-09-01

    We discuss recent work with the diffusion quantum Monte Carlo (QMC) method in its application to molecular systems. The formal correspondence of the imaginary-time Schroedinger equation to a diffusion equation allows one to calculate quantum mechanical expectation values as Monte Carlo averages over an ensemble of random walks. We report work on atomic and molecular total energies, as well as properties including electron affinities, binding energies, reaction barriers, and moments of the electronic charge distribution. A brief discussion is given on how standard QMC must be modified for calculating properties. Calculated energies and properties are presented for a number of molecular systems, including He, F, F-, H2, N, and N2. Recent progress in extending the basic QMC approach to the calculation of ''analytic'' (as opposed to finite-difference) derivatives of the energy is presented, together with an H2 potential-energy curve obtained using analytic derivatives. 39 refs., 1 fig., 2 tabs.

  9. Ab initio Monte Carlo investigation of small lithium clusters.

    SciTech Connect

    Srinivas, S.

    1999-06-16

    Structural and thermal properties of small lithium clusters are studied using ab initio-based Monte Carlo simulations. The ab initio scheme uses a Hartree-Fock/density-functional treatment of the electronic structure combined with jump-walking Monte Carlo sampling of nuclear configurations. Structural forms of Li8 and Li9+ clusters are obtained, and their thermal properties are analyzed in terms of probability distributions of the cluster potential energy, the average potential energy, and the configurational heat capacity, all considered as functions of the cluster temperature. Details of the gradual evolution with temperature of the structural forms sampled are examined. Temperatures characterizing the onset of structural changes and isomer coexistence are identified for both clusters.

  10. Monte Carlo simulation for PET scanners and shields.

    PubMed

    Hasegawa, Tomoyuki; Michel, Christian; Murayama, Hideo; Yamaya, Taiga; Matsuura, Hajime; Tanada, Syuuji

    2001-01-01

    A Monte Carlo simulation code for PET scanners was developed with the Monte Carlo program package GEANT. The present simulation code can handle not only conventional types of PET scanners but also complex detector systems with arbitrary geometrical configurations. All the relevant interactions of photons and electrons are taken into account in all the defined objects, while optical tracking in the scintillation crystals is approximated by simple analytical simulation. In addition to basic PET scanner performance factors, such as sensitivity and scatter fraction, valuable but unmeasurable information, such as photon trajectories and interaction position distributions, can be obtained and represented graphically in various ways. This simulation code has proved useful in analyzing the physics characteristics of existing commercial PET scanners and related shields, and in design studies of new PET scanners. PMID:12766303

  11. Cluster Monte Carlo methods for the FePt Hamiltonian

    NASA Astrophysics Data System (ADS)

    Lyberatos, A.; Parker, G. J.

    2016-02-01

    Cluster Monte Carlo methods for the classical spin Hamiltonian of FePt with long-range exchange interactions are presented. We use a combination of the Swendsen-Wang (or Wolff) and Metropolis algorithms that satisfies the detailed balance condition and ergodicity. The algorithms are tested by calculating the temperature dependence of the magnetization, susceptibility, and heat capacity of L10-FePt nanoparticles in a range including the critical region. The cluster models yield numerical results in good agreement, within statistical error, with the standard single-spin-flip Monte Carlo method. The variation of the spin autocorrelation time with grain size is used to deduce the dynamic exponent of the algorithms. Our cluster models do not provide a more accurate estimate of the magnetic properties at equilibrium.
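    For readers unfamiliar with cluster updates, here is the standard Wolff algorithm for the nearest-neighbour 2D Ising model, a short-range stand-in for the long-range FePt Hamiltonian of the paper: a cluster is grown from a random seed spin with bond probability p = 1 − exp(−2βJ) and flipped as a whole, which preserves detailed balance.

    ```python
    import numpy as np

    def wolff_step(spins, beta, rng):
        """One Wolff cluster update for the nearest-neighbour 2D Ising model
        with J = 1 and periodic boundaries; bond probability 1 - exp(-2*beta)."""
        L = spins.shape[0]
        p_add = 1.0 - np.exp(-2.0 * beta)
        i, j = rng.integers(L, size=2)
        seed_spin = spins[i, j]
        cluster = {(int(i), int(j))}
        frontier = [(int(i), int(j))]
        while frontier:
            x, y = frontier.pop()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nx, ny = nx % L, ny % L
                if (nx, ny) not in cluster and spins[nx, ny] == seed_spin \
                        and rng.random() < p_add:
                    cluster.add((nx, ny))
                    frontier.append((nx, ny))
        for x, y in cluster:
            spins[x, y] *= -1          # flip the whole cluster at once
        return len(cluster)

    rng = np.random.default_rng(0)
    L, beta = 16, 0.6                  # beta > beta_c ~ 0.4407: ordered phase
    spins = np.ones((L, L), dtype=int)
    for _ in range(200):
        wolff_step(spins, beta, rng)
    magnetization = abs(spins.mean())
    ```

    Near criticality such updates flip system-spanning clusters in one move, which is why cluster algorithms can beat single-spin flips on autocorrelation time; extending the bond rule to long-range interactions, as the paper does for FePt, is the nontrivial part.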

  12. Large-cell Monte Carlo renormalization of irreversible growth processes

    NASA Technical Reports Server (NTRS)

    Nakanishi, H.; Family, F.

    1985-01-01

    Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4, and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers and, in any case, demonstrating the danger of using small cells alone because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.

  13. Monte Carlo simulations of the Galileo energetic particle detector

    NASA Astrophysics Data System (ADS)

    Jun, I.; Ratliff, J. M.; Garrett, H. B.; McEntire, R. W.

    2002-09-01

    Monte Carlo radiation transport studies have been performed for the Galileo spacecraft energetic particle detector (EPD) in order to study its response to energetic electrons and protons. Three-dimensional Monte Carlo radiation transport codes, MCNP version 4B (for electrons) and MCNPX version 2.2.3 (for protons), were used throughout the study. The results are presented in the form of "geometric factors" for the high-energy channels studied in this paper: B1, DC2, and DC3 for electrons and B0, DC0, and DC1 for protons. The geometric factor is the energy-dependent detector response function that relates the incident particle fluxes to instrument count rates. The trend of actual data measured by the EPD was successfully reproduced using the geometric factors obtained in this study.

  14. Monte Carlo Simulation of the Milagro Gamma-ray Observatory

    NASA Astrophysics Data System (ADS)

    Vasileiou, V.

    The Milagro gamma-ray observatory is a water-Cherenkov detector capable of observing air showers produced by very high energy gamma rays. The sensitivity and performance of the detector are determined by a detailed Monte Carlo simulation and verified through the observation of gamma-ray sources and the isotropic cosmic-ray background. Corsika is used for simulating the extensive air showers produced by either hadrons (background) or gamma rays (signal). A GEANT4-based application is used for simulating the response of the Milagro detector to the air shower particles reaching the ground. The GEANT4 simulation includes a detailed description of the optical properties of the detector and the response of the photomultiplier tubes. Details and results from the Milagro Monte Carlo simulation will be presented.

  15. Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.

    PubMed

    Leigh, Jessica W; Bryant, David

    2015-09-01

    Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology. PMID:26012871
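    The sampling idea is simple to express: a full factorial grid over d parameters with k values each costs k^d runs, while a sampled design gets a usable space-averaged answer from a fixed budget. The experiment function below is a trivial stand-in for a real (expensive) simulation.

    ```python
    import numpy as np

    def run_experiment(params):
        """Stand-in for an expensive simulation experiment: returns a
        deterministic 'performance score' for one parameter vector."""
        return float(np.exp(-np.sum((params - 0.5) ** 2)))

    rng = np.random.default_rng(0)
    dim = 5
    # an exhaustive grid with 10 values per parameter would need 10**5 runs;
    # a sampled design estimates mean performance from just 200
    samples = rng.random((200, dim))
    scores = np.array([run_experiment(p) for p in samples])
    mean_score = scores.mean()
    std_error = scores.std(ddof=1) / np.sqrt(len(scores))
    ```

    The standard error shrinks as 1/sqrt(n) regardless of the dimension, which is exactly why sampled designs fare better than grids as the number of parameters grows.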

  16. Efficient, Automated Monte Carlo Methods for Radiation Transport

    PubMed Central

    Kong, Rong; Ambrose, Martin; Spanier, Jerome

    2012-01-01

    Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872

  17. Fixed-node diffusion Monte Carlo method for lithium systems

    NASA Astrophysics Data System (ADS)

    Rasch, K. M.; Mitas, L.

    2015-07-01

    We study lithium systems across a range of sizes, specifically the atomic anion, the dimer, a metallic cluster, and the body-centered-cubic crystal, using the fixed-node diffusion Monte Carlo method. The focus is on analysis of the fixed-node errors of each system, and for that purpose we test several orbital sets in order to provide the most accurate nodal hypersurfaces. The calculations include both core and valence electrons in order to avoid any possible impact of pseudopotentials. To quantify the fixed-node errors, we compare our results to other highly accurate calculations and, wherever available, to experimental observations. The results for these Li systems show that the fixed-node diffusion Monte Carlo method achieves accurate total energies, recovers 96-99% of the correlation energy, and estimates binding energies with errors bounded by 0.1 eV/atom.

  18. Lattice-switch Monte Carlo: the fcc-bcc problem

    NASA Astrophysics Data System (ADS)

    Underwood, T. L.; Ackland, G. J.

    2015-09-01

    Lattice-switch Monte Carlo is an efficient method for calculating the free energy difference between two solid phases, or a solid and a fluid phase. Here, we provide a brief introduction to the method, and list its applications since its inception. We then describe a lattice switch for the fcc and bcc phases based on the Bain orientation relationship. Finally, we present preliminary results regarding our application of the method to the fcc and bcc phases in the Lennard-Jones system. Our initial calculations reveal that the bcc phase is unstable, quickly degenerating into some as yet undetermined metastable solid phase. This renders conventional lattice-switch Monte Carlo intractable for this phase. Possible solutions to this problem are discussed.

  19. Monte Carlo Integration Using Spatial Structure of Markov Random Field

    NASA Astrophysics Data System (ADS)

    Yasuda, Muneki

    2015-03-01

    Monte Carlo integration (MCI) techniques are important in various fields. In this study, a new MCI technique for Markov random fields (MRFs) is proposed. MCI consists of two successive parts: the first involves sampling using a technique such as the Markov chain Monte Carlo method, and the second involves an averaging operation over the obtained sample points. In the averaging operation, a simple sample average is often employed. The method proposed in this paper improves the averaging operation by exploiting the spatial structure of the MRF and is mathematically guaranteed to statistically outperform standard MCI using the simple sample average. Moreover, the proposed method can be improved in a systematic manner and is verified by numerical simulations using planar Ising models. In the latter part of this paper, the proposed method is applied to the inverse Ising problem, and we observe that it outperforms maximum pseudo-likelihood estimation.

  20. Bayesian Monte Carlo method for nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Koning, A. J.

    2015-12-01

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which enables us to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an EXFOR-based weight.

  1. Analytic results and weighted Monte Carlo simulations for CDO pricing

    NASA Astrophysics Data System (ADS)

    Stippinger, M.; Rácz, É.; Vető, B.; Bihary, Zs.

    2012-02-01

    We explore the possibilities of importance sampling in the Monte Carlo pricing of a structured credit derivative referred to as a Collateralized Debt Obligation (CDO). Modeling a CDO contract is challenging: since it depends on a pool of (typically ~100) assets, Monte Carlo simulation is often the only feasible approach to pricing. Variance reduction techniques are therefore of great importance. This paper presents an exact analytic solution using the Laplace transform, together with MC importance-sampling results, for an easily tractable intensity-based model of the CDO, namely the compound Poissonian. Furthermore, analytic formulas are derived for the reweighting efficiency. The computational gain is appealing; nevertheless, even in this basic scheme a phase transition can be found, rendering some parameter regimes out of reach. A model-independent transform approach to CDO pricing is also presented.
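    A minimal example of the importance-sampling side of this (a plain exponential change of measure on the default count, not the paper's Laplace-transform scheme; all parameters are illustrative): the portfolio loss is compound Poisson, the intensity is boosted so that large losses become common, and each sample is reweighted by the Poisson likelihood ratio.

    ```python
    import numpy as np

    def tail_prob_is(threshold=30.0, lam=5.0, boost=3.0, n=100_000, seed=0):
        """Importance-sampling estimate of the rare tail P(L > threshold) for a
        compound Poisson loss L = sum of N unit-mean exponential losses,
        N ~ Poisson(lam).  We sample N under a boosted intensity lam*boost and
        reweight by the Poisson likelihood ratio (an unbiased change of measure)."""
        rng = np.random.default_rng(seed)
        lam_q = lam * boost
        counts = rng.poisson(lam_q, size=n)
        # log likelihood ratio Poisson(lam)/Poisson(lam_q) at each sampled count
        log_w = counts * np.log(lam / lam_q) + (lam_q - lam)
        losses = np.array([rng.exponential(1.0, k).sum() for k in counts])
        return float(np.mean(np.exp(log_w) * (losses > threshold)))

    estimate = tail_prob_is()
    ```

    For these parameters the event is far out in the tail (the mean loss is 5), so plain Monte Carlo at the same sample size would see essentially no hits; under the boosted measure hits are routine and the weights restore unbiasedness. Pushing the tilt too far blows up the weight variance, which is the kind of breakdown the abstract's "phase transition" refers to.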

  2. Efficient, automated Monte Carlo methods for radiation transport

    SciTech Connect

    Kong Rong; Ambrose, Martin; Spanier, Jerome

    2008-11-20

    Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.

  3. Radiotherapy Monte Carlo simulation using cloud computing technology.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware, as a proof of principle. PMID:23188699

  4. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-09-01

    Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, a neutron is first selected from the source distribution and projected through the instrument, using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something; if it reaches the detector, it is tallied in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
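
    The transport recipe described here — select a source neutron, project it through the instrument, tally it if it reaches the detector — can be illustrated with a toy one-dimensional loop. The slab geometry, cross-section, and absorption probability below are invented for illustration and are far simpler than anything in MCLIB:

```python
import random

random.seed(1)

def run_instrument(n_neutrons=50000, sigma_t=1.0, absorb_prob=0.5,
                   slab_thickness=2.0, n_bins=10, bin_width=0.5):
    """Toy 1D version of the loop described above: pick a source neutron,
    fly it forward in exponential free paths, let each collision absorb
    it with probability absorb_prob, and histogram the overshoot past
    the slab (a stand-in for 'where and when it was detected')."""
    hist = [0] * n_bins
    transmitted = 0
    for _ in range(n_neutrons):
        x = 0.0
        while True:
            x += random.expovariate(sigma_t)      # free flight to next event
            if x >= slab_thickness:               # escaped: tally at detector
                b = min(int((x - slab_thickness) / bin_width), n_bins - 1)
                hist[b] += 1
                transmitted += 1
                break
            if random.random() < absorb_prob:     # absorbed inside the slab
                break
            # survived the collision; keeps flying forward
    return hist, transmitted

hist, transmitted = run_instrument()
```

    Averaging many such histories is exactly the "repeat and average" integration the abstract describes; a real instrument simulation replaces the toy geometry with the component descriptions in the library.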

  5. Monte Carlo methods for light propagation in biological tissues.

    PubMed

    Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine

    2015-11-01

    Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis-Hastings algorithm. The resulting estimation methods are then compared to the so-called Wang-Prahl (or Wang) method. Finally, the formal representation allows us to derive a non-linear optimization algorithm, close to Levenberg-Marquardt, that is used for the estimation of the scattering and absorption coefficients of the tissue from measurements. PMID:26362232
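
    A generic random-walk Metropolis-Hastings sampler of the kind mentioned above can be sketched as follows; the Gaussian target used in the illustration is a stand-in for the photon-path distribution, which the paper constructs from its radiative transfer representation:

```python
import math, random

random.seed(2)

def metropolis_hastings(log_target, x0=0.0, step=0.5, n_steps=20000):
    """Random-walk Metropolis-Hastings over an unnormalized log-density:
    propose x' = x + N(0, step), accept with probability
    min(1, target(x') / target(x))."""
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step)
        # compare on the log scale; 1 - random() lies in (0, 1]
        if math.log(1.0 - random.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Illustration: sample a unit Gaussian from its unnormalized log-density.
samples = metropolis_hastings(lambda x: -0.5 * x * x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

    Only ratios of the target enter the acceptance test, which is what makes the method usable when the photon-path density is known only up to normalization.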

  6. Monte carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles

    SciTech Connect

    Paul P.H. Wilson

    2005-07-30

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. 
The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods which bias the probability distributions while adjusting atom weights to preserve a fair game (Section 3), and efficiency measures to provide local and global measures of the effectiveness of the non-analog methods (Section 4). Following this development, the MCise (Monte Carlo isotope simulation engine) software was used to explore the efficiency of different modeling techniques (Section 5).
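
    The analog/non-analog distinction in Sections 2 and 3 can be illustrated with a deliberately tiny example: estimate the probability that an atom undergoes a rare reaction within a fixed number of irradiation steps, once with the true per-step probability and once with a biased probability plus the weight correction that keeps the game fair. All numbers are invented for illustration:

```python
import random

random.seed(3)

P_TRUE, P_BIAS, STEPS, N_ATOMS = 1e-4, 0.02, 50, 20000
EXACT = 1.0 - (1.0 - P_TRUE) ** STEPS        # probability the atom reacts at all

def analog_estimate():
    """Analog: every atom reacts with its true per-step probability."""
    hits = sum(
        1 for _ in range(N_ATOMS)
        if any(random.random() < P_TRUE for _ in range(STEPS))
    )
    return hits / N_ATOMS

def nonanalog_estimate():
    """Non-analog: bias the reaction probability upward and adjust the
    atom weight at every step so the estimator stays unbiased."""
    total = 0.0
    for _ in range(N_ATOMS):
        w = 1.0
        for _ in range(STEPS):
            if random.random() < P_BIAS:
                total += w * (P_TRUE / P_BIAS)        # reacted: score corrected weight
                break
            w *= (1.0 - P_TRUE) / (1.0 - P_BIAS)      # survived a biased step
    return total / N_ATOMS

analog = analog_estimate()
fair = nonanalog_estimate()
```

    With the same number of histories, the biased estimator scores far more (appropriately down-weighted) events and so has a much smaller variance, which is the motivation for the efficiency measures of Section 4.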

  7. Monte-Carlo simulation for an aerogel Cherenkov counter

    NASA Astrophysics Data System (ADS)

    Suda, R.; Watanabe, M.; Enomoto, R.; Iijima, T.; Adachi, I.; Hattori, H.; Kuniya, T.; Ooba, T.; Sumiyoshi, T.; Yoshida, Y.

    1998-02-01

    We have developed a Monte-Carlo simulation code for an aerogel Cherenkov counter which is operated under a strong magnetic field such as 1.5 T. This code consists of two parts: photon transportation inside the aerogel tiles, and one-dimensional amplification in a fine-mesh photomultiplier tube. It reproduces the output photo-electron yields to within 5% with only a single free parameter. This code is applied to simulations for a B-factory particle identification system.

  8. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  9. A new method for commissioning Monte Carlo treatment planning systems

    NASA Astrophysics Data System (ADS)

    Aljarrah, Khaled Mohammed

    2005-11-01

    The Monte Carlo method is an accurate method for solving numerical problems in different fields. It has been used for accurate radiation dose calculation for radiation treatment of cancer. However, the modeling of an individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x-rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. The determination of the initial electron beam parameters is important for estimating the dose delivered to patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial-and-error process. In this work an easy and efficient method was developed to determine these parameters. This was accomplished by comparing calculated 3D dose distributions, for a grid of assumed beam energies and radii, with measured data in a water phantom. Different cost functions were studied to choose the appropriate function for the data comparison, and the beam parameters were determined in light of this method. Under the assumption that linacs of the same type have identical geometries and differ only in their initial phase-space parameters, the results of this method can serve as source data to commission other machines of the same type.
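
    The grid-search idea can be sketched as follows. The exponential depth-dose model standing in for the Monte Carlo dose calculation is purely hypothetical, but the structure — a grid of assumed energies and radii scored by a least-squares cost against measurement — mirrors the method described:

```python
import math

def dose_profile(energy, radius, depths):
    """Hypothetical stand-in for a Monte Carlo dose curve: an exponential
    falloff whose rate depends on beam energy and radius."""
    mu = 0.25 / energy + 0.02 * radius
    return [math.exp(-mu * d) for d in depths]

def commission(measured, depths, energies, radii):
    """Grid search over (energy, radius) pairs, scored by a least-squares
    cost against the measured depth-dose curve."""
    best, best_cost = None, float("inf")
    for e in energies:
        for r in radii:
            calc = dose_profile(e, r, depths)
            cost = sum((c - m) ** 2 for c, m in zip(calc, measured))
            if cost < best_cost:
                best, best_cost = (e, r), cost
    return best

depths = [d * 0.5 for d in range(20)]
measured = dose_profile(6.0, 1.0, depths)      # synthetic 'measurement'
best = commission(measured, depths,
                  energies=[4.0, 5.0, 6.0, 7.0], radii=[0.5, 1.0, 1.5])
```

    In the real method each grid point requires a full Monte Carlo dose calculation, so the choice of cost function and grid resolution dominates the commissioning effort.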

  10. Direct Monte Carlo Simulations of Hypersonic Viscous Interactions Including Separation

    NASA Technical Reports Server (NTRS)

    Moss, James N.; Rault, Didier F. G.; Price, Joseph M.

    1993-01-01

    Results of calculations obtained using the direct simulation Monte Carlo method for Mach 25 flow over a control surface are presented. The numerical simulations are for a 35-deg compression ramp at a low-density wind-tunnel test condition. Calculations obtained using both two- and three-dimensional solutions are reviewed, and a qualitative comparison is made with oil-flow pictures that highlight separation and three-dimensional flow structure.

  11. Quantum Monte Carlo study of porphyrin transition metal complexes

    NASA Astrophysics Data System (ADS)

    Koseki, Jun; Maezono, Ryo; Tachikawa, Masanori; Towler, M. D.; Needs, R. J.

    2008-08-01

    Diffusion quantum Monte Carlo (DMC) calculations for transition metal (M) porphyrin complexes (MPo, M=Ni,Cu,Zn) are reported. We calculate the binding energies of the transition metal atoms to the porphin molecule. Our DMC results are in reasonable agreement with those obtained from density functional theory calculations using the B3LYP hybrid exchange-correlation functional. Our study shows that such calculations are feasible with the DMC method.

  12. Variational Monte Carlo study of He-4 in two dimensions

    NASA Astrophysics Data System (ADS)

    Belic, A.; Fantoni, S.

    1993-11-01

    The study of the ground state of liquid and solid He-4 in two dimensions (2D) and at zero temperature, using the Variational Monte Carlo (VMC) method, is presented. The trial wave functions used include the Shadow Wave Function (SWF) and the recently proposed Extended Shadow Wave Function (ESWF), as well as Jastrow (JWF), Jastrow-Nosanow (JNWF) and Jastrow+Triplet Wave Function (JTWF).

  13. Variational Monte Carlo study of 4He in two dimensions

    NASA Astrophysics Data System (ADS)

    Belic, A.; Fantoni, S.

    1994-02-01

    The study of the ground state of liquid and solid 4He in two dimensions (2D) and at zero temperature, using the Variational Monte Carlo (VMC) method, is presented. The trial wave functions used include the shadow wave function (SWF) and the recently proposed extended shadow wave function (ESWF), as well as Jastrow (JWF), Jastrow-Nosanow (JNWF) and Jastrow+Triplet wave function (JTWF).
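
    The VMC procedure used in these studies — Metropolis sampling of |ψ|² and averaging of the local energy — can be shown on the simplest possible system, a 1D harmonic oscillator with a Gaussian trial function (far simpler than the shadow wave functions above):

```python
import math, random

random.seed(4)

def vmc_energy(alpha, n_steps=40000, step=1.0):
    """Variational Monte Carlo for the 1D harmonic oscillator
    (hbar = m = omega = 1) with trial function psi = exp(-alpha * x**2):
    Metropolis-sample |psi|^2 and average the local energy
    E_L(x) = alpha + x**2 * (1/2 - 2 * alpha**2)."""
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if random.random() < math.exp(-2.0 * alpha * (x_new * x_new - x * x)):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_steps

e_opt = vmc_energy(0.5)     # alpha = 1/2 is the exact ground state
e_off = vmc_energy(0.3)     # any other alpha gives a higher energy
```

    At the exact ground state the local energy is constant, so the variance vanishes; the variational principle guarantees that every other trial parameter yields a higher average, which is what a VMC optimization exploits.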

  14. Monte Carlo simulation experiments on box-type radon dosimeter

    NASA Astrophysics Data System (ADS)

    Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid

    2014-11-01

    Epidemiological studies show that inhalation of radon gas (222Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments and underground dwellers. It is, therefore, of paramount importance to measure 222Rn concentrations (Bq/m3) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector such as CR-39 are widely used. The fraction of the radon alphas emitted in the volume of the box-type dosimeter that result in latent track formation on the CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which in turn determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency of the box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. the intrinsic efficiency (η_int) and the alpha hit efficiency (η_hit). η_int depends only on the dimensions of the dosimeter, while η_hit depends on both the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of the intrinsic and hit efficiencies. It was concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particles, a hit efficiency of 100% is achieved; the intrinsic efficiency, however, still plays its role. The Monte Carlo simulation results have been found helpful in understanding the intricate track registration mechanisms in the box-type dosimeter. 
This paper also explains how the radon concentration can be obtained from the experimentally measured etched track density. The program based on the RAHI method is given as well.
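
    The hit-efficiency calculation can be sketched with a simplified geometry: uniform alpha emission inside a rectangular box with the detector on the bottom face. The dimensions and range below are invented, and the actual RAHI method may differ in detail:

```python
import math, random

random.seed(5)

def hit_efficiency(lx, ly, lz, alpha_range, n=50000):
    """Monte Carlo estimate of the hit efficiency in a simplified
    box-dosimeter geometry: alphas are emitted uniformly in the box with
    isotropic directions, and a hit is scored when the straight track
    reaches the detector face at z = 0 within the alpha range."""
    hits = 0
    for _ in range(n):
        x, y, z = (random.uniform(0, lx), random.uniform(0, ly),
                   random.uniform(0, lz))
        cos_t = random.uniform(-1.0, 1.0)            # isotropic direction
        phi = random.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        if dz >= 0.0:
            continue                                  # heading away from the face
        t = -z / dz                                   # path length to the z = 0 plane
        if t <= alpha_range and 0 <= x + t * dx <= lx and 0 <= y + t * dy <= ly:
            hits += 1
    return hits / n

eff_long = hit_efficiency(2.0, 2.0, 2.0, alpha_range=10.0)
eff_short = hit_efficiency(2.0, 2.0, 2.0, alpha_range=1.0)
```

    When the range exceeds the box diagonal, the range constraint never bites and the efficiency is set purely by geometry, which is the 100%-hit-efficiency regime the abstract describes (relative to the geometrically reachable solid angle).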

  15. Hyperparallel tempering Monte Carlo simulation of polymeric systems

    SciTech Connect

    Yan, Qiliang; Pablo, Juan J. de

    2000-07-15

    A new hyperparallel tempering Monte Carlo method is proposed for simulation of complex fluids, including polymeric systems. The method is based on a combination of the expanded grand canonical ensemble (or simple tempering) and the multidimensional parallel tempering techniques. Its usefulness is established by applying it to polymer solutions and blends with large molecular weights. Our numerical results for long molecules indicate that the new algorithm can be significantly more efficient than previously available techniques. (c) 2000 American Institute of Physics.
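
    A minimal parallel-tempering loop in the spirit of this family of methods, here applied to a scalar double-well potential rather than a polymer system, illustrates the replica-swap mechanics:

```python
import math, random

random.seed(6)

def energy(x):
    """Double-well potential with minima at x = -1 and x = +1."""
    return (x * x - 1.0) ** 2

def parallel_tempering(betas, n_sweeps=5000, step=0.4):
    """Minimal parallel tempering: one Metropolis move per replica per
    sweep, then neighbour swaps accepted with probability
    min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    xs = [1.0] * len(betas)              # all replicas start in the right well
    left_count = 0
    for _ in range(n_sweeps):
        for i, b in enumerate(betas):
            x_new = xs[i] + random.uniform(-step, step)
            d = -b * (energy(x_new) - energy(xs[i]))
            if random.random() < math.exp(min(0.0, d)):
                xs[i] = x_new
        for i in range(len(betas) - 1):  # attempt neighbour swaps
            d = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
            if random.random() < math.exp(min(0.0, d)):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
        if xs[0] < 0.0:                  # is the coldest replica in the left well?
            left_count += 1
    return left_count / n_sweeps

frac = parallel_tempering([8.0, 4.0, 2.0, 1.0, 0.5])
```

    The coldest replica alone would stay trapped in its starting well; the swaps let barrier crossings made at high temperature propagate down the ladder, which is the efficiency gain the abstract reports for long polymers.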

  16. The All Particle Monte Carlo method: Atomic data files

    SciTech Connect

    Rathkopf, J.A.; Cullen, D.E.; Perkins, S.T.

    1990-11-06

    Development of the All Particle Method, a project to simulate the transport of particles via the Monte Carlo method, has proceeded on two fronts: data collection and algorithm development. In this paper we report on the status of the data libraries. The data collection is nearly complete with the addition of electron, photon, and atomic data libraries to the existing neutron, gamma ray, and charged particle libraries. The contents of these libraries are summarized.

  17. MCSpearman: Monte Carlo error analyses of Spearman's rank test

    NASA Astrophysics Data System (ADS)

    Curran, Peter A.

    2015-04-01

    Spearman’s rank correlation test is commonly used in astronomy to discern whether a set of two variables are correlated or not. Unlike most other quantities quoted in the astronomical literature, the Spearman rank correlation coefficient is generally quoted with no attempt to estimate the error on its value. This code implements a number of Monte Carlo based methods to estimate the uncertainty on the Spearman rank correlation coefficient.
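
    One common Monte Carlo route to such an uncertainty is bootstrap resampling of the data pairs; the sketch below takes that approach (MCSpearman also implements perturbation of the data within their measurement errors, which is not shown here):

```python
import random

random.seed(7)

def ranks(values):
    """1-based ranks; ties broken by position (adequate for a sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho from the rank-difference formula."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def bootstrap_rho(x, y, n_boot=2000):
    """Monte Carlo (bootstrap) uncertainty on rho: resample the (x, y)
    pairs with replacement and report the mean and spread of rho."""
    n = len(x)
    rhos = []
    for _ in range(n_boot):
        idx = [random.randrange(n) for _ in range(n)]
        rhos.append(spearman_rho([x[i] for i in idx], [y[i] for i in idx]))
    mean = sum(rhos) / n_boot
    err = (sum((r - mean) ** 2 for r in rhos) / (n_boot - 1)) ** 0.5
    return mean, err

# Synthetic correlated data standing in for an astronomical sample.
x = [i + random.gauss(0.0, 1.0) for i in range(30)]
y = [i + random.gauss(0.0, 3.0) for i in range(30)]
rho_mean, rho_err = bootstrap_rho(x, y)
```

    The spread of the resampled coefficients gives the error bar that, as the abstract notes, is usually omitted when rho is quoted.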

  18. OBJECT KINETIC MONTE CARLO SIMULATIONS OF MICROSTRUCTURE EVOLUTION

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2013-09-30

    The objective is to report the development of the flexible object kinetic Monte Carlo (OKMC) simulation code KSOME (kinetic simulation of microstructure evolution) which can be used to simulate microstructure evolution of complex systems under irradiation. In this report we briefly describe the capabilities of KSOME and present preliminary results for short term annealing of single cascades in tungsten at various primary-knock-on atom (PKA) energies and temperatures.

  19. Optimization of Ballistic Deflection Transistors by Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Millithaler, J.-F.; Iñiguez-de-la-Torre, I.; Mateos, J.; González, T.; Margala, M.

    2015-10-01

    This paper presents an optimization of the current-voltage characteristic of Ballistic Deflection Transistors. The implementation of an adequate surface-charge model in a Monte Carlo tool shows very good agreement with the available experimental data and allows us to predict the influence of different parameters, such as temperature and channel and trench dimensions, on the device output. These results are of importance for further use of this device in logical circuit applications.

  20. Monte Carlo Study on Anomalous Carrier Diffusion in Inhomogeneous Semiconductors

    NASA Astrophysics Data System (ADS)

    Mori, N.; Hill, R. J. A.; Patanè, A.; Eaves, L.

    2015-10-01

    We perform ensemble Monte Carlo simulations of electron diffusion in high mobility inhomogeneous InAs layers. Electrons move ballistically for short times while moving diffusively for sufficiently long times. We find that electrons show anomalous diffusion in the intermediate time domain. Our study suggests that electrons in inhomogeneous InAs could be used to experimentally explore generalized random walk phenomena, which, some studies assert, also occur naturally in the motion of animal foraging paths.

  1. Monte Carlo approach to nuclei and nuclear matter

    SciTech Connect

    Fantoni, Stefano; Gandolfi, Stefano; Illarionov, Alexey Yu.; Schmidt, Kevin E.; Pederiva, Francesco

    2008-10-13

    We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.

  2. Monte Carlo verification of gel dosimetry measurements for stereotactic radiotherapy.

    PubMed

    Kairn, T; Taylor, M L; Crowe, S B; Dunn, L; Franich, R D; Kenny, J; Knight, R T; Trapp, J V

    2012-06-01

    The quality assurance of stereotactic radiotherapy and radiosurgery treatments requires the use of small-field dose measurements that can be experimentally challenging. This study used Monte Carlo simulations to establish that PAGAT dosimetry gel can be used to provide accurate, high-resolution, three-dimensional dose measurements of stereotactic radiotherapy fields. A small cylindrical container (4 cm height, 4.2 cm diameter) was filled with PAGAT gel, placed in the parietal region inside a CIRS head phantom and irradiated with a 12-field stereotactic radiotherapy plan. The resulting three-dimensional dose measurement was read out using an optical CT scanner and compared with the treatment planning prediction of the dose delivered to the gel during the treatment. A BEAMnrc/DOSXYZnrc simulation of this treatment was completed, to provide a standard against which the accuracy of the gel measurement could be gauged. The three-dimensional dose distributions obtained from Monte Carlo and from the gel measurement were found to be in better agreement with each other than with the dose distribution provided by the treatment planning system's pencil beam calculation. Both sets of data showed close agreement with the treatment planning system's dose distribution through the centre of the irradiated volume and substantial disagreement with the treatment planning system at the penumbrae. The Monte Carlo calculations and gel measurements both indicated that the treated volume was up to 3 mm narrower, with steeper penumbrae and more variable out-of-field dose, than predicted by the treatment planning system. The Monte Carlo simulations allowed the accuracy of the PAGAT gel dosimeter to be verified in this case, allowing PAGAT gel to be utilized in the measurement of dose from stereotactic and other radiotherapy treatments, with greater confidence in the future. PMID:22572565

  3. Monte Carlo verification of gel dosimetry measurements for stereotactic radiotherapy

    NASA Astrophysics Data System (ADS)

    Kairn, T.; Taylor, M. L.; Crowe, S. B.; Dunn, L.; Franich, R. D.; Kenny, J.; Knight, R. T.; Trapp, J. V.

    2012-06-01

    The quality assurance of stereotactic radiotherapy and radiosurgery treatments requires the use of small-field dose measurements that can be experimentally challenging. This study used Monte Carlo simulations to establish that PAGAT dosimetry gel can be used to provide accurate, high-resolution, three-dimensional dose measurements of stereotactic radiotherapy fields. A small cylindrical container (4 cm height, 4.2 cm diameter) was filled with PAGAT gel, placed in the parietal region inside a CIRS head phantom and irradiated with a 12-field stereotactic radiotherapy plan. The resulting three-dimensional dose measurement was read out using an optical CT scanner and compared with the treatment planning prediction of the dose delivered to the gel during the treatment. A BEAMnrc/DOSXYZnrc simulation of this treatment was completed, to provide a standard against which the accuracy of the gel measurement could be gauged. The three-dimensional dose distributions obtained from Monte Carlo and from the gel measurement were found to be in better agreement with each other than with the dose distribution provided by the treatment planning system's pencil beam calculation. Both sets of data showed close agreement with the treatment planning system's dose distribution through the centre of the irradiated volume and substantial disagreement with the treatment planning system at the penumbrae. The Monte Carlo calculations and gel measurements both indicated that the treated volume was up to 3 mm narrower, with steeper penumbrae and more variable out-of-field dose, than predicted by the treatment planning system. The Monte Carlo simulations allowed the accuracy of the PAGAT gel dosimeter to be verified in this case, allowing PAGAT gel to be utilized in the measurement of dose from stereotactic and other radiotherapy treatments, with greater confidence in the future. 
Experimental aspects of this work were originally presented at the Engineering and Physical Sciences in Medicine Conference (EPSM-ABEC), Melbourne, 2010.

  4. Recent advances in the Mercury Monte Carlo particle transport code

    SciTech Connect

    Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.

    2013-07-01

    We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)

  5. Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.

    PubMed

    Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L

    2003-02-01

    Recent clinical results have demonstrated the promise of targeted radionuclide therapy for advanced cancer. As the success of this emerging form of radiation therapy grows, accurate treatment planning and radiation dose simulations are likely to become increasingly important. To address this need, we have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA system. The goal of the MINERVA dose calculation system is to provide 3-D Monte Carlo simulation-based dosimetry for radiation therapy, focusing on experimental and emerging applications. For molecular targeted radionuclide therapy applications, MINERVA calculates patient-specific radiation dose estimates using computed tomography to describe the patient anatomy, combined with a user-defined 3-D radiation source. This paper describes the validation of the 3-D Monte Carlo transport methods to be used in MINERVA for molecular targeted radionuclide dosimetry. It reports comparisons of MINERVA dose simulations with published absorbed fraction data for distributed, monoenergetic photon and electron sources, and for radioisotope photon emission. MINERVA simulations are generally within 2% of EGS4 results and 10% of MCNP results, but differ by up to 40% from the recommendations given in MIRD Pamphlets 3 and 8 for identical medium composition and density. For several representative source and target organs in the abdomen and thorax, specific absorbed fractions calculated with the MINERVA system are generally within 5% of those published in the revised MIRD Pamphlet 5 for 100 keV photons. However, results differ by up to 23% for the adrenal glands, the smallest of our target organs. Finally, we show examples of Monte Carlo simulations in a patient-like geometry for a source of uniform activity located in the kidney. PMID:12667310

  6. Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model

    SciTech Connect

    D.P. Stotler

    2005-06-09

    The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.

  7. Monte Carlo approach to nuclei and nuclear matter

    NASA Astrophysics Data System (ADS)

    Fantoni, Stefano; Gandolfi, Stefano; Illarionov, Alexey Yu.; Schmidt, Kevin E.; Pederiva, Francesco

    2008-10-01

    We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.

  8. Regenerative Markov Chain Monte Carlo for any distribution.

    SciTech Connect

    Minh, D.

    2012-01-01

    While Markov chain Monte Carlo (MCMC) methods are frequently used for difficult calculations in a wide range of scientific disciplines, they suffer from a serious limitation: their samples are not independent and identically distributed. Consequently, estimates of expectations are biased if the initial value of the chain is not drawn from the target distribution. Regenerative simulation provides an elegant solution to this problem. In this article, we propose a simple regenerative MCMC algorithm to generate variates for any distribution.

  9. Monte-Carlo simulation of heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Schenke, Björn; Jeon, Sangyong; Gale, Charles

    2011-04-01

    We present Monte-Carlo simulations for heavy-ion collisions combining PYTHIA and the McGill-AMY formalism to describe the evolution of hard partons in a soft background, modelled using hydrodynamic simulations. MARTINI generates full event configurations in the high pT region that take into account thermal QCD and QED effects as well as effects of the evolving medium. This way it is possible to perform detailed quantitative comparisons with experimental observables.

  10. Applications of Monte Carlo simulations of gamma-ray spectra

    SciTech Connect

    Clark, D.D.

    1995-12-31

    A short, convenient computer program based on the Monte Carlo method that was developed to generate simulated gamma-ray spectra has been found to have useful applications in research and teaching. In research, we use it to predict spectra in neutron activation analysis (NAA), particularly in prompt gamma-ray NAA (PGNAA). In teaching, it is used to illustrate the dependence of detector response functions on the nature of gamma-ray interactions, the incident gamma-ray energy, and detector geometry.

  11. Improved diffusion coefficients generated from Monte Carlo codes

    SciTech Connect

    Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.

    2013-07-01

    Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the method of homogenization; whether to weight either fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices with L2 norm error of 3.6%. This error is reduced significantly to 0.27% when weighting fine-group diffusion coefficients by the flux and applying a correction to the diffusion approximation. Noticeable tilting in reconstructed fluxes and pin powers was reduced when applying these corrections. (authors)

  12. Monte Carlo modelling of positron transport in real world applications

    NASA Astrophysics Data System (ADS)

    Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj

    2014-05-01

    Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gasses led to the establishment of good cross-section sets for positron interaction with gasses commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.

  13. Autocorrelation and Dominance Ratio in Monte Carlo Criticality Calculations

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Kornreich, Drew E.

    2003-11-15

    The cycle-to-cycle correlation (autocorrelation) in Monte Carlo criticality calculations is analyzed in terms of the dominance ratio of the fission kernel. The mathematical analysis focuses on how the eigenfunctions of a fission kernel decay when operated on by the cycle-to-cycle error propagation operator of the Monte Carlo stationary source distribution. The analytical results obtained can be summarized as follows: when the dominance ratio of a fission kernel is close to unity, the autocorrelation of the k-effective tallies is weak and may be negligible, while the autocorrelation of the source distribution is strong and decays slowly. The practical implication is that when one analyzes a critical reactor with a large dominance ratio by Monte Carlo methods, the confidence interval estimation of the fission rate and other quantities at individual locations must account for the strong autocorrelation. Numerical results are presented for sample problems with dominance ratios of 0.85-0.99, where Shannon and relative entropies are utilized to exclude the influence of initial nonstationarity.
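
    The slow error decay behind this autocorrelation can be illustrated with a toy two-mode kernel (the eigenvalues below are hypothetical, not from the paper): in power iteration, the second-mode component of the source shrinks by the dominance ratio each cycle, so for a dominance ratio near unity successive source distributions stay strongly correlated.

```python
# A diagonal 2x2 "fission kernel" written in its eigenbasis; DR = k2/k1 = 0.9.
K = [[1.0, 0.0],
     [0.0, 0.9]]

def apply(K, s):
    return [sum(K[i][j] * s[j] for j in range(2)) for i in range(2)]

s = [1.0, 1.0]                 # fundamental mode plus an equal error component
errors = []
for cycle in range(20):
    s = apply(K, s)
    s = [x / s[0] for x in s]  # renormalize to the fundamental component
    errors.append(s[1])        # remaining second-mode (error) fraction

# Each cycle shrinks the error by the dominance ratio, so the decay per cycle
# is constant; for DR near 1 convergence (and decorrelation) is slow.
ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
print(ratios[0])
```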

  14. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
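
    A minimal sketch of the classification step described above, assuming synthetic two-parameter dispersion data and a hypothetical failure criterion (the real tool additionally uses kernel density estimation and sequential feature selection to rank the parameters):

```python
import random

random.seed(1)

def failure(x):                        # hypothetical failure criterion
    return x[0] + x[1] > 1.2

# 500 Monte Carlo dispersion cases over two normalized design parameters
cases = [(random.random(), random.random()) for _ in range(500)]
training = [(c, failure(c)) for c in cases]

def knn_predict(query, k=5):
    """Classify a new case by majority vote of its k nearest neighbors."""
    nearest = sorted(training,
                     key=lambda cl: (cl[0][0] - query[0]) ** 2
                                  + (cl[0][1] - query[1]) ** 2)[:k]
    return sum(lab for _, lab in nearest) > k // 2

print(knn_predict((0.1, 0.2)), knn_predict((0.9, 0.8)))   # -> False True
```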

  15. Geometric Templates for Improved Tracking Performance in Monte Carlo Codes

    NASA Astrophysics Data System (ADS)

    Nease, Brian R.; Millman, David L.; Griesheimer, David P.; Gill, Daniel F.

    2014-06-01

    One of the most fundamental parts of a Monte Carlo code is its geometry kernel. This kernel not only affects particle tracking (i.e., run-time performance), but also shapes how users will input models and collect results for later analyses. A new framework based on geometric templates is proposed that optimizes performance (in terms of tracking speed and memory usage) and simplifies user input for large scale models. While some aspects of this approach currently exist in different Monte Carlo codes, the optimization aspect has not been investigated or applied. If Monte Carlo codes are to be realistically used for full core analysis and design, this type of optimization will be necessary. This paper describes the new approach and the implementation of two template types in MC21: a repeated ellipse template and a box template. Several different models are tested to highlight the performance gains that can be achieved using these templates. Though the exact gains are naturally problem dependent, results show that runtime and memory usage can be significantly reduced when using templates, even as problems reach realistic model sizes.
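
    The template idea can be sketched as follows (a hypothetical repeated-pin-cell lookup, not MC21's actual implementation): instead of storing one geometry object per lattice cell, tracking folds a global point into the local coordinates of a single template cell, so memory stays roughly constant as the lattice grows.

```python
PITCH = 1.26                       # lattice pitch in cm (illustrative)

def to_template(x, y):
    """Map a global (x, y) to (cell indices, local coordinates in one cell)."""
    i, j = int(x // PITCH), int(y // PITCH)
    return (i, j), (x - i * PITCH, y - j * PITCH)

def in_fuel(local, radius=0.41):   # the single template: one centered fuel pin
    lx, ly = local
    cx = cy = PITCH / 2.0
    return (lx - cx) ** 2 + (ly - cy) ** 2 <= radius ** 2

cell, local = to_template(3.15, 0.63)   # a global point in a large lattice
print(cell, in_fuel(local))             # -> (2, 0) True
```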

  16. Uncertainties in external dosimetry: analytical vs. Monte Carlo method.

    PubMed

    Behrens, R

    2010-03-01

    Over the years, the International Commission on Radiological Protection (ICRP) and other organisations have formulated recommendations regarding uncertainty in occupational dosimetry. The most practical and widely accepted recommendations are the trumpet curves. To check whether routine dosemeters comply with them, a Technical Report on uncertainties issued by the International Electrotechnical Commission (IEC) can be used. In this report, the analytical method is applied to assess the uncertainty of a dosemeter fulfilling an IEC standard. On the other hand, the Monte Carlo method can be used to assess the uncertainty. In this work, a direct comparison of the analytical and the Monte Carlo methods is performed using the same input data. It turns out that the analytical method generally overestimates the uncertainty by about 10-30 %. Therefore, the results often do not comply with the recommendations of the ICRP regarding uncertainty. The results of the more realistic uncertainty evaluation using the Monte Carlo method usually comply with the recommendations of the ICRP. This is confirmed by results seen in regular tests in Germany. PMID:19942627
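
    The mechanics of the two approaches can be sketched on a toy multiplicative response model (hypothetical uncertainty components, not the IEC dosemeter model, so this does not reproduce the 10-30 % finding):

```python
import math, random, statistics

random.seed(42)
u_a, u_b = 0.10, 0.15            # relative standard uncertainties (hypothetical)

# Analytical (GUM-style) combination for independent multiplicative factors.
u_analytic = math.sqrt(u_a ** 2 + u_b ** 2)

# Monte Carlo: sample the factors and take the spread of the response
# H = H0 * a * b directly (H0 cancels in the relative uncertainty).
samples = [random.gauss(1.0, u_a) * random.gauss(1.0, u_b)
           for _ in range(200_000)]
u_mc = statistics.stdev(samples) / statistics.fmean(samples)

print(f"analytic {u_analytic:.4f}  monte carlo {u_mc:.4f}")
```

    For Gaussian inputs and a near-linear model the two agree closely; the discrepancies reported above arise when the analytical method's distributional assumptions do not hold.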

  17. Accelerating Monte Carlo power studies through parametric power estimation.

    PubMed

    Ueckert, Sebastian; Karlsson, Mats O; Hooker, Andrew C

    2016-04-01

    Estimating the power for a non-linear mixed-effects model-based analysis is challenging due to the lack of a closed-form analytic expression. Often, computationally intensive Monte Carlo studies need to be employed to evaluate the power of a planned experiment. This is especially time consuming if full power versus sample size curves are to be obtained. A novel parametric power estimation (PPE) algorithm utilizing the theoretical distribution of the alternative hypothesis is presented in this work. The PPE algorithm estimates the unknown non-centrality parameter in the theoretical distribution from a limited number of Monte Carlo simulations and estimations. The estimated parameter scales linearly with study size, allowing quick generation of the full power versus study size curve. A comparison of the PPE with the classical, purely Monte Carlo-based power estimation (MCPE) algorithm for five diverse pharmacometric models showed an excellent agreement between both algorithms, with a low bias of less than 1.2 % and higher precision for the PPE. The power extrapolated from a specific study size was in very good agreement with power curves obtained with the MCPE algorithm. PPE represents a promising approach to accelerate the power calculation for non-linear mixed-effects models. PMID:26934878
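
    The parametric idea can be sketched on a simple z-test analog (the paper treats non-linear mixed-effects models and a noncentral chi-square distribution; this toy keeps the same two-step structure, with all numbers hypothetical):

```python
import random, statistics
from statistics import NormalDist

random.seed(0)
effect, sd = 0.3, 1.0                      # hypothetical effect size and noise
z_crit = NormalDist().inv_cdf(0.975)       # two-sided 5% critical value

def simulate_z(n):
    """One simulated study of size n, returning its z statistic."""
    data = [random.gauss(effect, sd) for _ in range(n)]
    return statistics.mean(data) / (sd / n ** 0.5)

# Step 1: a small Monte Carlo at one study size estimates the noncentrality.
n0, reps = 50, 200
delta0 = statistics.mean(simulate_z(n0) for _ in range(reps))

# Step 2: the noncentrality scales as sqrt(n), so the full power-versus-size
# curve follows analytically, with no further simulation.
def ppe_power(n):
    delta = delta0 * (n / n0) ** 0.5
    return 1.0 - NormalDist().cdf(z_crit - delta)

print({n: round(ppe_power(n), 3) for n in (25, 50, 100, 200)})
```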

  18. Monte Carlo simulations of particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Baring, Matthew G.; Ellison, Donald C.; Jones, Frank C.

    1994-01-01

    The Fermi shock acceleration mechanism may be responsible for the production of high-energy cosmic rays in a wide variety of environments. Modeling of this phenomenon has largely focused on plane-parallel shocks, and one of the most promising techniques for its study is the Monte Carlo simulation of particle transport in shocked fluid flows. One of the principal problems in shock acceleration theory is the mechanism and efficiency of injection of particles from the thermal gas into the accelerated population. The Monte Carlo technique is ideally suited to addressing the injection problem directly, and previous applications of it to the quasi-parallel Earth bow shock led to very successful modeling of proton and heavy ion spectra, as well as other observed quantities. Recently this technique has been extended to oblique shock geometries, in which the upstream magnetic field makes a significant angle Theta(sub B1) to the shock normal. Spectral results from test particle Monte Carlo simulations of cosmic-ray acceleration at oblique, nonrelativistic shocks are presented. The results show that low Mach number shocks have injection efficiencies that are relatively insensitive to (though not independent of) the shock obliquity, but that there is a dramatic drop in efficiency for shocks of Mach number 30 or more as the obliquity increases above 15 deg. Cosmic-ray distributions just upstream of the shock reveal prominent bumps at energies below the thermal peak; these disappear far upstream but might be observable features close to astrophysical shocks.

  19. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  20. Chemical accuracy from quantum Monte Carlo for the benzene dimer.

    PubMed

    Azadi, Sam; Cohen, R E

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory with perturbative triples at the complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods. PMID:26374029
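
    As a minimal illustration of the variational Monte Carlo machinery mentioned above (the hydrogen atom in atomic units, not the benzene-dimer calculation): Metropolis sampling of |psi|^2 for the trial wave function psi = exp(-alpha*r), averaging the local energy E_L = -alpha^2/2 + (alpha - 1)/r. At alpha = 1 the trial state is exact and the energy is -0.5 hartree with zero variance.

```python
import math, random

random.seed(7)
alpha = 1.0                              # variational parameter of psi = exp(-alpha*r)

def local_energy(r):
    # E_L = -(1/2) laplacian(psi)/psi - 1/r = -alpha**2/2 + (alpha - 1)/r
    return -0.5 * alpha ** 2 + (alpha - 1.0) / r

pos, step, energies = [0.5, 0.5, 0.5], 0.4, []
for i in range(40_000):
    trial = [x + random.uniform(-step, step) for x in pos]
    r_old = math.sqrt(sum(x * x for x in pos))
    r_new = math.sqrt(sum(x * x for x in trial))
    # Metropolis accept/reject on |psi|^2 = exp(-2*alpha*r)
    if random.random() < math.exp(-2.0 * alpha * (r_new - r_old)):
        pos = trial
    if i >= 5_000:                       # discard equilibration steps
        energies.append(local_energy(math.sqrt(sum(x * x for x in pos))))

vmc_energy = sum(energies) / len(energies)
print(vmc_energy)                        # -0.5 exactly, since E_L is constant at alpha = 1
```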

  1. Quantitative PET Imaging Using A Comprehensive Monte Carlo System Model

    SciTech Connect

    Southekal, S.; Purschke, M.L.; Schlyer, D.J.; Vaska, P.

    2011-10-01

    We present the complete image generation methodology developed for the RatCAP PET scanner, which can be extended to other PET systems for which a Monte Carlo-based system model is feasible. The miniature RatCAP presents a unique set of advantages as well as challenges for image processing, and a combination of conventional methods and novel ideas developed specifically for this tomograph have been implemented. The crux of our approach is a low-noise Monte Carlo-generated probability matrix with integrated corrections for all physical effects that impact PET image quality. The generation and optimization of this matrix are discussed in detail, along with the estimation of correction factors and their incorporation into the reconstruction framework. Phantom studies and Monte Carlo simulations are used to evaluate the reconstruction as well as individual corrections for random coincidences, photon scatter, attenuation, and detector efficiency variations in terms of bias and noise. Finally, a realistic rat brain phantom study reconstructed using this methodology is shown to recover > 90% of the contrast for hot as well as cold regions. The goal has been to realize the potential of quantitative neuroreceptor imaging with the RatCAP.

  2. Valence-bond quantum Monte Carlo algorithms defined on trees.

    PubMed

    Deschner, Andreas; Sørensen, Erik S.

    2014-09-01

    We present a class of algorithms for performing valence-bond quantum Monte Carlo of quantum spin models. Valence-bond quantum Monte Carlo is a projective T=0 Monte Carlo method based on sampling of a set of operator strings that can be viewed as forming a treelike structure. The algorithms presented here utilize the notion of a worm that moves up and down this tree and changes the associated operator string. In quite general terms, we derive a set of equations whose solutions correspond to a whole class of algorithms. As specific examples of this class of algorithms, we focus on two cases: the bouncing worm algorithm, in which updates are always accepted by allowing the worm to bounce up and down the tree, and the driven worm algorithm, in which a single parameter controls how far up the tree the worm reaches before turning around. The latter algorithm involves only a single bounce, where the worm turns from going up the tree to going down. The presence of the control parameter necessitates the introduction of an acceptance probability for the update. PMID:25314561

  3. Monte Carlo studies for medical imaging detector optimization

    NASA Astrophysics Data System (ADS)

    Fois, G. R.; Cisbani, E.; Garibaldi, F.

    2016-02-01

    This work reports on the Monte Carlo optimization studies of detection systems for Molecular Breast Imaging with radionuclides and Bremsstrahlung Imaging in nuclear medicine. Molecular Breast Imaging requires competing performances of the detectors: high efficiency and high spatial resolution; in this direction, an innovative device has been proposed which combines images from two different, and somewhat complementary, detectors at opposite sides of the breast. The dual-detector design allows for spot compression and improves the performance of the overall system significantly if all components are well tuned and the layout and processing are carefully optimized; here the Monte Carlo simulation represents a valuable tool. In recent years, the potential of Bremsstrahlung Imaging in internal radiotherapy (with beta- radiopharmaceuticals) has clearly emerged; Bremsstrahlung Imaging is currently performed with existing detectors generally used for single-photon radioisotopes. We are evaluating the possibility of adapting an existing compact gamma camera and optimizing its performance by Monte Carlo simulation for Bremsstrahlung imaging with photons emitted by the beta- decay of 90Y.

  4. Transient analysis of CML inverter using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Galdin, Sylvie; Musalem, François-Xavier; Dollfus, Philippe; Mouis, Mireille; Hesto, Patrice

    1996-02-01

    Monte Carlo simulation of a complete CML gate composed of two submicron bipolar transistors has been performed. The gate delay time τD is usually calculated, according to a widely-used formula, as a weighted sum of time constants deduced from the transistor small-signal parameters. In this paper we analyse the validity of this approach. The weighting factor values used in the τD expression are determined from this expression and transient Monte Carlo simulation results. Comparison between these results and those given in the literature shows that the best agreement is obtained if two conditions are fulfilled: the transit time used in the τD expression is reduced to the intrinsic base transit time, and the access base resistance is limited to the extrinsic one. However, even in this case, the weighting factors associated with depletion capacitances are called into question by the Monte Carlo analysis. A set of weighting factor values is proposed, which leads to a discrepancy between the simulated and calculated τD within 10%.

  5. Comparison of deterministic and Monte Carlo methods in shielding design.

    PubMed

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computation time needed to find solutions, while the disadvantages relate to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions and can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in low-energy shielding calculations, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing a comparison of the capability of both Monte Carlo and deterministic methods in day-to-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions. PMID:16381723
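
    The basic contrast can be sketched with a toy slab problem (illustrative attenuation coefficient and thickness, not the MicroShield/MCNP study): the deterministic point kernel gives the uncollided transmission exp(-mu*x), which a build-up factor would then correct for scatter, while the Monte Carlo estimate simply counts photons whose sampled free path exceeds the slab.

```python
import math, random

random.seed(3)
mu, thickness = 0.2, 5.0          # attenuation (1/cm) and slab (cm), hypothetical

# Deterministic point kernel: uncollided transmission exp(-mu * x).
analytic = math.exp(-mu * thickness)

# Monte Carlo: sample exponential free paths and count slab crossings.
n = 100_000
transmitted = sum(1 for _ in range(n) if random.expovariate(mu) > thickness)
mc = transmitted / n

print(f"point kernel {analytic:.4f}  monte carlo {mc:.4f}")
```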

  6. MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS

    SciTech Connect

    R. ESTEP; ET AL

    2000-06-01

    Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
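
    A minimal sketch of the replicate-randomization idea, with hypothetical counts and a stand-in linear analysis in place of the TGS/CTEN algorithms (a Gaussian approximation to Poisson counting statistics is assumed, which is adequate for large counts):

```python
import random, statistics

random.seed(11)
measured = [1200.0, 850.0, 430.0]        # hypothetical detector channels
weights = [0.004, 0.006, 0.010]          # hypothetical calibration weights

def analysis(counts):
    """Stand-in for a complex algorithm (e.g. a tomographic reconstruction)."""
    return sum(w * c for w, c in zip(weights, counts))

# Randomize the counts into N replicate sets and re-analyze each one; the
# spread of the results estimates the propagated standard deviation.
N = 1_000
replicates = [analysis([random.gauss(c, c ** 0.5) for c in measured])
              for _ in range(N)]
est_sigma = statistics.stdev(replicates)

# For this linear stand-in the exact answer is known, so we can check it.
exact = sum(w * w * c for w, c in zip(weights, measured)) ** 0.5
print(f"replicate estimate {est_sigma:.4f}  exact {exact:.4f}")
```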

  7. Chemical accuracy from quantum Monte Carlo for the benzene dimer

    NASA Astrophysics Data System (ADS)

    Azadi, Sam; Cohen, R. E.

    2015-09-01

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory with perturbative triples at the complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  8. Monte Carlo study of a Cyberknife stereotactic radiosurgery system

    SciTech Connect

    Araki, Fujio

    2006-08-15

    This study investigated small-field dosimetry for a Cyberknife stereotactic radiosurgery system using Monte Carlo simulations. The EGSnrc/BEAMnrc Monte Carlo code was used to simulate the Cyberknife treatment head, and the DOSXYZnrc code was implemented to calculate central axis depth-dose curves, off-axis dose profiles, and relative output factors for various circular collimator sizes of 5 to 60 mm. Water-to-air stopping power ratios necessary for clinical reference dosimetry of the Cyberknife system were also evaluated by Monte Carlo simulations. Additionally, a beam quality conversion factor, k{sub Q}, for the Cyberknife system was evaluated for cylindrical ion chambers with different wall material. The accuracy of the simulated beam was validated by agreement within 2% between the Monte Carlo calculated and measured central axis depth-dose curves and off-axis dose profiles. The calculated output factors were compared with those measured by a diode detector and an ion chamber in water. The diode output factors agreed within 1% with the calculated values down to a 10 mm collimator. The output factors with the ion chamber decreased rapidly for collimators below 20 mm. These results were confirmed by the comparison to those from Monte Carlo methods with voxel sizes and materials corresponding to both detectors. It was demonstrated that the discrepancy in the 5 and 7.5 mm collimators for the diode detector is due to the water nonequivalence of the silicon material, and the dose fall-off for the ion chamber is due to its large active volume against collimators below 20 mm. The calculated stopping power ratios of the 60 mm collimator from the Cyberknife system (without a flattening filter) agreed within 0.2% with those of a 10x10 cm{sup 2} field from a conventional linear accelerator with a heavy flattening filter and the incident electron energy, 6 MeV. 
The difference in the stopping power ratios between 5 and 60 mm collimators was within 0.5% at a 10 cm depth in water. Furthermore, k{sub Q} values for the Cyberknife system were in agreement within 0.3% with those of the conventional 6 MV-linear accelerator for the cylindrical ion chambers with different wall material.

  9. Independent pixel and Monte Carlo estimates of stratocumulus albedo

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN

    1994-01-01

    Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. 
This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller-scale variability, where the radiative transfer is more three-dimensional, contributes less to the plane-parallel albedo bias than the larger scales, which are more variable. The lack of significant three-dimensional effects also relies on the assumption of a relatively simple geometry. Even with these assumptions, the independent pixel approximation is accurate only for fluxes averaged over large horizontal areas, many photon mean free paths in diameter, and not for local radiance values, which depend strongly on the interaction between neighboring cloud elements.
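
    The sign and origin of the plane-parallel bias can be illustrated with a toy model (lognormal optical depths in place of the bounded-cascade field; all numbers hypothetical): because albedo is a concave function of optical depth, the albedo of the mean cloud exceeds the mean of the pixel albedos (Jensen's inequality), and averaging independent-pixel albedos removes that bias.

```python
import random, statistics

random.seed(5)

def albedo(tau, g=0.85):
    """Two-stream-style conservative-scattering albedo (illustrative form)."""
    tau_eff = (1.0 - g) * tau
    return tau_eff / (tau_eff + 2.0 / 3.0)

# Horizontally variable liquid water -> a variable optical depth per pixel.
taus = [random.lognormvariate(2.0, 0.8) for _ in range(100_000)]
mean_tau = statistics.fmean(taus)

plane_parallel = albedo(mean_tau)                    # uniform-cloud assumption
independent_pixel = statistics.fmean(albedo(t) for t in taus)

print(f"plane-parallel bias = {plane_parallel - independent_pixel:+.3f}")
```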

  10. A Bayesian Monte Carlo Markov Chain Method for the Analysis of GPS Position Time Series

    NASA Astrophysics Data System (ADS)

    Olivares, German; Teferle, Norman

    2013-04-01

    Position time series from continuous GPS are an essential tool in many areas of the geosciences and are, for example, used to quantify long-term movements due to processes such as plate tectonics or glacial isostatic adjustment. It is now widely established that the stochastic properties of the time series do not follow a random behavior and this affects parameter estimates and associated uncertainties. Consequently, a comprehensive knowledge of the stochastic character of the position time series is crucial in order to obtain realistic error bounds and for this a number of methods have already been applied successfully. We present a new Bayesian Monte Carlo Markov Chain (MCMC) method to simultaneously estimate the model and the stochastic parameters of the noise in GPS position time series. This method provides a sample of the likelihood function and thereby, using Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. One advantage of the MCMC method is that the computational time increases linearly with the number of parameters, hence being very suitable for dealing with a high number of parameters. A second advantage is that the properties of the estimator used in this method do not depend on the stationarity of the time series. At least on a theoretical level, no other estimator has been shown to have this feature. Furthermore, the MCMC method provides a means to detect multi-modality of the parameter estimates. We present an evaluation of the new MCMC method through comparison with widely used optimization and empirical methods for the analysis of GPS position time series.
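
    A minimal Metropolis sketch of the idea (a white-noise-only toy with a linear rate, not the full trend-plus-colored-noise model of GPS series): the deterministic parameter b and the noise parameter sigma are sampled from one chain, so their estimates and uncertainties come out together.

```python
import math, random, statistics

random.seed(2)
true_b, true_sigma = 0.05, 1.0               # hypothetical rate and white noise
t = list(range(100))
y = [true_b * ti + random.gauss(0.0, true_sigma) for ti in t]

def log_like(b, sigma):
    if sigma <= 0.0:
        return float("-inf")
    rss = sum((yi - b * ti) ** 2 for ti, yi in zip(t, y))
    return -len(y) * math.log(sigma) - rss / (2.0 * sigma ** 2)

b, sigma, chain = 0.0, 0.5, []
ll = log_like(b, sigma)
for i in range(20_000):
    b_new = b + random.gauss(0.0, 0.005)      # random-walk proposals
    s_new = sigma + random.gauss(0.0, 0.05)
    ll_new = log_like(b_new, s_new)
    if ll_new >= ll or random.random() < math.exp(ll_new - ll):
        b, sigma, ll = b_new, s_new, ll_new   # Metropolis acceptance
    if i >= 2_000:                            # discard burn-in
        chain.append((b, sigma))

b_hat = statistics.fmean(c[0] for c in chain)
b_err = statistics.stdev(c[0] for c in chain)
print(f"b = {b_hat:.4f} +/- {b_err:.4f}")
```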

  11. CAD based Monte Carlo method: Algorithms for geometric evaluation in support of Monte Carlo radiation transport calculation

    NASA Astrophysics Data System (ADS)

    Wang, Mengkuo

    In particle transport computations, the Monte Carlo simulation method is a widely used algorithm. There are several Monte Carlo codes available that perform particle transport simulations. However, the geometry packages and geometric modeling capability of Monte Carlo codes are limited, as they cannot handle complicated geometries made up of complex surfaces. Previous research exists that takes advantage of the modeling capabilities of CAD software. The two major approaches are the converter approach and the CAD-engine-based approach. By carefully analyzing the strategies and algorithms of these two approaches, the CAD-engine-based approach has been identified as the more promising approach. Though currently the performance of this approach is not satisfactory, there is room for improvement. The development and implementation of an improved CAD-based approach is the focus of this thesis. Algorithms to accelerate the CAD-engine-based approach are studied. The major acceleration algorithm is the oriented bounding box algorithm, which is used in computer graphics. The difference in application between computer graphics and particle transport has been considered and the algorithm has been modified for particle transport. The major work of this thesis has been the development of the MCNPX/CGM code and the testing, benchmarking and implementation of the acceleration algorithms. MCNPX is a Monte Carlo code and CGM is a CAD geometry engine. A facet representation of the geometry, generated from the CAD model, provided the least slowdown of the Monte Carlo code. The oriented bounding box algorithm was the fastest acceleration technique adopted for this work. The slowdown of MCNPX/CGM relative to MCNPX was reduced to a factor of 3 when the facet model is used. MCNPX/CGM has been successfully validated against test problems in medical physics and a fusion energy device. 
MCNPX/CGM gives exactly the same results as the standard MCNPX when an MCNPX geometry model is available. For the case of the complicated fusion device---the stellarator---the MCNPX/CGM results closely match a one-dimensional model calculation performed by the ARIES team.
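
    The bounding-box test at the heart of such acceleration schemes can be sketched as follows (the axis-aligned "slab" method; an oriented box additionally transforms the ray into the box frame first). Expensive intersection checks against the CAD facets are only performed when this cheap test passes.

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Return True if the ray origin + t*direction (t >= 0) meets the box."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:                 # ray parallel to this slab pair
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:                 # slab intervals no longer overlap
            return False
    return True

print(ray_hits_box((0, 0, 0), (1, 1, 0), (2, 1, -1), (4, 5, 1)))   # True
print(ray_hits_box((0, 0, 0), (1, 0, 0), (2, 1, -1), (4, 5, 1)))   # False
```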

  12. Properties of reactive oxygen species by quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo

    2014-07-01

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N^3 - N^4, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.

  13. CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc

    SciTech Connect

    Buckley, Lesley A.; Kawrakow, I.; Rogers, D.W.O.

    2004-12-01

    CSnrc, a new user-code for the EGSnrc Monte Carlo system, is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code, CAVRZnrc, and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor P{sub cel} in high-energy photon and electron beams. Current dosimetry protocols base the value of P{sub cel} on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on P{sub cel}, much lower than those previously published. The current values of P{sub cel} compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.
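
    The variance-reduction principle behind correlated sampling can be demonstrated on a toy ratio estimate (illustrative response functions, not the EGSnrc implementation): reusing the same random points for two nearly identical integrals makes their fluctuations cancel in the ratio.

```python
import random, statistics

random.seed(9)

def f(x):                        # response in "geometry A" (illustrative)
    return x ** 2

def g(x):                        # "geometry B": a small perturbation of A
    return x ** 2 + 0.02 * x

def ratio(correlated, n=2_000):
    xs_f = [random.random() for _ in range(n)]
    xs_g = xs_f if correlated else [random.random() for _ in range(n)]
    return (statistics.fmean(f(x) for x in xs_f)
            / statistics.fmean(g(x) for x in xs_g))

# Repeat each estimate to compare the spread of the two strategies.
corr = [ratio(True) for _ in range(200)]
indep = [ratio(False) for _ in range(200)]
print(f"std(correlated)  = {statistics.stdev(corr):.1e}")
print(f"std(independent) = {statistics.stdev(indep):.1e}")
```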

  14. Properties of reactive oxygen species by quantum Monte Carlo.

    PubMed

    Zen, Andrea; Trout, Bernhardt L; Guidoni, Leonardo

    2014-07-01

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distributions, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology correctly describes the geometrical and electronic properties of these systems, through compact but fully optimised basis sets and with a computational cost that scales as N(3) - N(4), where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles. PMID:25005287

  15. Monte Carlo simulation of energy spectra for (123)I imaging.

    PubMed

    Tanaka, Minoru; Uehara, Shuzo; Kojima, Akihiro; Matsumoto, Masanori

    2007-08-01

    (123)I is a radionuclide frequently used in nuclear medicine imaging. The image formed by the 159 keV photopeak includes a considerable scatter component due to high-energy gamma-ray emission. In order to evaluate the fraction of scattered photons, a Monte Carlo simulation of a scintillation camera used for (123)I imaging was undertaken. The Monte Carlo code consists of two modules: the HEXAGON code modelled the collimator with its complex hexagonal geometry, and the NAI code modelled the NaI detector system including the back compartment. The simulation was carried out for various types of collimators under two separate conditions of source location, in air and in water. Energy spectra of (123)I for every pixel (matrix size = 256 x 256) were obtained by separating the unscattered from the scattered and the penetrated photons. The calculated energy spectra (cps MBq(-1) keV(-1)) agreed with the measured spectra within approximately 20% for three different collimators. The difference in sensitivities (cps MBq(-1)) for the 143-175 keV window was less than 10% between the simulation and the experiment. The partial sensitivities for the scattered and the unscattered components were obtained. The simulated fractions of unscattered to total photons were 0.46 for LEHR, 0.54 for LEGP and 0.90 for MEGP for the 'in air' set-up, and 0.35, 0.40 and 0.68 for the 'in water' set-up, respectively. The Monte Carlo simulation presented in this work enabled us to investigate the design of a new collimator optimal for (123)I scintigraphy. PMID:17634641

  16. Monte Carlo simulation of light propagation in the adult brain

    NASA Astrophysics Data System (ADS)

    Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter

    2004-06-01

    When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) with a voxel resolution of 0.8 x 0.8 x 0.8 mm3, for three different pairs of T1/T2 values each. The MRI data were used to define the light absorption and dispersion coefficients of the material in each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was further increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extracerebral contamination are included.
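    The photon transport scheme underlying such simulations can be sketched with a deliberately simplified model. The following is a toy depth-only random walk in a homogeneous medium, not the voxelized MRI-based code of the abstract: photons take exponentially distributed free paths, rescatter isotropically, and are absorbed with probability mu_a/mu_t at each interaction. The coefficients and the 2 mm "shallow/deep" boundary are illustrative values only.

    ```python
    import math
    import random

    def simulate_photons(n, mu_a=0.1, mu_s=10.0, boundary=2.0, seed=7):
        """Toy 1-D photon walk: exponential free paths (coefficients in 1/mm),
        isotropic rescattering, absorption with probability mu_a/mu_t per
        collision.  Returns the fractions absorbed above and below `boundary`
        (mm) and the fraction escaping back out of the surface."""
        rng = random.Random(seed)
        mu_t = mu_a + mu_s
        shallow = deep = escaped = 0
        for _ in range(n):
            z, cos_t = 0.0, 1.0                 # launch at surface, heading inward
            while True:
                z += cos_t * (-math.log(1.0 - rng.random()) / mu_t)  # free path
                if z < 0.0:                     # back out through the surface
                    escaped += 1
                    break
                if rng.random() < mu_a / mu_t:  # absorbed at this collision
                    if z < boundary:
                        shallow += 1
                    else:
                        deep += 1
                    break
                cos_t = 2.0 * rng.random() - 1.0  # isotropic rescatter
        return shallow / n, deep / n, escaped / n

    print(simulate_photons(20000))
    ```

    A real NIRS simulation adds voxel-dependent optical properties, 3-D directions, and detector geometry, but the event loop (free path, boundary test, absorb-or-scatter) has this same shape.
    
    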

  17. Properties of reactive oxygen species by quantum Monte Carlo

    SciTech Connect

    Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo

    2014-07-07

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distributions, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology correctly describes the geometrical and electronic properties of these systems, through compact but fully optimised basis sets and with a computational cost that scales as N{sup 3} − N{sup 4}, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles.

  18. NOTE: MCDE: a new Monte Carlo dose engine for IMRT

    NASA Astrophysics Data System (ADS)

    Reynaert, N.; De Smedt, B.; Coghe, M.; Paelinck, L.; Van Duyse, B.; De Gersem, W.; De Wagter, C.; De Neve, W.; Thierens, H.

    2004-07-01

    A new accurate Monte Carlo code for IMRT dose computations, MCDE (Monte Carlo dose engine), is introduced. MCDE is based on BEAMnrc/DOSXYZnrc and consequently on the accurate EGSnrc electron transport. DOSXYZnrc is reprogrammed as a component module for BEAMnrc. In this way both codes are interconnected elegantly, while maintaining the BEAM structure; only minimal changes to BEAMnrc.mortran are necessary. The treatment head of the Elekta SLiplus linear accelerator is modelled in detail. CT grids consisting of up to 200 slices of 512 × 512 voxels can be introduced and up to 100 beams can be handled simultaneously. The beams and CT data are imported from the treatment planning system GRATIS via a DICOM interface. To enable the handling of up to 50 × 106 voxels the system was programmed in Fortran95 with dynamic memory management. All region-dependent arrays (dose, statistics, transport arrays) were redefined. A scoring grid was introduced and superimposed on the geometry grid, to be able to limit the number of scoring voxels. The whole system uses approximately 200 MB of RAM and runs on a PC cluster consisting of 38 1.0 GHz processors. A set of in-house scripts handles the parallelization and the centralization of the Monte Carlo calculations on a server. As an illustration of MCDE, a clinical example is discussed and compared with collapsed cone convolution calculations. At present, the system is still rather slow and is intended to be a tool for reliable verification of IMRT treatment planning in the presence of tissue inhomogeneities such as air cavities.

  19. Improved Convergence Rate of Multi-Group Scattering Moment Tallies for Monte Carlo Neutron Transport Codes

    NASA Astrophysics Data System (ADS)

    Nelson, Adam

    Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions that do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive because of the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved tallying method is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows the use of a track-length estimation process, potentially offering even further improvement in tallying efficiency. Unfortunately, to produce the needed distributions, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and must therefore be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than currently used techniques. The improved method has been implemented in a code system comprising a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. The method is then tested in a pin-cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure of merit for generating scattering moment matrices and fission energy spectra was significantly improved.
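    The efficiency gain from exploiting a known outgoing distribution can be shown in miniature. This is an illustrative toy, not the NDPP/OpenMC scheme: for a hypothetical scattering law p(mu) = (1 + A*mu)/2, an analog tally scores P1(mu) = mu at each sampled cosine, while an expected-value tally scores the analytic moment A/3 of the known distribution at every event, which in this one-moment toy has zero variance.

    ```python
    import random

    A = 0.6  # anisotropy of the toy scattering law p(mu) = (1 + A*mu)/2

    def sample_mu(rng):
        # Rejection sampling of the outgoing cosine from p(mu) on [-1, 1].
        while True:
            mu = 2.0 * rng.random() - 1.0
            if rng.random() * (1.0 + A) < 1.0 + A * mu:
                return mu

    def tally_p1(n_events, expected_value, seed=3):
        """Tally the first Legendre moment <P1> = <mu> over scattering events,
        either analog (score the sampled mu) or expected-value (score the
        analytic mean A/3 of the known outgoing distribution)."""
        rng = random.Random(seed)
        scores = []
        for _ in range(n_events):
            mu = sample_mu(rng)          # the event itself happens either way
            scores.append(A / 3.0 if expected_value else mu)
        mean = sum(scores) / n_events
        var = sum((s - mean) ** 2 for s in scores) / (n_events - 1)
        return mean, var

    print(tally_p1(5000, expected_value=False))  # noisy analog estimate of A/3
    print(tally_p1(5000, expected_value=True))   # zero variance in this toy
    ```

    In a real transport code the outgoing energy/angle distribution depends on the collision state, so the analytic moments must be pre-integrated, which is exactly the role the abstract assigns to the pre-processing code.
    
    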

  20. Variance reduction in Monte Carlo analysis of rarefied gas diffusion

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The present analysis uses the Monte Carlo method to solve the problem of rarefied diffusion between parallel walls. The diffusing molecules are evaporated or emitted from one of two parallel walls and diffuse through another molecular species. The analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced, so that the Monte Carlo result has a much smaller error.
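    The trick of biasing transition probabilities while adjusting the payoff so the expectation is unchanged is ordinary importance sampling, and a minimal sketch (not Perlmutter's actual problem) makes it concrete: a random walk with a true step-right probability of 0.3 is simulated with a biased probability of 0.5, and each step multiplies the payoff weight by the likelihood ratio.

    ```python
    import random

    def walk(rng, p_sim, p_true, barrier=4):
        """One random walk simulated with biased step probability p_sim;
        returns the likelihood-weighted payoff (weight if the walk reaches
        +barrier before -barrier, else 0)."""
        pos, weight = 0, 1.0
        while abs(pos) < barrier:
            if rng.random() < p_sim:
                pos += 1
                weight *= p_true / p_sim            # reweight a right step
            else:
                pos -= 1
                weight *= (1.0 - p_true) / (1.0 - p_sim)  # reweight a left step
        return weight if pos == barrier else 0.0

    def estimate(n, p_sim, p_true=0.3, seed=11):
        rng = random.Random(seed)
        scores = [walk(rng, p_sim, p_true) for _ in range(n)]
        mean = sum(scores) / n
        var = sum((s - mean) ** 2 for s in scores) / (n - 1)
        return mean, var

    print(estimate(4000, p_sim=0.3))   # analog: unbiased but noisy
    print(estimate(4000, p_sim=0.5))   # biased transitions, reweighted payoff
    ```

    Both estimators target the same hitting probability; the biased walk visits the rare outcome far more often, so the weighted estimator has a much smaller variance.
    
    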

  1. Studying the information content of TMDs using Monte Carlo generators

    SciTech Connect

    Avakian, H.; Matevosyan, H.; Pasquini, B.; Schweitzer, P.

    2015-02-05

    Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this topical review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications.

  2. Application of Monte Carlo methods in tomotherapy and radiation biophysics

    NASA Astrophysics Data System (ADS)

    Hsiao, Ya-Yun

    Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published experimental and theoretical studies for 60Co gamma-rays and low-energy x-rays. The reported studies provide new information about the potential biological consequences of diagnostic x-rays and selected gamma-emitting radioisotopes used in brachytherapy for the treatment of cancer. The proposed methodology is computationally efficient and may also be useful in proton therapy, space applications or internal dosimetry.

  3. Monte Carlo parameterization in the VirtualLeaf framework

    NASA Astrophysics Data System (ADS)

    Dzhurakhalov, A.; De Vos, D.; Beemster, G.; Broeckhove, J.

    2015-09-01

    The recently developed simulation framework VirtualLeaf uses Metropolis Monte Carlo dynamics for studying plant tissue morphogenesis. Minimizing the energy of the tissue is done by an energy evaluation-only method. We developed a more robust convergence criterion for the energy minimization method, suited to multivariable and complex systems where the use of a gradient norm is impossible. The proposed criterion is based on checking energy changes in a sliding window of successive energy steps against a threshold value. The advantages of the sliding-window criterion are discussed and results obtained by this method are presented. The impact of the choice of threshold value on the energy minimization has also been studied.
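    A sliding-window stopping rule of this kind is simple to state in code. The following is a generic sketch under assumed semantics (declare convergence when the spread of the last few energies drops below a threshold), not the VirtualLeaf implementation; the trace values are hypothetical.

    ```python
    def converged(energies, window=5, threshold=1e-3):
        """Sliding-window stopping rule: the minimization is deemed converged
        when the spread of the last `window` energies falls below `threshold`."""
        if len(energies) < window:
            return False
        recent = energies[-window:]
        return max(recent) - min(recent) < threshold

    # Hypothetical trace of tissue energies during a Metropolis minimization.
    trace = [10.0, 4.0, 2.0, 1.2, 1.05, 1.001, 1.0005, 1.0003, 1.0002, 1.0001]
    flags = [converged(trace[:i + 1]) for i in range(len(trace))]
    print(flags.index(True))  # first step at which the window criterion fires
    ```

    Unlike a gradient-norm test, this rule needs only the energy values themselves, which matches the energy evaluation-only setting described in the abstract.
    
    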

  4. Monte Carlo studies on potentiometric titration of (carboxymethyl)cellulose.

    PubMed

    Nishio, T

    1996-01-01

    Monte Carlo simulations of the potentiometric titration are carried out for (carboxymethyl)cellulose in aqueous salt solutions by a previously developed method. A nearly elliptic cylinder with spherical ionizable groups is assumed as a model of the (carboxymethyl)cellulose molecule. Spherical charges with a hard-core potential are adopted as mobile hydrated ions. A fairly satisfactory agreement of the titration curves with the experimental data is achieved by using reasonable molecular dimensions. The dependence of the calculated titration profiles on the molecular model and the characteristics of the system is discussed. PMID:17023342

  5. On the efficiency of algorithms of Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Budak, V. P.; Zheltov, V. S.; Lubenchenko, A. V.; Shagalov, O. V.

    2015-11-01

    A numerical comparison of algorithms for solving the radiative transfer equation by the Monte Carlo method is performed for direct simulation and local estimation. The problem of radiative transfer through a turbid medium slab is considered in both the scalar and vector cases, including reflections from the boundaries of the medium. The calculations are performed over a wide range of medium parameters. It is shown that, for the same accuracy, the calculation time of the local estimation method is one to two orders of magnitude smaller than that of direct simulation.
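    The distinction between direct simulation and local (expected-value) estimation can be illustrated with a toy slab-transmission problem. This sketch is not the algorithm of the abstract: a generic unbiased scheme is used in which, at the start of every flight heading toward the far face, the history scores the probability exp(-mu_t*d) of leaking on that flight, instead of scoring 1 only when a crossing actually happens.

    ```python
    import math
    import random

    MU_T, MU_S, L = 1.0, 0.5, 2.0   # total/scattering coefficients, slab width

    def transmission(n, local, seed=5):
        """Analog counting vs. local (expected-leakage) estimation of the
        transmission of an isotropically scattering slab."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            z, cos_t = 0.0, 1.0
            while True:
                if local and cos_t > 0.0:
                    # score the expected forward leakage of this flight
                    total += math.exp(-MU_T * (L - z) / cos_t)
                z += cos_t * (-math.log(1.0 - rng.random()) / MU_T)
                if z >= L:
                    if not local:
                        total += 1.0        # analog: count the actual crossing
                    break
                if z <= 0.0 or rng.random() > MU_S / MU_T:
                    break                   # escaped backwards or absorbed
                cos_t = 2.0 * rng.random() - 1.0   # isotropic rescatter
        return total / n

    print(transmission(20000, local=False))
    print(transmission(20000, local=True))
    ```

    Both estimators are unbiased for the same transmission, but the local estimator contributes a score on every flight rather than only on the rare crossing events, which is the source of its efficiency advantage.
    
    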

  6. Monte Carlo modelling of a simple accident dosemeter.

    PubMed

    Devine, R T

    2005-01-01

    A simple dosemeter made of a sulphur tablet, bare and cadmium-covered indium foils and a cadmium-covered copper foil has been modelled using MCNP5. Studies of the model without phantoms or other confounding factors have shown that the cross sections and fluence-to-dose factors generated by the Monte Carlo method agree with those generated by analytic expressions for the high-energy component. In this study, the effect of location on phantoms is studied and the analysis is extended to low and intermediate energies. The activities expected from exposure to four critical assemblies on a phantom are calculated and compared with observations. PMID:16604683

  7. Application of Direct Simulation Monte Carlo to Satellite Contamination Studies

    NASA Technical Reports Server (NTRS)

    Rault, Didier F. G.; Woronwicz, Michael S.

    1995-01-01

    A novel method is presented to estimate contaminant levels around spacecraft and satellites of arbitrarily complex geometry. The method uses a three-dimensional direct simulation Monte Carlo algorithm to characterize the contaminant cloud surrounding the space platform, and a computer-assisted design preprocessor to define the space-platform geometry. The method is applied to the Upper Atmosphere Research Satellite to estimate the contaminant flux incident on the optics of the halogen occultation experiment (HALOE) telescope. Results are presented in terms of contaminant cloud structure, molecular velocity distribution at HALOE aperture, and code performance.

  8. A generalized hard-sphere model for Monte Carlo simulation

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.; Hash, David B.

    1993-01-01

    A new molecular model, called the generalized hard-sphere, or GHS model, is introduced. This model contains, as a special case, the variable hard-sphere model of Bird (1981) and is capable of reproducing all of the analytic viscosity coefficients available in the literature that are derived for a variety of interaction potentials incorporating attraction and repulsion. In addition, a new procedure for determining interaction potentials in a gas mixture is outlined. Expressions needed for implementing the new model in the direct simulation Monte Carlo methods are derived. This development makes it possible to employ interaction models that have the same level of complexity as used in Navier-Stokes calculations.

  9. Monte Carlo simulations of charge transport in heterogeneous organic semiconductors

    NASA Astrophysics Data System (ADS)

    Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta

    2015-03-01

    The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.

  10. Exploring mass perception with Markov chain Monte Carlo.

    PubMed

    Cohen, Andrew L; Ross, Michael G

    2009-12-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal interparticipant differences and a qualitative distinction between the perception of 1:1 and 1:2 ratios. The results strongly suggest that participants' perceptions of 1:1 collisions are described by simple heuristics. The evidence for 1:2 collisions favors heuristic perception models that are sensitive to the sign but not the magnitude of perceived mass differences. PMID:19968439
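    The sampling engine behind such studies is the standard Metropolis algorithm, which draws from a target distribution known only up to a normalizing constant. A minimal generic sketch (a Gaussian target is used purely for illustration; the psychological variant replaces the analytic density with human accept/reject choices):

    ```python
    import math
    import random

    def metropolis(log_p, x0, n, step=1.0, seed=2):
        """Generic random-walk Metropolis sampler for an unnormalized
        log-density log_p."""
        rng = random.Random(seed)
        x, samples = x0, []
        for _ in range(n):
            proposal = x + rng.gauss(0.0, step)
            # accept with probability min(1, p(proposal)/p(x))
            if rng.random() < math.exp(min(0.0, log_p(proposal) - log_p(x))):
                x = proposal
            samples.append(x)
        return samples

    # Target: standard normal, unnormalized; the constant is never needed.
    chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n=20000)
    burned = chain[2000:]                      # discard burn-in
    print(sum(burned) / len(burned))           # sample mean of the chain
    ```

    The chain's stationary distribution is the target, so after burn-in its sample mean and variance approximate those of the standard normal.
    
    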

  11. AVATAR -- Automatic variance reduction in Monte Carlo calculations

    SciTech Connect

    Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.

    1997-05-01

    AVATAR{trademark} (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application, Justine{trademark}, is a superset of MCNP{trademark} that automatically invokes THREEDANT{trademark} for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
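    The weight windows that AVATAR computes drive a standard splitting/roulette game, which is easy to show in isolation. This is a generic sketch of the weight-window mechanic, not AVATAR's adjoint-derived windows; the window bounds and starting weight are illustrative.

    ```python
    import random

    def apply_weight_window(weight, rng, w_low=0.5, w_high=2.0):
        """Split particles above the window, roulette those below; the
        expected total weight is preserved in both branches."""
        if weight > w_high:
            n = int(weight / w_high) + 1                    # split into n copies
            return [weight / n] * n
        if weight < w_low:
            survival = weight / w_low                       # Russian roulette
            return [w_low] if rng.random() < survival else []
        return [weight]

    rng = random.Random(9)
    trials = 50000
    total = sum(sum(apply_weight_window(0.1, rng)) for _ in range(trials))
    print(total / trials)   # close to 0.1: roulette preserves expected weight
    ```

    Splitting is exactly weight-preserving, and roulette preserves weight only in expectation, which is why the tally mean is unchanged while the population is concentrated in important regions.
    
    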

  12. MCNP{trademark} Monte Carlo: A precis of MCNP

    SciTech Connect

    Adams, K.J.

    1996-06-01

    MCNP{trademark} is a general purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600 page manual which is augmented by numerous Los Alamos technical reports which detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.

  13. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

  14. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.

  15. Quantum Monte Carlo calculation of reduced density matrices

    NASA Astrophysics Data System (ADS)

    Wagner, Lucas

    2012-02-01

    Quantum Monte Carlo (QMC) methods offer an efficient way to approximate the interacting ground state and some excited states of realistic model Hamiltonians based on the fundamental Coulomb interaction between electrons and nuclei. Many highly accurate results have been obtained using this method; however, it is often a challenge to extract the important correlations that the QMC wave function contains. I will describe some new results using reduced density matrices (RDMs) to understand the electron correlation in the many-body wave function. The RDMs are both informative for describing correlation and pragmatically useful for further improving the variational wave function.

  16. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

    MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  17. Analysis and Monte Carlo simulation of luminescent solar concentrators

    SciTech Connect

    Heinaemaeki, A.

    1985-01-01

    In the study the applicability of inorganic materials for luminescent solar concentrators (LSCs) is investigated. The materials as well as the physical processes related to absorption and fluorescence are expounded and some examples of materials possibly suitable for LSCs are given. The operation of an LSC and the possible loss sources are considered in detail. Several efficiencies are defined and discussed. The Monte Carlo method is used in simulating the LSC performance to compare various LSCs and to study how the plate size, shape, and dye concentration affect the efficiencies.

  18. Monte Carlo approaches to the few-nucleon continuum

    SciTech Connect

    Schiavilla, R. |; Carlson, J.; Wiringa, R.B.

    1994-08-01

    Variational and Green's function Monte Carlo methods are reviewed as applied to the study of the few-nucleon continuum at low and intermediate energies. Results recently obtained for the radiative and weak capture reactions n + {sup 3}He {yields} {sup 4}He + {gamma} and p + {sup 3}He {yields} {sup 4}He + e{sup +} + {nu}{sub e}, the {sup 5}He P-wave resonances, and the inclusive and exclusive electron scattering reactions on {sup 3}H and the helium isotopes are summarized.

  19. Current status of the PSG Monte Carlo neutron transport code

    SciTech Connect

    Leppaenen, J.

    2006-07-01

    PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)

  20. Morphological evolution of growing crystals - A Monte Carlo simulation

    NASA Technical Reports Server (NTRS)

    Xiao, Rong-Fu; Alexander, J. Iwan D.; Rosenberger, Franz

    1988-01-01

    The combined effects of nutrient diffusion and surface kinetics on the crystal morphology were investigated using a Monte Carlo model to simulate the evolving morphology of a crystal growing from a two-component gaseous nutrient phase. The model combines nutrient diffusion, based on a modified diffusion-limited aggregation process, with anisotropic surface-attachment kinetics and surface diffusion. A variety of conditions, ranging from kinetic-controlled to diffusion-controlled growth, were examined. Successive transitions from compact faceted (dominant surface kinetics) to open dendritic morphologies (dominant volume diffusion) were obtained.
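    The diffusion-limited side of such growth models can be sketched with plain on-lattice DLA. This is a minimal illustration, not the paper's modified model with anisotropic attachment kinetics and surface diffusion: walkers are released on a ring around the cluster, random-walk until they touch it (stick), and are killed if they wander too far away.

    ```python
    import math
    import random

    def grow_dla(n_particles, seed=4):
        """Minimal on-lattice diffusion-limited aggregation around a seed at
        the origin; returns the set of occupied lattice sites."""
        rng = random.Random(seed)
        cluster = {(0, 0)}
        r_max = 0
        while len(cluster) < n_particles + 1:
            ang = rng.uniform(0.0, 2.0 * math.pi)
            r_launch = r_max + 4                       # ring outside the cluster
            x = round(r_launch * math.cos(ang))
            y = round(r_launch * math.sin(ang))
            while True:
                dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
                x, y = x + dx, y + dy
                if x * x + y * y > (r_launch + 8) ** 2:
                    break                              # wandered away: kill
                if any((x + ex, y + ey) in cluster
                       for ex, ey in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                    cluster.add((x, y))                # touched the cluster: stick
                    r_max = max(r_max, round(math.hypot(x, y)))
                    break
        return cluster

    cluster = grow_dla(60)
    print(len(cluster))  # 61 sites: the seed plus 60 stuck walkers
    ```

    In this purely diffusion-controlled limit the aggregate is open and dendritic; adding a sticking probability or surface-diffusion moves, as in the paper, pushes the morphology toward compact facets.
    
    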

  1. Monte Carlo simulation of nonadiabatic expansion in cometary atmospheres - Halley

    NASA Technical Reports Server (NTRS)

    Hodges, R. Richard, Jr.

    1990-01-01

    Monte Carlo methods developed for the characterization of velocity-dependent collision processes and ballistic transport in planetary exospheres form the basis of the present computer simulation of icy comet atmospheres, which iteratively undertakes the simultaneous determination of velocity distributions for five neutral species (water, together with suprathermal OH, H2, O, and H) in a flow regime varying from the hydrodynamic to the ballistic. Experimental data from the neutral mass spectrometer carried by Giotto during its March 1986 encounter with Halley are compared with a model atmosphere.

  2. A Post-Monte-Carlo Sensitivity Analysis Code

    Energy Science and Technology Software Center (ESTSC)

    2000-04-04

    SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e. with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
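    The ranking step can be sketched with a simple correlation-based measure computed from the Monte Carlo sample. This is a generic illustration, not SATOOL's algorithm: the three-input `model` is hypothetical, and inputs are ranked by the share of output variance their squared correlation explains.

    ```python
    import random

    def model(x1, x2, x3):
        # Hypothetical response: x1 dominates, x2 is mild, x3 is noise-level.
        return 10.0 * x1 + 2.0 * x2 + 0.1 * x3

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sxx = sum((a - mx) ** 2 for a in xs)
        syy = sum((b - my) ** 2 for b in ys)
        return sxy / (sxx * syy) ** 0.5

    rng = random.Random(6)
    inputs = [[rng.gauss(0.0, 1.0) for _ in range(2000)] for _ in range(3)]
    outputs = [model(a, b, c) for a, b, c in zip(*inputs)]

    # Rank inputs by squared correlation with the output (variance explained).
    ranking = sorted(range(3), key=lambda i: -corr(inputs[i], outputs) ** 2)
    print(ranking)  # index 0 first: x1 dominates the output variance
    ```

    With independent inputs, the squared correlations approximate each variable's fractional contribution to the output variance, so the sorted list is exactly the kind of importance ranking the abstract describes.
    
    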

  3. Monte Carlo simulation of a noisy quantum channel with memory.

    PubMed

    Akhalwaya, Ismail; Moodley, Mervlyn; Petruccione, Francesco

    2015-10-01

    The classical capacity of quantum channels is well understood for channels with uncorrelated noise. For the case of correlated noise, however, there are still open questions. We calculate the classical capacity of a forgetful channel constructed by Markov switching between two depolarizing channels. Techniques have previously been applied to approximate the output entropy of this channel and thus its capacity. In this paper, we use a Metropolis-Hastings Monte Carlo approach to numerically calculate the entropy. The algorithm is implemented in parallel and its performance is studied and optimized. The effects of memory on the capacity are explored and previous results are confirmed to higher precision. PMID:26565361
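
A minimal Metropolis-Hastings sampler of the kind underlying such estimates can be sketched as follows. The Gaussian random-walk proposal, the toy target (an unnormalized standard normal), and all names are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def metropolis_hastings(log_p, x0, steps, step_size=1.0, seed=1):
    """Metropolis-Hastings with a symmetric Gaussian random-walk proposal."""
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, step_size)
        lpp = log_p(xp)
        if math.log(rng.random() + 1e-300) < lpp - lp:  # accept w.p. min(1, p'/p)
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy target: an (unnormalized) standard normal, log p(x) = -x^2 / 2
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(chain) / len(chain)
```

Because only ratios p'/p enter the acceptance test, the sampler never needs the normalizing constant, which is what makes this approach practical for channel-output distributions known only up to normalization.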

  4. Monte-Carlo histories of refractory interstellar dust

    NASA Technical Reports Server (NTRS)

    Clayton, D. D.; Liffman, K.

    1988-01-01

    Monte Carlo histories of 6 × 10⁶ individual dust particles injected uniformly from stars into the interstellar medium during a 6 × 10⁹ yr history are calculated. The particles are given a two-phase internal structure of successive thermal condensates, and are distributed in initial radius as a⁻³ for a between 0.01 and 0.1 micron. The evolution of this system illustrates the distinction between several different lifetimes for interstellar dust. Most particles are destroyed, but some grow in size. Several important consequences for interstellar dust are described.
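
The quoted a⁻³ size distribution on [0.01, 0.1] μm can be drawn by inverse-transform sampling of its closed-form CDF. The sketch below is a generic illustration of that sampling step, not the authors' code.

```python
import random

def sample_radius(rng, a_min=0.01, a_max=0.1):
    """Inverse-transform draw from n(a) proportional to a**-3 on [a_min, a_max].
    CDF: F(a) = (a_min**-2 - a**-2) / (a_min**-2 - a_max**-2)."""
    u = rng.random()
    inv_sq = a_min ** -2 - u * (a_min ** -2 - a_max ** -2)
    return inv_sq ** -0.5

rng = random.Random(7)
radii = [sample_radius(rng) for _ in range(10000)]
# The a**-3 law piles most of the population near the small-radius end:
small = sum(r < 0.02 for r in radii)   # roughly three quarters of the draws
large = sum(r > 0.05 for r in radii)   # only a few percent of the draws
```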

  5. Stability of light positronic atoms: Quantum Monte Carlo studies

    SciTech Connect

    Harju, A.; Barbiellini, B.; Nieminen, R.M.

    1996-12-01

    We present accurate ground-state energies for the positronium atom in a Coulomb field of point charge Z (X^Z Ps), and for the positronium hydrogen (HPs) and positronium lithium (LiPs) atoms. Calculations are done using the diffusion quantum Monte Carlo (DQMC) method. For X^Z Ps, the critical value of Z for binding is examined. While HPs is stable, the results show that LiPs is unstable against dissociation into a lithium atom and a positronium. © 1996 The American Physical Society.

  6. Monte Carlo modelling of a simple accident dosemeter.

    TOXLINE Toxicology Bibliographic Information

    Devine RT

    2005-01-01

    A simple dosemeter made of a sulphur tablet, bare and cadmium-covered indium foils and a cadmium-covered copper foil has been modelled using MCNP5. Studies of the model without phantoms or other confounding factors have shown that the cross sections and fluence-to-dose factors generated by the Monte Carlo method agree with those generated by analytic expressions for the high-energy component. In this study, the effect of location on phantoms is examined and the analysis is extended to low and intermediate energies. The activities expected from exposure to four critical assemblies on a phantom are calculated and compared with observations.

  7. Validation of Phonon Physics in the CDMS Detector Monte Carlo

    NASA Astrophysics Data System (ADS)

    McCarthy, K. A.; Leman, S. W.; Anderson, A.; Brandt, D.; Brink, P. L.; Cabrera, B.; Cherry, M.; Do Couto E Silva, E.; Cushman, P.; Doughty, T.; Figueroa-Feliciano, E.; Kim, P.; Mirabolfathi, N.; Novak, L.; Partridge, R.; Pyle, M.; Reisetter, A.; Resch, R.; Sadoulet, B.; Serfass, B.; Sundqvist, K. M.; Tomada, A.

    2012-06-01

    The SuperCDMS collaboration is a dark matter search effort aimed at detecting the scattering of WIMP dark matter from nuclei in cryogenic germanium targets. The CDMS Detector Monte Carlo (CDMS-DMC) is a simulation tool aimed at achieving a deeper understanding of the performance of the SuperCDMS detectors and aiding the dark matter search analysis. We present results from validation of the phonon physics described in the CDMS-DMC and outline work towards utilizing it in future WIMP search analyses.

  8. Neutronic calculations for CANDU thorium systems using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Saldideh, M.; Shayesteh, M.; Eshghi, M.

    2014-08-01

    In this paper, we have investigated the prospects of exploiting the rich world thorium reserves using Canada Deuterium Uranium (CANDU) reactors. The analysis is performed using the Monte Carlo MCNP code in order to determine how long the reactor remains critical. Four different fuel compositions have been selected for analysis. We have obtained the infinite multiplication factor, k∞, under full-power operation of the reactor over 8 years. The neutron flux distribution in the full reactor core has also been investigated.

  9. Monte Carlo simulation of vibrational relaxation in nitrogen

    NASA Technical Reports Server (NTRS)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1990-01-01

    Monte Carlo simulation of nonequilibrium vibrational relaxation of (rotationless) N2 using transition probabilities from an extended SSH theory is presented. For the range of temperatures considered, 4000-8000 K, the vibrational levels were found to be reasonably close to an equilibrium distribution at an average vibrational temperature based on the vibrational energy of the gas. As a result, they do not show any statistically significant evidence of the bottleneck observed in earlier studies of N2. Based on this finding, it appears that, for the temperature range considered, dissociation commences after all vibrational levels equilibrate at the translational temperature.

  10. Monte Carlo simulation for neutrino detection from Minna Bluff, Antarctica

    NASA Astrophysics Data System (ADS)

    Barrella, Taylor; Vieregg, Abigail; Saltzberg, David

    2012-03-01

    We present a simple Monte Carlo simulation for a possible neutrino detection experiment. The detector would be composed of an array of radio antennas on Minna Bluff, Antarctica, designed to detect Cherenkov radiation from ultra-high-energy neutrinos. The simulation generates neutrino interactions in the Ross Ice Shelf below the antennas to determine the expected number of detected events per year. Though the results predict less than one event per year, the detection of tau neutrinos should increase the event rate for detectors embedded in the ice.

  11. Continuous-Estimator Representation for Monte Carlo Criticality Diagnostics

    SciTech Connect

    Kiedrowski, Brian C.; Brown, Forrest B.

    2012-06-18

    An alternate means of computing diagnostics for Monte Carlo criticality calculations is proposed. Overlapping spherical regions or estimators are placed covering the fissile material with a minimum center-to-center separation of the 'fission distance', which is defined herein, and a radius that is some multiple thereof. Fission neutron production is recorded based upon a weighted average of proximities to centers for all the spherical estimators. These scores are used to compute the Shannon entropy, and are shown to reproduce, to within an additive constant, the value determined from a well-placed mesh specified by the user. The spherical estimators are also used to assess statistical coverage.
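
The Shannon entropy of a binned fission-source distribution, the quantity these estimators feed, reduces to a few lines. This is a generic sketch of the diagnostic, not the proposed continuous-estimator scheme itself.

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of a binned source distribution."""
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

# A uniform source over 8 bins has the maximal entropy log2(8) = 3 bits;
# a fully collapsed source has entropy 0. Stagnation of this number over
# successive batches is the usual source-convergence signal.
h_uniform = shannon_entropy([125] * 8)
h_collapsed = shannon_entropy([1000, 0, 0, 0, 0, 0, 0, 0])
```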

  12. Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo

    SciTech Connect

    Booth, T.E.

    1998-06-22

    It is well known that a Monte Carlo estimate can be obtained with zero-variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek to obtain an ever more exact importance function. This paper describes a method that has obtained ever more exact importance functions that empirically produce an error that is dropping exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is provided by using the known form of the solution, usually referred to as the Case eigenfunction solution.

  13. Monte Carlo localization of automated guided vehicles with wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Röhrig, Christof; Büchter, Hubert; Kirsch, Christopher

    A wireless sensor network is a radio network of tiny computers consisting of many small, densely distributed sensor nodes. Besides classical applications such as environmental monitoring, it can also be used to localize objects. This contribution describes the localization of automated guided vehicles with the nanoLOC wireless sensor network from the manufacturer Nanotron Technologies. Building on an adapted Monte Carlo algorithm, a sensor model is developed that enables localization by measuring distances to stationary sensor nodes. Experimental results are presented which show that the nanoLOC system can determine the position of an automated guided vehicle with an error of less than 0.5 m.

  14. More about Zener drag studies with Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Di Prinzio, Carlos L.; Druetta, Esteban; Nasello, Olga Beatriz

    2013-03-01

    Grain growth (GG) processes in the presence of second-phase and stationary particles have been widely studied but the results found are inconsistent. We present new GG simulations in two- and three-dimensional (2D and 3D) polycrystalline samples with second phase stationary particles, using the Monte Carlo technique. Simulations using values of particle concentration greater than 15% and particle radii different from 1 or 3 are performed, thus covering a range of particle radii and concentrations not previously studied. It is shown that only the results for 3D samples follow Zener's law.

  15. 3D Monte Carlo radiation transfer modelling of photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry

    2015-06-01

    The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented in which daylight-activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further supports the use of daylight as a potential light source for PDT, with effective treatment depths of about 2 mm achievable.

  16. Monte Carlo calculations of few-body and light nuclei

    SciTech Connect

    Wiringa, R.B.

    1992-01-01

    A major goal in nuclear physics is to understand how nuclear structure comes about from the underlying interactions between nucleons. This requires modelling nuclei as collections of strongly interacting particles. Using realistic nucleon-nucleon potentials, supplemented with consistent three-nucleon potentials and two-body electroweak current operators, variational Monte Carlo methods are used to calculate nuclear ground-state properties, such as the binding energy, electromagnetic form factors, and momentum distributions. Other properties such as excited states and low-energy reactions are also calculable with these methods.

  17. Monte Carlo calculations of few-body and light nuclei

    SciTech Connect

    Wiringa, R.B.

    1992-02-01

    A major goal in nuclear physics is to understand how nuclear structure comes about from the underlying interactions between nucleons. This requires modelling nuclei as collections of strongly interacting particles. Using realistic nucleon-nucleon potentials, supplemented with consistent three-nucleon potentials and two-body electroweak current operators, variational Monte Carlo methods are used to calculate nuclear ground-state properties, such as the binding energy, electromagnetic form factors, and momentum distributions. Other properties such as excited states and low-energy reactions are also calculable with these methods.

  18. Element Agglomeration Algebraic Multilevel Monte-Carlo Library

    SciTech Connect

    2015-02-19

    ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables Multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.

  19. Element Agglomeration Algebraic Multilevel Monte-Carlo Library

    Energy Science and Technology Software Center (ESTSC)

    2015-02-19

    ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables Multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.

  20. Quantitative Monte Carlo-based holmium-166 SPECT reconstruction

    SciTech Connect

    Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de; Viergever, Max A.

    2013-11-15

    Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 (¹⁶⁶Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative ¹⁶⁶Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum. Methods: A fast Monte Carlo (MC) simulator was developed for simulation of ¹⁶⁶Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full ¹⁶⁶Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A^est) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six ¹⁶⁶Ho RE patients. Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC). 
    ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80% (SPECT-ppMC+DSW) to 76%–103% (SPECT-fMC). Furthermore, SPECT-fMC recovered whole-body activities were most accurate (A^est = 1.06·A − 5.90 MBq, R² = 0.97) and SPECT-fMC tumor absorbed doses were significantly higher than with SPECT-DSW (p = 0.031) and SPECT-ppMC+DSW (p = 0.031). Conclusions: The quantitative accuracy of ¹⁶⁶Ho SPECT is improved by Monte Carlo-based modeling of the image degrading factors. Consequently, the proposed reconstruction method enables accurate estimation of the radiation absorbed dose in clinical practice.

  1. Optical Monte Carlo modeling of a true portwine stain anatomy

    NASA Astrophysics Data System (ADS)

    Barton, Jennifer K.; Pfefer, T. Joshua; Welch, Ashley J.; Smithies, Derek J.; Nelson, Jerry; van Gemert, Martin J.

    1998-04-01

    A unique Monte Carlo program capable of accommodating an arbitrarily complex geometry was used to determine the energy deposition in a true port wine stain anatomy. Serial histologic sections taken from a biopsy of a dark red, laser therapy resistant stain were digitized and used to create the program input for simulation at wavelengths of 532 and 585 nm. At both wavelengths, the greatest energy deposition occurred in the superficial blood vessels, and subsequently decreased with depth as the laser beam was attenuated. However, more energy was deposited in the epidermis and superficial blood vessels at 532 nm than at 585 nm.

  2. Monte Carlo simulation of reentry flows with ionization

    NASA Technical Reports Server (NTRS)

    Taylor, Jeff C.; Carlson, Ann B.; Hassan, H. A.

    1992-01-01

    The Direct Simulation Monte Carlo method is applied to a rarefied, weakly ionized, hypersonic flow over a blunt axisymmetric body. An ionization model based on the concept of ambipolar diffusion is used and a model for the sheath is presented. The effects of the new modeling techniques are investigated for flow over the Project Fire II configuration at 11.37 km/s at an altitude of 84.6 km. The calculated results are presented and compared with both experimental data and solutions where ionization effects were not included. In general, the calculated results overpredict the experimental values by about 15-20 percent.

  3. Monte Carlo simulations of scattered power from irradiated optical elements

    NASA Astrophysics Data System (ADS)

    Secco, Eleonora; Sánchez del Río, Manuel

    2011-09-01

    A computer tool for the evaluation of the absorbed and re-scattered power from optical elements in a synchrotron beamline has been written using the Monte Carlo package PENELOPE. A precise estimation of this power is needed to assist in the design of the shielding inside the optical chambers that receive high power, like for the Upgrade Programme at the ESRF. The results for scattered power calculation are presented for three cases i) a Glidcop mirror for the SESAME Synchrotron, ii) a silicon crystal in use at the ESRF beamline ID06, and iii) a Laue crystal for the new monochromator of the ESRF ID17 beamline.

  4. Quantum Monte Carlo Calculations of A ≤ 6 Nuclei

    NASA Astrophysics Data System (ADS)

    Pudliner, B. S.; Pandharipande, V. R.; Carlson, J.; Wiringa, R. B.

    1995-05-01

    The energies of the ³H, ³He, and ⁴He ground states, the 3/2⁻ and 1/2⁻ scattering states of ⁵He, the ground states of ⁶He, ⁶Li, and ⁶Be, and the 3⁺ and 0⁺ excited states of ⁶Li have been accurately calculated with the Green's function Monte Carlo method using realistic models of two- and three-nucleon interactions. The splitting of the A = 3, isospin T = 1/2 and A = 6, isospin T = 1, Jπ = 0⁺ multiplets is also studied. The observed energies and radii are generally well reproduced; however, some definite differences between theory and experiment can be identified.

  5. Monte Carlo simulation of nonadiabatic expansion in cometary atmospheres - Halley

    SciTech Connect

    Hodges, R.R. Jr.

    1990-02-01

    Monte Carlo methods developed for the characterization of velocity-dependent collision processes and ballistic transports in planetary exospheres form the basis of the present computer simulation of icy comet atmospheres, which iteratively undertakes the simultaneous determination of velocity distribution for five neutral species (water, together with suprathermal OH, H2, O, and H) in a flow regime varying from the hydrodynamic to the ballistic. Experimental data from the neutral mass spectrometer carried by Giotto for its March, 1986 encounter with Halley are compared with a model atmosphere. 31 refs.

  6. MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT

    SciTech Connect

    J. S. HENDRICK; G. W. MCKINNEY; L. J. COX

    2000-01-01

    The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNP, which is a general-purpose, 3-dimensional, time-dependent, continuous-energy Monte Carlo fully-coupled N-Particle transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.

  7. Positronic molecule calculations using Monte Carlo configuration interaction

    NASA Astrophysics Data System (ADS)

    Coe, Jeremy P.; Paterson, Martin J.

    2016-02-01

    We modify the Monte Carlo configuration interaction procedure to model atoms and molecules combined with a positron. We test this method with standard quantum chemistry basis sets on a number of positronic systems and compare results with the literature and full configuration interaction when appropriate. We consider positronium hydride, positronium hydroxide, lithium positride and a positron interacting with lithium, magnesium or lithium hydride. We demonstrate that we can capture much of the full configuration interaction results, but often require less than 10% of the configurations of these multireference wavefunctions. The effect of the number of frozen orbitals is also discussed.

  8. Burnup calculation methodology in the serpent 2 Monte Carlo code

    SciTech Connect

    Leppänen, J.; Isotalo, A.

    2012-07-01

    This paper presents two topics related to the burnup calculation capabilities in the Serpent 2 Monte Carlo code: advanced time-integration methods and improved memory management, accomplished by the use of different optimization modes. The development of the introduced methods is an important part of re-writing the Serpent source code, carried out for the purpose of extending the burnup calculation capabilities from 2D assembly-level calculations to large 3D reactor-scale problems. The progress is demonstrated by repeating a PWR test case, originally carried out in 2009 for the validation of the newly-implemented burnup calculation routines in Serpent 1. (authors)

  9. Implicit Monte Carlo Radiation Transport Simulations of Four Test Problems

    SciTech Connect

    Gentile, N

    2007-08-01

    Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.

  10. Neutron streaming Monte Carlo radiation transport code MORSE-CG

    SciTech Connect

    Halley, A.M.; Miller, W.H.

    1986-11-01

    Calculations have been performed using the Monte Carlo code, MORSE-CG, to determine the neutron streaming through various straight and stepped gaps between radiation shield sectors in the conceptual tokamak fusion power plant design STARFIRE. This design calls for "pie-shaped" radiation shields with gaps between segments. It is apparent that some type of offset, or stepped-gap, configuration will be necessary to reduce neutron streaming through these gaps. To evaluate this streaming problem, a MORSE-to-MORSE coupling technique was used, consisting of two separate transport calculations, which together defined the entire transport problem. The results define the effectiveness of various gap configurations in eliminating radiation streaming.

  11. Communication: Water on hexagonal boron nitride from diffusion Monte Carlo

    SciTech Connect

    Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos; Alfè, Dario; Lilienfeld, O. Anatole von

    2015-05-14

    Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.

  12. Relativistic 3-D nuclear hydrodynamics with Monte Carlo pions

    SciTech Connect

    Zingman, J.A.; McAbee, T.L.; Wilson, J.R.; Alonso, C.T.

    1987-05-01

    A model for relativistic three-dimensional hydrodynamical nuclear fluids has been coupled to a Monte Carlo pion model which treats the production, scattering, and absorption of pions in relativistic nuclear fluids. The model is dynamic and allows us to explicitly follow the temporal and spatial development of pion components through an entire collision process and into the final state. Such calculations will be necessary to extract meaningful information from measured RHIC pion distributions. We present preliminary results and discussion for ¹³⁹La + ¹³⁹La collisions at 1350 MeV/nuc (lab) and at various impact parameters. 13 refs., 2 figs.

  13. Analysis of real-time networks with monte carlo methods

    NASA Astrophysics Data System (ADS)

    Mauclair, C.; Durrieu, G.

    2013-12-01

    Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to use them at their best and to lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, because the situations that lead to worst-case scenarios are rare. A novel approach based on Monte Carlo methods is suggested to study the effects of the asynchrony on performance.
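
The basic idea, drawing end-to-end delays by sampling per-hop delays and comparing the empirical tail against the analytic worst case, can be sketched as follows. The uniform per-hop model and all numbers are hypothetical, not the paper's network model.

```python
import random

def simulate_delays(hop_delays, trials=20000, seed=42):
    """Sample end-to-end delays by summing one random per-hop delay per trial.
    Each hop is (min_delay, max_delay) in ms; uniform draws are an assumption."""
    rng = random.Random(seed)
    return [sum(rng.uniform(lo, hi) for lo, hi in hop_delays)
            for _ in range(trials)]

hops = [(1.0, 3.0), (0.5, 2.0), (2.0, 4.0)]        # hypothetical 3-hop path
samples = simulate_delays(hops)
worst_bound = sum(hi for _, hi in hops)             # analytic upper bound: 9.0 ms
p999 = sorted(samples)[int(0.999 * len(samples))]   # observed 99.9th percentile
```

Even the 99.9th percentile sits visibly below the analytic worst case, which is exactly the gap between computed WEED bounds and observed behavior that motivates the Monte Carlo study.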

  14. FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation

    SciTech Connect

    Hackel, B M; Nielsen Jr., D E; Procassini, R J

    2009-02-25

    The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.

  15. Markov chain Monte Carlo based Approaches for Inverse Problems

    NASA Astrophysics Data System (ADS)

    Chen, J.; Hoverten, M.; Vasco, D.; Hou, Z.; Rubin, Y.

    2005-12-01

    Inverse modeling of heterogeneous subsurface systems is difficult. One of the main challenges is the lack of effective and flexible inversion methods that can handle complex practical situations, which may be characterized by non-Gaussian likelihood functions and prior distributions, multiple local optimal solutions, as well as nonlinearity and non-uniqueness of the relationships between parameters and measurements. This study presents a Markov chain Monte Carlo (MCMC) based approach for inverting complex data sets. This approach includes three major steps: (1) Build a stochastic model within the Bayesian framework; (2) Generate many samples from the joint posterior distribution using MCMC methods; (3) Make inferences about unknown parameters from the generated samples. The use of MCMC methods makes our approach very flexible for solving complex inversion problems. First, we can use virtually any type of likelihood function and prior distribution in the Bayesian model. This allows us to build inversion models primarily based on complex practical situations. Second, MCMC methods are well suited to parallel computing. This allows us to incorporate computationally intensive forward simulation models into the inversion procedures and allows us to avoid being trapped in multiple local modes of the joint posterior distribution. Finally, MCMC methods generate many samples of unknown parameters. This allows for quantification of uncertainty in the estimation of each unknown parameter. To demonstrate our approach, we applied it to geophysical seismic and electromagnetic (EM) data for estimating porosity and natural gas saturation in a deepwater gas reservoir. Conventional techniques (such as seismic methods) for gas exploration often suffer a large degree of uncertainty because seismic properties of a medium are not sensitive to gas saturation in the medium. In contrast, electrical properties of a medium are very sensitive to gas saturation. 
    Therefore, EM techniques have the potential of providing information for reducing the uncertainty. We explore in this study the combined use of seismic and EM data using MCMC methods based on layered reservoir models. We consider gas saturation and porosity in each layer of the reservoir, seismic velocities and density in the layers below and above the reservoir, and electrical conductivity in the overburden as random variables. We consider pre-stack seismic amplitude versus offset (AVO) measurements in a given time window and the amplitudes and phases of the recorded electrical field as data. Using Bayes' theorem, we get the joint posterior distribution function of all the unknowns. Using MCMC sampling methods, we obtain many samples for each of the unknowns. We demonstrate the efficiency of the developed model for joint inversion of seismic AVO and EM data, and the benefits of incorporating EM data into gas saturation estimation, using two case studies: one synthetic and one from the field. Results show that the incorporation of EM data reduces the uncertainty in the estimation of both gas saturation and porosity.
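
The central claim, that adding a second, more sensitive data channel tightens the estimate, can be illustrated with a toy one-parameter Bayesian update on a grid (in practice a high-dimensional posterior like this is sampled with MCMC). Both "channels", their noise levels, and all names below are invented for illustration.

```python
import math

def posterior_sd(loglik_terms, grid):
    """Posterior standard deviation of a scalar parameter on a grid,
    assuming a flat prior and independent log-likelihood terms."""
    logs = [sum(term(x) for term in loglik_terms) for x in grid]
    m = max(logs)
    w = [math.exp(l - m) for l in logs]          # unnormalized posterior
    z = sum(w)
    mean = sum(x * wi for x, wi in zip(grid, w)) / z
    var = sum((x - mean) ** 2 * wi for x, wi in zip(grid, w)) / z
    return math.sqrt(var)

# Invented stand-ins: a weak "seismic" channel and a sharp "EM" channel,
# both Gaussian in a scalar saturation parameter on [0, 1].
seismic = lambda x: -0.5 * ((0.52 - x) / 0.20) ** 2
em = lambda x: -0.5 * ((0.50 - x) / 0.05) ** 2
grid = [i / 1000 for i in range(1001)]
sd_seismic_only = posterior_sd([seismic], grid)
sd_joint = posterior_sd([seismic, em], grid)     # EM data shrinks the posterior
```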

  16. Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms

    NASA Astrophysics Data System (ADS)

    Arampatzis, Giorgos; Katsoulakis, Markos A.; Plecháč, Petr; Taufer, Michela; Xu, Lifan

    2012-10-01

    We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physiochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing exactly the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel Fractional step kinetic Monte Carlo algorithms by employing the Trotter Theorem and its randomized variants; these schemes, (a) are partially asynchronous on each fractional step time-window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communicating schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. 
Finally, we discuss work load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
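
    The fractional-step idea can be illustrated with a toy sketch (Python, not the authors' code): a 1-D adsorption/desorption lattice is advanced by alternating exact KMC sub-steps over two spatial blocks within each time window, in the spirit of a Lie-Trotter splitting. The rates, block layout and lattice model are illustrative assumptions; the two half-steps are run sequentially here, although for this non-interacting model they are independent and could run on separate processors.

```python
import random

def kmc_block(lattice, sites, dt, k_ads, k_des, rng):
    """Exact (Gillespie) KMC restricted to the sites of one block,
    advanced over a time window of length dt."""
    t = 0.0
    while True:
        # adsorption on empty sites, desorption on occupied sites
        rates = [k_des if lattice[s] else k_ads for s in sites]
        total = sum(rates)
        t += rng.expovariate(total)
        if t > dt:
            return
        x = rng.random() * total
        for s, r in zip(sites, rates):
            x -= r
            if x <= 0.0:
                lattice[s] ^= 1      # flip occupancy
                break

def fractional_step_kmc(n_sites, n_windows, dt, k_ads, k_des, seed=0):
    """Lie-Trotter fractional-step KMC: within each time window the two
    spatial blocks are advanced one after the other; because the blocks
    are disjoint, the two half-steps could run on separate processors."""
    rng = random.Random(seed)
    lattice = [0] * n_sites
    half = n_sites // 2
    blocks = [list(range(half)), list(range(half, n_sites))]
    for _ in range(n_windows):
        for block in blocks:
            kmc_block(lattice, block, dt, k_ads, k_des, rng)
    return lattice
```

    With equal adsorption and desorption rates the coverage should fluctuate around one half, which gives a simple sanity check on the splitting.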

  17. Monte Carlo simulations will change the way we treat patients with proton beams today

    PubMed Central

    2014-01-01

    Within the past two decades, the evolution of Monte Carlo simulation tools, coupled with our better understanding of physics processes and computer technology, has enabled accurate and efficient prediction of particle interactions with tissue. Monte Carlo simulations have now been applied in routine clinical applications. This commentary outlines how simulations have the potential to change clinical practice, particularly in proton therapy. Specifically, Monte Carlo simulations will impact treatment outcome analysis, reduce treatment volumes and help understand proton-induced radiation biology. PMID:24896200

  18. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    SciTech Connect

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

    2009-09-01

    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
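
    The weight-window mechanic the report is based on can be sketched in a few lines (Python; the window bounds and survival weight below are illustrative textbook choices, not the report's settings): particles above the window are split into lighter copies, particles below play Russian roulette, and particles inside pass through unchanged, preserving the expected weight in all three cases.

```python
import random

def apply_weight_window(weight, w_low, w_up, rng=random):
    """Weight-window variance reduction for one particle: split heavy
    particles, roulette light ones, pass in-window particles unchanged.
    Returns the list of surviving particle weights."""
    if weight > w_up:
        # splitting: n copies whose total weight equals the original
        n = int(weight / w_up) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_survive,
        # so the expected surviving weight equals the original
        w_survive = 0.5 * (w_low + w_up)
        return [w_survive] if rng.random() < weight / w_survive else []
    return [weight]
```

    The adjoint-flux calculations described above serve to choose w_low and w_up per region and energy; the mechanic itself is the same.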

  19. Neutronic analysis code for fuel assembly using a vectorized Monte Carlo method

    SciTech Connect

    Morimoto, Y.; Maruyama, H.; Ishii, K.; Aoyama, M. (Energy Research Lab.)

    1989-12-01

    A fuel assembly analysis code, VMONT, in which a multigroup neutron transport calculation is combined with a burnup calculation, has been developed for comprehensive design work use. The neutron transport calculation is performed with a vectorized Monte Carlo method that can realize speeds >10 times faster than those of a scalar Monte Carlo method. The validity of the VMONT code is shown through test calculations against continuous energy Monte Carlo calculations and the PROTEUS tight lattice experiment.

  20. MonteGrappa: An iterative Monte Carlo program to optimize biomolecular potentials in simplified models

    NASA Astrophysics Data System (ADS)

    Tiana, G.; Villa, F.; Zhan, Y.; Capelli, R.; Paissoni, C.; Sormanni, P.; Heard, E.; Giorgetti, L.; Meloni, R.

    2015-01-01

    Simplified models, including implicit-solvent and coarse-grained models, are useful tools to investigate the physical properties of biological macromolecules of large size, like protein complexes, large DNA/RNA strands and chromatin fibres. While advanced Monte Carlo techniques are quite efficient in sampling the conformational space of such models, the availability of realistic potentials is still a limitation to their general applicability. The recent development of a computational scheme capable of designing potentials to reproduce any kind of experimental data that can be expressed as thermal averages of conformational properties of the system has partially alleviated the problem. Here we present a program that implements the optimization of the potential with respect to the experimental data through an iterative Monte Carlo algorithm and a rescaling of the probability of the sampled conformations. The Monte Carlo sampling includes several types of moves, suitable for different kinds of system, and various sampling schemes, such as fixed-temperature, replica-exchange and adaptive simulated tempering. The conformational properties whose thermal averages are used as inputs currently include contact functions, distances and functions of distances, but can be easily extended to any function of the coordinates of the system.

  1. Efficiency of Health Care Production in Low-Resource Settings: A Monte-Carlo Simulation to Compare the Performance of Data Envelopment Analysis, Stochastic Distance Functions, and an Ensemble Model

    PubMed Central

    Giorgio, Laura Di; Flaxman, Abraham D.; Moses, Mark W.; Fullman, Nancy; Hanlon, Michael; Conner, Ruben O.; Wollum, Alexandra; Murray, Christopher J. L.

    2016-01-01

    Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings. PMID:26812685

  2. Efficiency of Health Care Production in Low-Resource Settings: A Monte-Carlo Simulation to Compare the Performance of Data Envelopment Analysis, Stochastic Distance Functions, and an Ensemble Model.

    PubMed

    Giorgio, Laura Di; Flaxman, Abraham D; Moses, Mark W; Fullman, Nancy; Hanlon, Michael; Conner, Ruben O; Wollum, Alexandra; Murray, Christopher J L

    2016-01-01

    Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings. PMID:26812685

  3. A Monte Carlo Dispersion Analysis of the X-33 Simulation Software

    NASA Technical Reports Server (NTRS)

    Williams, Peggy S.

    2001-01-01

    A Monte Carlo dispersion analysis has been completed on the X-33 software simulation. The simulation is based on a preliminary version of the software and is primarily used in an effort to define and refine how a Monte Carlo dispersion analysis would have been done on the final flight-ready version of the software. This report gives an overview of the processes used in the implementation of the dispersions and describes the methods used to accomplish the Monte Carlo analysis. Selected results from 1000 Monte Carlo runs are presented with suggestions for improvements in future work.
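
    The basic loop of such a dispersion analysis can be sketched as follows (Python; the toy_range model and its parameter names are hypothetical stand-ins for the X-33 simulation, which is not public): each run perturbs the nominal parameters by their assumed Gaussian dispersions, and statistics are accumulated over the outputs.

```python
import math
import random
import statistics

def monte_carlo_dispersion(model, nominal, dispersions, n_runs, seed=0):
    """Run `model` n_runs times, each time drawing every parameter from a
    Gaussian about its nominal value; return mean and stdev of the output."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_runs):
        params = {k: rng.gauss(v, dispersions.get(k, 0.0))
                  for k, v in nominal.items()}
        outputs.append(model(params))
    return statistics.mean(outputs), statistics.stdev(outputs)

def toy_range(p):
    """Hypothetical stand-in for the flight simulation: downrange distance
    of a drag-free ballistic arc with speed p["v"] and launch angle p["angle"]."""
    return p["v"] ** 2 * math.sin(2.0 * math.radians(p["angle"])) / 9.81
```

    In a real study the model call would be a full trajectory simulation and the dispersed parameters would number in the dozens; the bookkeeping is unchanged.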

  4. Monte Carlo analysis of energy dependent anisotropy of bremsstrahlung x-ray spectra

    SciTech Connect

    Kakonyi, Robert; Erdelyi, Miklos; Szabo, Gabor

    2009-09-15

    The energy-resolved emission angle dependence of x-ray spectra was analyzed with the MCNPX (Monte Carlo N-Particle eXtended) simulator. It was shown that the spectral photon flux had a maximum at a well-defined emission angle due to the anisotropy of the bremsstrahlung process. The higher the relative photon energy, the smaller the emission angle at which this maximum occurred. The trends predicted by the Monte Carlo simulations were experimentally verified. The Monte Carlo results were compared to both the Institute of Physics and Engineering in Medicine spectra table and the SpekCalc V1.0 code.

  5. Monte Carlo Simulation Of Soot Evolution along Lagrangian Trajectories in a Turbulent Flame

    NASA Astrophysics Data System (ADS)

    Abdelgadir, Ahmed; Zhou, Kun; Attili, Antonio; Bisetti, Fabrizio

    2013-11-01

    A newly developed Monte Carlo method is used to simulate soot formation and growth in a turbulent n-heptane/air flame. The Monte Carlo method is used to simulate the soot evolution along selected Lagrangian trajectories obtained from a direct numerical simulation of a turbulent sooting jet flame [Attili et al., Direct and Large-Eddy Simulation 9, Springer, 2013] based on a high-order method of moments. The method adopts an operator splitting approach, which splits the deterministic processes (nucleation, surface growth and oxidation) from coagulation, which is treated stochastically. The purpose of this work is to assess the solution based on the moment method and to investigate the soot particle size distribution (PSD) that is not available in methods of moments. Nucleation and coagulation have the greatest effect on the PSD; therefore, various coagulation models are considered. Along each trajectory, one or more rapid nucleation events occur, affecting the shape of the PSD. It is shown that oxidation and surface growth affect the PSD quantitatively, but do not change the shape significantly.
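
    The stochastic treatment of coagulation can be illustrated with a minimal Gillespie-style event for a direct-simulation particle ensemble (Python sketch; the constant kernel and the O(n^2) pair enumeration are simplifying assumptions for clarity, not the authors' implementation):

```python
import random

def coagulation_step(masses, kernel, rng):
    """One stochastic coagulation event (Gillespie/Marcus-Lushnikov style):
    choose a pair (i, j) with probability proportional to kernel(m_i, m_j),
    merge the two particles, and return the exponential waiting time."""
    n = len(masses)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    rates = [kernel(masses[i], masses[j]) for i, j in pairs]
    total = sum(rates)
    tau = rng.expovariate(total)
    x = rng.random() * total
    for (i, j), r in zip(pairs, rates):
        x -= r
        if x <= 0.0:
            masses[i] += masses[j]   # merge j into i ...
            masses.pop(j)            # ... and remove j
            break
    return tau
```

    Each event conserves total mass and reduces the particle count by one; repeating such events between deterministic nucleation/growth sub-steps is the operator-splitting pattern described in the abstract.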

  6. Alpha-particle Monte Carlo simulation for microdosimetric calculations using a commercial spreadsheet.

    PubMed

    Roeske, John C; Hoggarth, Mark

    2007-04-01

    Alpha-particle emitters are currently being evaluated in the treatment of cancer. Because of the short range and high linear energy transfer (LET) of most therapeutic alpha-particle emitters, there are significant stochastic variations in the energy deposited within the cellular nucleus. Hence microdosimetric spectra are often necessary to interpret biological endpoints. However, alpha-particle microdosimetric codes are not readily available. In this paper, we describe how a commercial spreadsheet may be used to perform a Monte Carlo simulation of alpha-particle transport. Subsequently, this information is used to determine the distribution of path lengths, energy deposited, and specific energy for a single alpha-particle traversal through the cell nucleus. These data may then be used to determine microdosimetric parameters for multiple alpha-particle emissions. In our analysis, comparison of the first and second moments of the single-event spectra with previously published data show agreement on the order of a few per cent. These small discrepancies are due to differences in interpolation of stopping powers between the various algorithms. Thus, the spreadsheet Monte Carlo method represents a simple and efficient method to calculate single-event spectra for alpha-particle emitters. Copies of the spreadsheet are available from the corresponding author upon request. PMID:17374919
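
    A core ingredient of such a simulation, the distribution of path lengths for straight traversals of a spherical nucleus, can be sketched directly (Python rather than a spreadsheet; mu-randomness, i.e. a uniform isotropic flux, is assumed). Under this assumption the mean chord length of a sphere is 4R/3, which gives a built-in check on the sampler; multiplying each chord by a (constant) LET would give a toy energy deposited per traversal.

```python
import math
import random

def chord_lengths(radius, n, seed=0):
    """Sample path lengths of straight alpha-particle traversals through a
    spherical nucleus under mu-randomness (uniform isotropic flux): the
    impact parameter b is uniform over the disk cross-section, so
    b = R * sqrt(u), and the chord length is 2 * sqrt(R^2 - b^2)."""
    rng = random.Random(seed)
    chords = []
    for _ in range(n):
        b = radius * math.sqrt(rng.random())
        chords.append(2.0 * math.sqrt(radius * radius - b * b))
    return chords
```
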

  7. Quantum Monte Carlo algorithms for electronic structure at the petascale; the endstation project.

    SciTech Connect

    Kim, J; Ceperley, D M; Purwanto, W; Walter, E J; Krakauer, H; Zhang, S W; Kent, P.R. C; Hennig, R G; Umrigar, C; Bajdich, M; Kolorenc, J; Mitas, L; Srinivasan, A

    2008-10-01

    Over the past two decades, continuum quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. By solving the Schrödinger equation through a stochastic projection, it achieves the greatest accuracy and reliability of the methods available for physical systems containing more than a few quantum particles. QMC enjoys favorable scaling compared with quantum chemical methods, with a computational effort which grows with the second or third power of system size. This accuracy and scalability has enabled scientific discovery across a broad spectrum of disciplines. The current methods perform very efficiently at the terascale. The quantum Monte Carlo Endstation project is a collaborative effort among researchers in the field to develop a new generation of algorithms, and their efficient implementations, which will take advantage of the upcoming petaflop architectures. Some aspects of these developments are discussed here. These tools will expand the accuracy, efficiency and range of applicability of QMC and enable us to tackle challenges which are currently out of reach. The methods will be applied to several important problems including electronic and structural properties of water, transition metal oxides, nanosystems and ultracold atoms.

  8. Development of a radiative flux evaluation program with a 3-D Monte Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    Okata, Megumi; Nakajima, Teruyuki; Barker, Howard W.; Donovan, David P.

    2013-05-01

    In this paper, we have developed a three-dimensional (3-D) Monte Carlo radiative transfer code that can treat a broadband solar flux calculation implemented with k-distribution parameters [1]. We used this code for generating the radiative flux profile and heating rate profile in an atmosphere including broken clouds. In order to construct 3-D extinction coefficient fields, we tried the following three methods: 1) the Minimum cloud Information Deviation Profiling Method (MIDPM), 2) numerical simulations by a non-hydrostatic model with a bin cloud microphysics model, and 3) idealized stochastic clouds generated by a randomized extinction coefficient distribution and regularly distributed tiled clouds. Using these constructed 3-D cloud systems, we calculated the radiation field with our Monte Carlo radiative transfer code at wavelengths of 0.5, 1.6 and 2.1 microns. We then compared the results with the Plane Parallel Approximation (PPA) and with the Independent Pixel Approximation (IPA). In the case of the 0.5 micron wavelength, as expected, all the discrepancies between 3-D clouds and equivalent IPA clouds are smaller than the discrepancies between 3-D clouds and equivalent PPA clouds. At maximum, the reflectivity differences for the PPA and IPA correspond to fluxes of about 30 W m-2 and 10 W m-2, respectively.
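
    A stripped-down relative of such a photon Monte Carlo, reduced to a homogeneous plane-parallel slab with isotropic scattering and normal incidence (a hedged sketch, not the authors' 3-D broadband code), looks like this in Python. For pure absorption the transmittance should recover exp(-tau), and for conservative scattering reflectance and transmittance must sum to one.

```python
import math
import random

def slab_monte_carlo(tau_total, omega0, n_photons, seed=0):
    """Photon Monte Carlo in a homogeneous plane-parallel slab of optical
    depth tau_total with single-scattering albedo omega0 and isotropic
    scattering, normal incidence. Returns (reflectance, transmittance)."""
    rng = random.Random(seed)
    reflected = transmitted = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0   # optical-depth coordinate, direction cosine
        while True:
            # exponential free path, projected onto the slab normal
            tau += mu * -math.log(1.0 - rng.random())
            if tau <= 0.0:
                reflected += 1
                break
            if tau >= tau_total:
                transmitted += 1
                break
            if rng.random() >= omega0:      # absorption event
                break
            mu = 2.0 * rng.random() - 1.0   # isotropic scattering
            if mu == 0.0:
                mu = 1e-12
    return reflected / n_photons, transmitted / n_photons
```
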

  9. Spreaders and sponges define metastasis in lung cancer: a Markov chain Monte Carlo mathematical model.

    PubMed

    Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Norton, Larry; Kuhn, Peter

    2013-05-01

    The classic view of metastatic cancer progression is that it is a unidirectional process initiated at the primary tumor site, progressing to variably distant metastatic sites in a fairly predictable, although not perfectly understood, fashion. A Markov chain Monte Carlo mathematical approach can determine a pathway diagram that classifies metastatic tumors as "spreaders" or "sponges" and orders the timescales of progression from site to site. In light of recent experimental evidence highlighting the potential significance of self-seeding of primary tumors, we use a Markov chain Monte Carlo (MCMC) approach, based on large autopsy data sets, to quantify the stochastic, systemic, and often multidirectional aspects of cancer progression. We quantify three types of multidirectional mechanisms of progression: (i) self-seeding of the primary tumor, (ii) reseeding of the primary tumor from a metastatic site (primary reseeding), and (iii) reseeding of metastatic tumors (metastasis reseeding). The model shows that the combined characteristics of the primary and the first metastatic site to which it spreads largely determine the future pathways and timescales of systemic disease. PMID:23447576
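
    The forward-simulation side of such a Markov chain model can be sketched as follows (Python; the site names and transition probabilities in the usage below are purely illustrative, not the autopsy-derived values used in the paper):

```python
import random

def simulate_progression(transition, start, n_steps, n_chains, seed=0):
    """Sample Markov-chain progression pathways. `transition` maps a site
    to a list of (next_site, probability) pairs summing to one; returns
    the long-run visit frequency of each site."""
    rng = random.Random(seed)
    visits = {}
    for _ in range(n_chains):
        site = start
        for _ in range(n_steps):
            x = rng.random()
            for nxt, p in transition[site]:
                x -= p
                if x <= 0.0:
                    site = nxt
                    break
            visits[site] = visits.get(site, 0) + 1
    total = n_chains * n_steps
    return {s: c / total for s, c in visits.items()}
```

    In the paper's terminology, a "spreader" would be a site whose outgoing probabilities are spread over many targets, and a "sponge" one whose incoming probabilities dominate its outgoing ones; both are read off the estimated transition matrix.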

  10. FAST MONTE CARLO SIMULATION METHODS FOR BIOLOGICAL REACTION-DIFFUSION SYSTEMS IN SOLUTION AND ON SURFACES

    PubMed Central

    KERR, REX A.; BARTOL, THOMAS M.; KAMINSKY, BORIS; DITTRICH, MARKUS; CHANG, JEN-CHIEN JACK; BADEN, SCOTT B.; SEJNOWSKI, TERRENCE J.; STILES, JOEL R.

    2010-01-01

    Many important physiological processes operate at time and space scales far beyond those accessible to atom-realistic simulations, and yet discrete stochastic rather than continuum methods may best represent finite numbers of molecules interacting in complex cellular spaces. We describe and validate new tools and algorithms developed for a new version of the MCell simulation program (MCell3), which supports generalized Monte Carlo modeling of diffusion and chemical reaction in solution, on surfaces representing membranes, and combinations thereof. A new syntax for describing the spatial directionality of surface reactions is introduced, along with optimizations and algorithms that can substantially reduce computational costs (e.g., event scheduling, variable time and space steps). Examples for simple reactions in simple spaces are validated by comparison to analytic solutions. Thus we show how spatially realistic Monte Carlo simulations of biological systems can be far more cost-effective than often is assumed, and provide a level of accuracy and insight beyond that of continuum methods. PMID:20151023

  11. BEAM: a Monte Carlo code to simulate radiotherapy treatment units.

    PubMed

    Rogers, D W; Faddegon, B A; Ding, G X; Ma, C M; We, J; Mackie, T R

    1995-05-01

    This paper describes BEAM, a general purpose Monte Carlo code to simulate the radiation beams from radiotherapy units including high-energy electron and photon beams, 60Co beams and orthovoltage units. The code handles a variety of elementary geometric entities which the user puts together as needed (jaws, applicators, stacked cones, mirrors, etc.), thus allowing simulation of a wide variety of accelerators. The code is not restricted to cylindrical symmetry. It incorporates a variety of powerful variance reduction techniques such as range rejection, bremsstrahlung splitting and forcing photon interactions. The code allows direct calculation of charge in the monitor ion chamber. It has the capability of keeping track of each particle's history and using this information to score separate dose components (e.g., to determine the dose from electrons scattering off the applicator). The paper presents a variety of calculated results to demonstrate the code's capabilities. The calculated dose distributions in a water phantom irradiated by electron beams from the NRC 35 MeV research accelerator, a Varian Clinac 2100C, a Philips SL75-20, an AECL Therac 20 and a Scanditronix MM50 are all shown to be in good agreement with measurements at the 2 to 3% level. Eighteen electron spectra from four different commercial accelerators are presented and various aspects of the electron beams from a Clinac 2100C are discussed. Timing requirements and selection of parameters for the Monte Carlo calculations are discussed. PMID:7643786

  12. Monte Carlo modeling of human tooth optical coherence tomography imaging

    NASA Astrophysics Data System (ADS)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen

    2013-07-01

    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation for photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the information of morphological features and noise in the OCT image of a human tooth, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth.

  13. Monte Carlo simulation of classical spin models with chaotic billiards

    NASA Astrophysics Data System (ADS)

    Suzuki, Hideyuki

    2013-11-01

    It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.
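
    The random-number Metropolis baseline against which the billiard dynamics is compared can be sketched compactly (Python; the lattice size, temperatures and sweep counts are illustrative choices). Well below the critical temperature the mean |magnetization| per spin should sit near one, and well above it near zero.

```python
import math
import random

def metropolis_ising(L, beta, n_sweeps, seed=0):
    """Random-number Metropolis sampling of the 2D Ising model (J = 1) on
    an L x L periodic lattice; returns the mean |magnetization| per spin,
    averaged over the second half of the sweeps."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]
    mags = []
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2.0 * spin[i][j] * nb     # energy change of flipping (i, j)
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                spin[i][j] = -spin[i][j]
        if sweep >= n_sweeps // 2:         # discard first half as burn-in
            mags.append(abs(sum(map(sum, spin))) / (L * L))
    return sum(mags) / len(mags)
```

    The billiard approach of the paper replaces the rng.random() calls with deterministic chaotic dynamics; everything else in the sampling loop is structurally the same.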

  14. Performance and Improvements of Flat-histogram Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Körner, M.; Colonna-Romano, L.; Trebst, S.; Gould, H.; Machta, J.; Troyer, M.

    We study the performance of Monte Carlo simulations that sample a broad histogram in energy by determining the mean first passage time to span the entire energy space of d-dimensional Ising-Potts models. For the d = 1, 2, 3 Ising model, the mean first passage time τ of flat-histogram Monte Carlo methods with single-spin flip updates, such as the Wang-Landau algorithm or the multicanonical method, scales with the number of spins N = L^d as τ ∼ N^2 L^z. The exponent z is found to decrease as the dimensionality d is increased. In the mean field limit of infinite dimensions we find that z vanishes up to a logarithmic correction. We then demonstrate how the flat-histogram algorithms can be improved by two complementary approaches: cluster dynamics and the ensemble optimization technique. Both approaches are found to improve the random walk in energy space so that τ ∼ N^2 up to logarithmic corrections for the d = 1, 2 Ising model.
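
    A minimal Wang-Landau flat-histogram sampler can be demonstrated on a system with a known density of states: n noninteracting spins, with E counting the up spins, so that g(E) is exactly the binomial coefficient C(n, E). This is a Python sketch; the flatness criterion and modification-factor schedule are common textbook choices, not those of the paper.

```python
import math
import random

def wang_landau(n_spins, flatness=0.8, lnf_final=1e-4, seed=0):
    """Wang-Landau estimate of ln g(E) for n_spins noninteracting spins,
    where E is the number of up spins (exactly g(E) = C(n_spins, E)).
    Moves flip one spin; acceptance is min(1, g(E)/g(E')); ln f is halved
    whenever the energy histogram is flat (min > flatness * mean)."""
    rng = random.Random(seed)
    lng = {E: 0.0 for E in range(n_spins + 1)}
    hist = dict.fromkeys(lng, 0)
    spins = [0] * n_spins
    E, lnf = 0, 1.0
    while lnf > lnf_final:
        for _ in range(1000):
            i = rng.randrange(n_spins)
            E_new = E + (1 if spins[i] == 0 else -1)
            if math.log(rng.random() + 1e-300) < lng[E] - lng[E_new]:
                spins[i] ^= 1
                E = E_new
            lng[E] += lnf           # update at the current state
            hist[E] += 1
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            lnf *= 0.5              # refine the modification factor
            hist = dict.fromkeys(hist, 0)
    return lng
```

    Since only differences of ln g matter, the recovered ratio g(n/2)/g(0) can be checked against the exact binomial value.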

  15. Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoyan; Lane, Stephen

    2010-02-01

    We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters that were identified either by literature search or by direct experimental measurement, to test the possibility of detecting ZPP noninvasively in the retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in the neural retina layer, or if only part of the light hits a vessel, the ZPP fluorescence will be 10-200 times higher than the background lipofuscin fluorescence coming from the retinal pigment epithelium (RPE) layer directly below. In addition, we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP; the fluorescence from the layers below does not contribute to the signal. Therefore, the prospects for building a device that detects ZPP fluorescence in the retina look very promising.

  16. Ultracold atoms at unitarity within quantum Monte Carlo methods

    SciTech Connect

    Morris, Andrew J.; Lopez Rios, P.; Needs, R. J.

    2010-03-15

    Variational and diffusion quantum Monte Carlo (VMC and DMC) calculations of the properties of the zero-temperature fermionic gas at unitarity are reported. Our study differs from earlier ones mainly in that we have constructed more accurate trial wave functions and used a larger system size, we have studied the dependence of the energy on the particle density and well width, and we have achieved much smaller statistical error bars. The correct value of the universal ratio of the energy of the interacting to that of the noninteracting gas, ξ, is still a matter of debate. We find DMC values of ξ of 0.4244(1) with 66 particles and 0.4339(1) with 128 particles. The spherically averaged pair-correlation functions, momentum densities, and one-body density matrices are very similar in VMC and DMC, which suggests that our results for these quantities are very accurate. We find, however, some differences between the VMC and DMC results for the two-body density matrices and condensate fractions, which indicates that these quantities are more sensitive to the quality of the trial wave function. Our best estimate of the condensate fraction of 0.51 is smaller than the values from earlier quantum Monte Carlo calculations.

  17. On the time scale associated with Monte Carlo simulations

    SciTech Connect

    Bal, Kristof M. Neyts, Erik C.

    2014-11-28

    Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique to access longer timescales in atomistic simulations allowing, for example, phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid state systems by lowering of the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement or complement of molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales to MC simulations in general.

  18. Monte Carlo track structure for radiation biology and space applications

    NASA Technical Reports Server (NTRS)

    Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.

    2001-01-01

    Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped to shed light on many aspects of microdosimetry and the mechanisms of damage by ionising radiation in the cell. These codes have continuously been modified to include new improved cross sections and computational techniques. This paper provides a summary of input data for ionization, excitation and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations for electrons and ions, in the form of parametric equations, which makes it easy to reproduce the data. Stopping power and radial distribution of dose are presented for ions and compared with experimental data. A model is described for simulation of the full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha-particles. Distributions are presented for the wall and wall-less counters. The data show a contribution of indirect effects to the lineal energy distribution for the wall counter responses even at such a low ion energy.

  19. A pure-sampling quantum Monte Carlo algorithm.

    PubMed

    Ospadov, Egor; Rothstein, Stuart M

    2015-01-14

    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof in principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems. PMID:25591345

  20. Uncertainty of NURBS surface fit by Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Koch, Karl-Rudolf

    2009-12-01

    A free-form surface expressed by NURBS (nonuniform rational B-splines) is fitted to the measured coordinates of points by the lofting method. The unknown control points of the free-form surface are therefore not simultaneously estimated but determined by cross-sectional curve fits. This uses much less computer time than the simultaneous estimation and gives identical results. The free-form surface should be determined with an uncertainty which does not considerably surpass the uncertainty of the measurements. This is investigated here for the example of a free-form surface for a pothole in a road determined by the measurements of a laserscanner. The uncertainties are expressed by standard deviations and confidence intervals. They are computed using Monte Carlo simulations for the positioning of a point by the measured coordinates and by fitting a free-form surface. The resulting uncertainties agree. In addition, the uncertainties of quantities characterizing the shape and the slope of the surface are determined by Monte Carlo simulations. It turns out that the uncertainties resulting from the measurements and from the free-form surface fit are approximately identical.
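
    The Monte Carlo propagation of measurement noise through a fit can be illustrated with a straight line standing in for the NURBS surface (Python sketch; for a line the slope uncertainty has the closed form sigma/sqrt(Sxx), which the simulation should reproduce and which plays the role of the analytic comparison in the abstract):

```python
import random
import statistics

def fit_line(xs, ys):
    """Closed-form least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

def fit_uncertainty(xs, ys, sigma, n_trials, seed=0):
    """Monte Carlo propagation of measurement noise through the fit:
    perturb each y by Gaussian noise of standard deviation sigma, re-fit,
    and report the standard deviations of intercept and slope."""
    rng = random.Random(seed)
    a_s, b_s = [], []
    for _ in range(n_trials):
        a, b = fit_line(xs, [y + rng.gauss(0.0, sigma) for y in ys])
        a_s.append(a)
        b_s.append(b)
    return statistics.stdev(a_s), statistics.stdev(b_s)
```

    For the surface fit of the paper, fit_line is replaced by the cross-sectional NURBS curve fits and the perturbed quantities are the measured point coordinates, but the simulation loop is identical.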

  1. A pure-sampling quantum Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Ospadov, Egor; Rothstein, Stuart M.

    2015-01-01

    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof in principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  2. A Monte Carlo methodology for modelling ashfall hazards

    NASA Astrophysics Data System (ADS)

    Hurst, Tony; Smith, Warwick

    2004-12-01

    We have developed a methodology for quantifying the probability of particular thicknesses of tephra at any given site, using Monte Carlo methods. This is a part of the development of a probabilistic volcanic hazard model (PVHM) for New Zealand, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo procedure allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. This method can handle the effects of multiple volcanic sources, each source with its own characteristics. We accumulate the tephra thicknesses from all sources to estimate the combined ashfall hazard, expressed as the frequency with which any given depth of tephra is likely to be deposited at selected sites. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from sediment cores in Auckland give useful bounds for the likely total volumes erupted from Egmont Volcano (Mt. Taranaki), 280 km away, during the last 130,000 years.
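    The Monte Carlo bookkeeping described above fits in a few lines. Every number below (the volume distribution, wind factor, attenuation constant, eruption rate, and threshold) is an invented placeholder rather than a value from the paper; the point is how exceedance counts become annual probabilities and mean return periods.

```python
import random

# Illustrative sketch of the Monte Carlo ashfall procedure: draw an eruptive
# volume and a wind factor for each simulated eruption, convert them to a
# tephra thickness at a site with a hypothetical attenuation model, and tally
# how often a threshold depth is exceeded.

random.seed(42)
N = 100_000                   # simulated eruptions
rate_per_year = 0.01          # assumed long-term eruption rate (placeholder)
threshold_mm = 10.0
exceed = 0
for _ in range(N):
    volume_km3 = random.lognormvariate(-1.0, 1.0)   # eruptive volume draw
    wind_factor = random.uniform(0.2, 1.0)          # site up/downwind of plume
    thickness_mm = 25.0 * volume_km3 * wind_factor  # toy thickness model
    if thickness_mm >= threshold_mm:
        exceed += 1

p_exceed = exceed / N
annual_prob = rate_per_year * p_exceed
print(f"P(>= {threshold_mm} mm | eruption) = {p_exceed:.3f}")
print(f"annual probability = {annual_prob:.5f}, "
      f"mean return period = {1 / annual_prob:.0f} yr")
```

    Summing such tallies over several sources gives the combined hazard curve described in the abstract.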

  3. Application of Monte Carlo codes to neutron dosimetry

    SciTech Connect

    Prevo, C.T.

    1982-06-15

    In neutron dosimetry, calculations enable one to predict the response of a proposed dosimeter before effort is expended to design and fabricate the neutron instrument or dosimeter. The nature of these calculations requires the use of computer programs that implement mathematical models representing the transport of radiation through attenuating media. Numerical, and in some cases analytical, solutions of these models can be obtained by one of several calculational techniques. All of these techniques are either approximate solutions to the well-known Boltzmann equation or are based on kernels obtained from solutions to the equation. The Boltzmann equation is a precise mathematical description of neutron behavior in terms of position, energy, direction, and time. The solution of the transport equation represents the average value of the particle flux density. Integral forms of the transport equation are generally regarded as the formal basis for the Monte Carlo method, the results of which can in principle be made to approach the exact solution. This paper focuses on the Monte Carlo technique.
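    As a minimal illustration of the Monte Carlo technique discussed above, the sketch below tracks particles through a one-dimensional, purely absorbing slab with an arbitrary cross section and thickness; the tally converges toward the exact Boltzmann-equation result exp(-sigma_t * L), illustrating how Monte Carlo estimates can in principle approach the exact solution.

```python
import math
import random

# Minimal Monte Carlo transport sketch: particles stream through a 1-D purely
# absorbing slab, with free-flight distances drawn from the exponential
# distribution p(s) = sigma_t * exp(-sigma_t * s). The transmission tally is
# compared against the analytic answer. Cross section and thickness are
# arbitrary demonstration values, not from any dosimeter design.

random.seed(7)
sigma_t = 0.5     # total macroscopic cross section (1/cm), assumed
L = 4.0           # slab thickness (cm), assumed
N = 200_000
transmitted = 0
for _ in range(N):
    # distance to first collision, sampled by inverting the exponential CDF
    s = -math.log(random.random()) / sigma_t
    if s > L:                 # no collision inside the slab -> transmitted
        transmitted += 1

mc = transmitted / N
exact = math.exp(-sigma_t * L)
print(f"Monte Carlo: {mc:.4f}, analytic exp(-sigma_t*L): {exact:.4f}")
```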

  4. A pure-sampling quantum Monte Carlo algorithm

    SciTech Connect

    Ospadov, Egor; Rothstein, Stuart M.

    2015-01-14

    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof in principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  5. Monte Carlo Simulation of Massive Absorbers for Cryogenic Calorimeters

    SciTech Connect

    Brandt, D.; Asai, M.; Brink, P.L.; Cabrera, B.; do Couto e Silva, E.; Kelsey, M.; Leman, S.W.; McArthy, K.; Resch, R.; Wright, D.; Figueroa-Feliciano, E.

    2012-06-12

    There is a growing interest in cryogenic calorimeters with macroscopic absorbers for applications such as dark matter direct detection and rare event search experiments. The physics of energy transport in calorimeters with absorber masses exceeding several grams is made complex by the anisotropic nature of the absorber crystals as well as the changing mean free paths as phonons decay to progressively lower energies. We present a Monte Carlo model capable of simulating anisotropic phonon transport in cryogenic crystals. We have initiated the validation process and discuss the level of agreement between our simulation and experimental results reported in the literature, focusing on heat pulse propagation in germanium. The simulation framework is implemented using Geant4, a toolkit originally developed for high-energy physics Monte Carlo simulations. Geant4 has also been used for nuclear and accelerator physics, and applications in medical and space sciences. We believe that our current work may open up new avenues for applications in material science and condensed matter physics.

  6. High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin

    2014-06-01

    Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. Advances in computer technology allow the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte-Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous-energy nuclear data has been investigated.

  7. James Webb Space Telescope (JWST) Stationkeeping Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Dichmann, Donald J.; Alberding, Cassandra; Yu, Wayne

    2014-01-01

    The James Webb Space Telescope (JWST) will launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 11 years. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly, producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.

  8. Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Dichmann, Donald J.; Alberding, Cassandra M.; Yu, Wayne H.

    2014-01-01

    The James Webb Space Telescope (JWST) is scheduled to launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 10.5 years after a six-month transfer to the mission orbit. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
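    The budgeting procedure common to both of these JWST records can be caricatured as follows. The maneuver cadence, nominal per-burn delta-V, and the lognormal spread standing in for the unknown observation schedule and SRP variation are all invented placeholders, not mission values.

```python
import random

# Hedged sketch of the budget-by-Monte-Carlo idea: each stationkeeping cycle's
# delta-V is drawn from a distribution whose spread stands in for the unknown
# observation schedule and SRP variation, the per-run total is accumulated
# over the mission, and the budget is set at a high percentile of the totals.

random.seed(2018)
maneuvers = 21 * 11            # assumed ~21 SK maneuvers/year for 11 years
runs = 5000
totals = []
for _ in range(runs):
    total = 0.0
    for _ in range(maneuvers):
        # nominal 0.1 m/s per maneuver with lognormal schedule/SRP variation
        total += 0.1 * random.lognormvariate(0.0, 0.4)
    totals.append(total)

totals.sort()
p99 = totals[int(0.99 * runs)]
mean = sum(totals) / runs
print(f"mean mission delta-V = {mean:.1f} m/s, 99th percentile = {p99:.1f} m/s")
```

    A budget sized at the 99th percentile rather than the mean is what protects against unlucky schedule draws.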

  9. Fourier Monte Carlo renormalization-group approach to crystalline membranes.

    PubMed

    Tröster, A.

    2015-02-01

    The computation of the critical exponent η characterizing the universal elastic behavior of crystalline membranes in the flat phase continues to represent challenges to theorists as well as computer simulators that manifest themselves in a considerable spread of numerical results for η published in the literature. We present additional insight into this problem that results from combining Wilson's momentum shell renormalization-group method with the power of modern computer simulations based on the Fourier Monte Carlo algorithm. After discussing the ideas and difficulties underlying this combined scheme, we present a calculation of the renormalization-group flow of the effective two-dimensional Young modulus for momentum shells of different thickness. Extrapolation to infinite shell thickness allows us to produce results in reasonable agreement with those obtained by functional renormalization group or by Fourier Monte Carlo simulations in combination with finite-size scaling. Moreover, our method allows us to obtain a decent estimate for the value of the Wegner exponent ω that determines the leading correction to scaling, which in turn allows us to refine our numerical estimate for η previously obtained from precise finite-size scaling data. PMID:25768483

  10. Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study

    PubMed Central

    Zhang, Ying; Feng, Yuanming; Ming, Xin

    2016-01-01

    A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine the beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using a gradient search algorithm was then employed to optimize the photon beam intensity of various beam energies based on presimulated Monte Carlo pencil beam dose distributions in the patient anatomy. Finally, 3D dose distributions in six patients with tumors at different sites were simulated with the Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to the current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve more personalized care for individual patients with dosimetric gains. PMID:26977413

  11. Utilizing Monte Carlo Simulations to Optimize Institutional Empiric Antipseudomonal Therapy

    PubMed Central

    Tennant, Sarah J.; Burgess, Donna R.; Rybak, Jeffrey M.; Martin, Craig A.; Burgess, David S.

    2015-01-01

    Pseudomonas aeruginosa is a common pathogen implicated in nosocomial infections with increasing resistance to a limited arsenal of antibiotics. Monte Carlo simulation provides antimicrobial stewardship teams with an additional tool to guide empiric therapy. We modeled empiric therapies with antipseudomonal β-lactam antibiotic regimens to determine which were most likely to achieve a probability of target attainment (PTA) of ≥90%. Microbiological data for P. aeruginosa were reviewed for 2012. Antibiotics modeled for intermittent and prolonged infusion were aztreonam, cefepime, meropenem, and piperacillin/tazobactam. Using minimum inhibitory concentrations (MICs) from institution-specific isolates, and pharmacokinetic and pharmacodynamic parameters from previously published studies, a 10,000-subject Monte Carlo simulation was performed for each regimen to determine PTA. MICs from 272 isolates were included in this analysis. No intermittent infusion regimens achieved PTA ≥90%. Prolonged infusions of cefepime 2000 mg Q8 h, meropenem 1000 mg Q8 h, and meropenem 2000 mg Q8 h demonstrated PTA of 93%, 92%, and 100%, respectively. Prolonged infusions of piperacillin/tazobactam 4.5 g Q6 h and aztreonam 2 g Q8 h failed to achieve PTA ≥90% but demonstrated PTA of 81% and 73%, respectively. Standard doses of β-lactam antibiotics as intermittent infusion did not achieve 90% PTA against P. aeruginosa isolated at our institution; however, some prolonged infusions were able to achieve these targets.
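    A minimal sketch of this style of PTA calculation, under stated assumptions: a one-compartment IV-infusion model integrated numerically, lognormal clearance and volume draws, a hypothetical MIC list, and a 60% fT>MIC target. None of the parameter values are taken from the study.

```python
import math
import random

def ft_above_mic(dose_mg, t_inf_h, tau_h, cl, v, mic, n_doses=10, dt=0.05):
    """Fraction of the final dosing interval with concentration > MIC,
    via forward-Euler integration of a one-compartment infusion model
    (free fraction assumed 1 for simplicity)."""
    k = cl / v
    c = 0.0
    t = 0.0
    above = total = 0
    t_end = n_doses * tau_h
    while t < t_end:
        rate = dose_mg / t_inf_h if (t % tau_h) < t_inf_h else 0.0
        c += (rate / v - k * c) * dt
        if t >= t_end - tau_h:        # tally only the final (steady-ish) interval
            total += 1
            if c > mic:
                above += 1
        t += dt
    return above / total

random.seed(3)
mics = [1.0, 2.0, 4.0, 8.0, 16.0]     # hypothetical institutional MIC values
n = 1000
hits = 0
for _ in range(n):
    cl = random.lognormvariate(math.log(8.0), 0.3)   # clearance (L/h), assumed
    v = random.lognormvariate(math.log(20.0), 0.2)   # volume (L), assumed
    mic = random.choice(mics)
    # prolonged (3 h) infusion of 2000 mg every 8 h; target: >= 60% fT>MIC
    if ft_above_mic(2000.0, 3.0, 8.0, cl, v, mic) >= 0.60:
        hits += 1
pta = hits / n
print(f"simulated PTA = {pta:.1%}")
```

    Comparing regimens then amounts to rerunning the loop with different doses and infusion durations.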

  12. Monte Carlo algorithm for simulating fermions on Lefschetz thimbles

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo

    2016-01-01

    A possible solution of the notorious sign problem preventing direct Monte Carlo calculations for systems with nonzero chemical potential is to deform the integration region in the complex plane to a Lefschetz thimble. We investigate this approach for a simple fermionic model. We introduce an easy to implement Monte Carlo algorithm to sample the dominant thimble. Our algorithm relies only on the integration of the gradient flow in the numerically stable direction, which gives it a distinct advantage over the other proposed algorithms. We demonstrate the stability and efficiency of the algorithm by applying it to an exactly solvable fermionic model and compare our results with the analytical ones. We report a very good agreement for a certain region in the parameter space where the dominant contribution comes from a single thimble, including a region where standard methods suffer from a severe sign problem. However, we find that there are also regions in the parameter space where the contribution from multiple thimbles is important, even in the continuum limit.

  13. Monte Carlo simulation of quantum Zeno effect in the brain

    NASA Astrophysics Data System (ADS)

    Georgiev, Danko

    2015-12-01

    Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
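    The measurement side of such a simulation can be sketched without the decoherence machinery that is central to the paper's conclusion. In this toy two-level model (an assumption, not the paper's multiple-well Hamiltonian), a state undergoing Rabi oscillation is projectively probed every dt and survives each probe with probability cos^2(omega*dt/2); frequent probing freezes the evolution, which is the Zeno effect.

```python
import math
import random

# Monte Carlo sketch of the quantum Zeno effect for an idealized two-level
# system with no environmental decoherence. Each projective "probe" keeps the
# state in its initial level with probability cos^2(omega*dt/2).

random.seed(13)

def survival_fraction(omega, total_t, n_measurements, trials=5000):
    """Fraction of trials still in the initial state after repeated projections."""
    dt = total_t / n_measurements
    p_stay = math.cos(omega * dt / 2.0) ** 2
    survived = 0
    for _ in range(trials):
        if all(random.random() < p_stay for _ in range(n_measurements)):
            survived += 1
    return survived / trials

omega = 1.0
T = math.pi / omega                        # time for a full flip if unobserved
rare = survival_fraction(omega, T, 2)      # sparse measurements
zeno = survival_fraction(omega, T, 50)     # frequent measurements
print(f"sparse probing: {rare:.2f} survive; frequent probing: {zeno:.2f} survive")
```

    The paper's point is that adding environmental decoherence destroys exactly this frequent-probing advantage beyond the brain decoherence time.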

  14. Treatment planning for a small animal using Monte Carlo simulation

    SciTech Connect

    Chow, James C. L.; Leung, Michael K. K.

    2007-12-15

    The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse based on a CT image set by a 360-deg photon arc is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable to carry out treatment planning dose calculation for small animal anatomy with voxel size about one order of magnitude smaller than that of the human.

  15. Monte Carlo model for electron degradation in methane gas

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Anil; Mukundan, Vrinda

    2015-06-01

    We present a Monte Carlo model for the degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled, and analytical representations of these cross sections are used as input to the model. "Yield spectra," which provide information about the number of inelastic events that have taken place in each energy bin, are used to calculate the yield (or population) of various inelastic processes. The numerical yield spectra obtained from the Monte Carlo simulations are represented analytically, thus generating the analytical yield spectra (AYS). The AYS are employed to obtain the mean energy per ion pair and the efficiencies of various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. The efficiency calculation showed that ionization is the dominant process at energies > 50 eV, where more than 50% of the incident electron energy is used. Above 25 eV, dissociation has an efficiency of ~27%. Below 10 eV, vibrational excitation dominates. The contribution of emission is around 1.2% at 10 keV. The efficiency of the attachment process is ~0.1% at 8 eV and falls to negligibly small values at energies greater than 15 eV. The efficiencies can be used to calculate volume production rates in planetary atmospheres by folding with the electron production rate and integrating over energy.
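    A toy version of the degradation scheme, with invented thresholds and branching ratios in place of the compiled CH4 cross sections, shows how a mean energy per ion pair emerges from a random inelastic event history:

```python
import random

# Toy electron-degradation sketch: a primary electron (and any secondaries it
# produces) loses energy through random inelastic events until it falls below
# the lowest threshold, counting ion pairs along the way. Thresholds, energy
# losses, and branching ratios are invented for illustration only.

random.seed(5)

def degrade(e0):
    """Return the number of ion pairs produced while degrading one electron."""
    ion_pairs = 0
    stack = [e0]                             # primary plus secondary electrons
    while stack:
        e = stack.pop()
        while e > 0.5:                       # below ~0.5 eV nothing opens up
            r = random.random()
            if e > 12.6 and r < 0.5:         # ionization (assumed 12.6 eV threshold)
                ion_pairs += 1
                secondary = random.uniform(0.0, (e - 12.6) / 2.0)
                stack.append(secondary)      # the secondary degrades too
                e -= 12.6 + secondary
            elif e > 9.0 and r < 0.8:        # dissociation, ~9 eV loss (assumed)
                e -= 9.0
            else:                            # vibrational excitation, ~0.4 eV
                e -= 0.4
    return ion_pairs

e0 = 10_000.0
pairs = degrade(e0)
print(f"{e0 / pairs:.1f} eV per ion pair for one {e0:.0f} eV electron history")
```

    Averaging such histories over many primaries is what produces the smooth mean-energy-per-ion-pair curve described in the abstract.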

  16. Improved criticality convergence via a modified Monte Carlo iteration method

    SciTech Connect

    Booth, Thomas E; Gubernatis, James E

    2009-01-01

    Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is that the convergence rate to the dominant eigenfunction becomes |k₃|/k₁ instead of |k₂|/k₁. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must sum somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc, manner.
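    The convergence-rate claim can be checked with a deterministic toy that leaves aside the Monte Carlo weight-cancellation machinery: on a made-up operator with eigenvalues k1 = 3, k2 = 2, k3 = 1, subtracting the (here exactly known) second eigenfunction each sweep changes the error decay from (k2/k1)^n to (k3/k1)^n.

```python
# Deterministic illustration of the modified power iteration idea. The 3x3
# diagonal matrix is a transparent stand-in for the transport operator; its
# eigenfunctions are simply the coordinate axes, so the "subtraction" of the
# second eigenfunction is exact here rather than stochastic.

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def normalize(v):
    m = max(abs(x) for x in v)
    return [x / m for x in v]

A = [[3.0, 0.0, 0.0],     # eigenvalues k1 = 3, k2 = 2, k3 = 1
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]

def error(v):
    """Distance of normalized v from the dominant eigenvector (1, 0, 0)."""
    v = normalize(v)
    return max(abs(v[1]), abs(v[2]))

v_std = [1.0, 1.0, 1.0]
v_mod = [1.0, 1.0, 1.0]
for _ in range(20):
    v_std = normalize(mat_vec(A, v_std))   # plain power iteration
    w = mat_vec(A, v_mod)
    w[1] = 0.0        # subtract out the (exactly known) second eigenfunction
    v_mod = normalize(w)

print(f"standard error {error(v_std):.1e} vs modified error {error(v_mod):.1e}")
```

    In the Monte Carlo codes the second eigenfunction is not known exactly, which is where the positive/negative-weight cancellation discussed in the abstract comes in.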

  17. A multi-scale Monte Carlo method for electrolytes

    NASA Astrophysics Data System (ADS)

    Liang, Yihao; Xu, Zhenli; Xing, Xiangjun

    2015-08-01

    Artifacts arise in simulations of electrolytes using periodic boundary conditions (PBCs). We show that the origin of these artifacts lies in the periodic image charges and the constraint of charge neutrality inside the simulation box, both of which are unphysical from the viewpoint of real systems. To cure these problems, we introduce a multi-scale Monte Carlo (MC) method, where ions inside a spherical cavity are simulated explicitly, while ions outside are treated implicitly using a continuum theory. Using the method of Debye charging, we explicitly derive the effective interactions between ions inside the cavity, arising from the fluctuations of ions outside. We find that these effective interactions consist of two types: (1) a constant cavity potential due to the asymmetry of the electrolyte, and (2) a reaction potential that depends on the positions of all ions inside. Combining grand canonical Monte Carlo (GCMC) with a recently developed fast algorithm based on the image charge method, we perform a multi-scale MC simulation of symmetric electrolytes and compare it with other simulation methods, including the PBC + GCMC method, as well as large-scale MC simulation. We demonstrate that our multi-scale MC method is capable of capturing the correct physics of a large system using a small-scale simulation.
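    A bare-bones version of the explicit part of the scheme, ions moved by Metropolis Monte Carlo inside a hard spherical cavity, is sketched below. The cavity and reaction potentials derived in the paper are omitted, and all units, charges, and sizes are arbitrary placeholders.

```python
import math
import random

# Metropolis sketch: N ions with Coulomb pair interactions (crude short-range
# cutoff) move inside a hard spherical cavity of radius R. This models only
# the explicit inner region of the multi-scale method; the implicit outer
# continuum and its derived effective potentials are not included.

random.seed(9)
N, R, beta, step = 10, 5.0, 1.0, 0.5
charges = [1.0 if i % 2 == 0 else -1.0 for i in range(N)]   # neutral mixture

def random_point_in_sphere(r):
    while True:
        p = [random.uniform(-r, r) for _ in range(3)]
        if sum(x * x for x in p) < r * r:
            return p

ions = [random_point_in_sphere(R) for _ in range(N)]

def pair_energy(pos):
    e = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            d = math.dist(pos[i], pos[j])
            e += charges[i] * charges[j] / max(d, 0.1)   # crude core cutoff
    return e

energy = pair_energy(ions)
accepted = 0
sweeps = 2000
for _ in range(sweeps):
    i = random.randrange(N)
    old = ions[i]
    trial = [x + random.uniform(-step, step) for x in old]
    if sum(x * x for x in trial) >= R * R:
        continue                            # reject moves leaving the cavity
    ions[i] = trial
    new_energy = pair_energy(ions)
    de = new_energy - energy
    if de <= 0 or random.random() < math.exp(-beta * de):
        energy = new_energy                 # accept
        accepted += 1
    else:
        ions[i] = old                       # reject and restore
accept_rate = accepted / sweeps
print(f"acceptance rate {accept_rate:.2f}, final energy {energy:.2f}")
```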

  18. Simulating oblique incident irradiation using the BEAMnrc Monte Carlo code.

    PubMed

    Downes, P; Spezi, E

    2009-04-01

    A new source for the simulation of oblique incident irradiation has been developed for the BEAMnrc Monte Carlo code. In this work, we describe a method for the simulation of any component that is rotated at some angle relative to the central axis of the modelled radiation unit. The performance of the new BEAMnrc source was validated against experimental measurements. The comparison with ion chamber data showed very good agreement between experiment and calculation for a number of oblique irradiation angles ranging from 0 to 30 degrees. The routine was also cross-validated, in geometrically equivalent conditions, against a different radiation source available in the DOSXYZnrc code. The test showed excellent consistency between the two routines. The new radiation source can be particularly useful for the Monte Carlo simulation of radiation units in which the radiation beam is tilted with respect to the unit's central axis. To highlight this, a modern cone-beam CT unit is modelled using this new source and validated against measurement. PMID:19287082

  19. Variational Monte Carlo investigation of SU (N ) Heisenberg chains

    NASA Astrophysics Data System (ADS)

    Dufour, Jérôme; Nataf, Pierre; Mila, Frédéric

    2015-05-01

    Motivated by recent experimental progress in the context of ultracold multicolor fermionic atoms in optical lattices, we have investigated the properties of the SU(N) Heisenberg chain with totally antisymmetric irreducible representations, the effective model of Mott phases with m < N particles per site, for which Abelian bosonization makes predictions about the opening of a gap. Using variational Monte Carlo based on Gutzwiller projected fermionic wave functions, we have been able to verify these predictions for a representative number of cases with N ≤ 10 and m ≤ N/2, and we have shown that the opening of a gap is associated with a spontaneous dimerization or trimerization depending on the values of m and N. We have also investigated the marginal cases where Abelian bosonization did not lead to any prediction. In these cases, variational Monte Carlo predicts that the ground state is critical with exponents consistent with conformal field theory.

  20. MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION

    SciTech Connect

    Sternat, M.; Nichols, T.

    2011-06-09

    Reactor burnup or depletion codes are used extensively in the fields of nuclear forensics and nuclear safeguards. Two common codes are MONTEBURNS and MCNPX/CINDER. These are Monte-Carlo depletion routines utilizing MCNP for neutron transport calculations and either ORIGEN or CINDER for burnup calculations. Uncertainties exist in the MCNP steps, but this information is not passed to the depletion calculations or saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs was performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single-assembly, infinite-lattice configuration. This model was burned for a 150-day cycle broken down into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated, and histograms were created at each burnup step using the Scott method to determine the bin width. The distributions for each code serve as a statistical benchmark, and comparisons were made between the codes. It was expected that the gram-quantity and k-effective histograms would be normally distributed, since they were produced from a Monte-Carlo routine, but some of the results appear not to be. Statistical analyses were performed using the χ² test against a normal distribution for the k-effective results and for several isotopes, including ¹³⁴Cs, ¹³⁷Cs, ²³⁵U, ²³⁸U, ²³⁷Np, ²³⁸Pu, ²³⁹Pu, and ²⁴⁰Pu.
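    The repeated-runs analysis can be sketched with synthetic draws standing in for the tabulated k-effective values; the Scott-rule bin width and a chi-square statistic against the fitted normal follow the procedure described, but the mean, spread, and run count below are placeholders.

```python
import math
import random
import statistics

# Sketch of the repeated-depletion-runs analysis: treat each "run" as an
# independent draw of k-effective, bin the sample with the Scott rule, and
# form a chi-square statistic against the fitted normal distribution. The
# draws here are synthetic (truly normal); real depletion output replaces them.

random.seed(11)
n = 500
keff = [random.gauss(1.02500, 0.00040) for _ in range(n)]   # synthetic runs

mean = statistics.fmean(keff)
std = statistics.stdev(keff)
width = 3.49 * std / n ** (1 / 3)            # Scott's rule bin width
lo = min(keff)
nbins = max(1, math.ceil((max(keff) - lo) / width))

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

observed = [0] * nbins
for k in keff:
    observed[min(int((k - lo) / width), nbins - 1)] += 1

chi2 = 0.0
for i in range(nbins):
    p = normal_cdf(lo + (i + 1) * width) - normal_cdf(lo + i * width)
    expected = n * p
    if expected > 0:
        chi2 += (observed[i] - expected) ** 2 / expected

print(f"{nbins} bins, chi-square = {chi2:.1f}")
```

    For truly normal data the statistic should be of the order of the number of bins; a much larger value would flag the non-normality the authors observed in some results.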