Science.gov

Sample records for monte carlo finite

  1. Quantum Monte Carlo finite temperature electronic structure of quantum dots

    NASA Astrophysics Data System (ADS)

    Leino, Markku; Rantala, Tapio T.

    2002-08-01

    Quantum Monte Carlo methods allow a straightforward procedure for evaluation of electronic structures with a proper treatment of electronic correlations. This can be done even at finite temperatures [1]. We test the Path Integral Monte Carlo (PIMC) simulation method [2] for one and two electrons in one- and three-dimensional harmonic oscillator potentials and apply it in evaluation of finite temperature effects of single and coupled quantum dots. Our simulations show the correct finite temperature excited state populations, including degeneracy, in the cases of one- and three-dimensional harmonic oscillators. The simulated one and two electron distributions of single and coupled quantum dots are compared to those from experiments and other theoretical (0 K) methods [3]. Distributions are shown to agree and the finite temperature effects are discussed. Computational capacity is found to become the limiting factor in simulations with increasing accuracy. Other essential aspects of PIMC and its capability in this type of calculation are also discussed. [1] R.P. Feynman: Statistical Mechanics, Addison Wesley, 1972. [2] D.M. Ceperley, Rev. Mod. Phys. 67, 279 (1995). [3] M. Pi, A. Emperador and M. Barranco, Phys. Rev. B 63, 115316 (2001).
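
    As a rough illustration of the path-integral idea behind this record (not the authors' code), the following minimal Python sketch estimates the thermal energy of a single particle in a one-dimensional harmonic oscillator by Metropolis sampling of the discretized imaginary-time path; the bead number, sweep counts, and step size are illustrative assumptions, and units are chosen so that m = hbar = omega = 1.

      import numpy as np

      rng = np.random.default_rng(0)

      beta, M = 1.0, 32            # inverse temperature and number of imaginary-time slices (assumed)
      tau = beta / M
      nsweeps, nequil, step = 20000, 2000, 0.5

      def V(x):
          return 0.5 * x**2        # harmonic potential

      def local_action(path, k, x):
          # part of the primitive discretized action that depends on bead k taking the value x
          xm, xp = path[(k - 1) % M], path[(k + 1) % M]
          return ((x - xm)**2 + (xp - x)**2) / (2.0 * tau) + tau * V(x)

      path = np.zeros(M)
      e_samples = []
      for sweep in range(nsweeps):
          for k in range(M):                                  # one Metropolis pass over the beads
              xnew = path[k] + step * rng.uniform(-1, 1)
              dS = local_action(path, k, xnew) - local_action(path, k, path[k])
              if dS < 0 or rng.random() < np.exp(-dS):
                  path[k] = xnew
          if sweep >= nequil:                                 # thermodynamic energy estimator
              kin = np.sum((np.roll(path, -1) - path)**2) / (2.0 * tau**2 * M)
              e_samples.append(M / (2.0 * beta) - kin + np.mean(V(path)))

      print("PIMC estimate:", np.mean(e_samples))
      print("exact result :", 0.5 / np.tanh(0.5 * beta))      # (1/2) coth(beta/2) for this oscillator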

  2. Quantum Monte Carlo calculations of two neutrons in finite volume

    DOE PAGES

    Klos, P.; Lynn, J. E.; Tews, I.; ...

    2016-11-18

    Ab initio calculations provide direct access to the properties of pure neutron systems that are challenging to study experimentally. In addition to their importance for fundamental physics, their properties are required as input for effective field theories of the strong interaction. In this work, we perform auxiliary-field diffusion Monte Carlo calculations of the ground state and first excited state of two neutrons in a finite box, considering a simple contact potential as well as chiral effective field theory interactions. We compare the results against exact diagonalizations and present a detailed analysis of the finite-volume effects, whose understanding is crucial for determining observables from the calculated energies. Finally, using the Lüscher formula, we extract the low-energy S-wave scattering parameters from ground- and excited-state energies for different box sizes.

  3. Quantum Monte Carlo calculations of two neutrons in finite volume

    SciTech Connect

    Klos, P.; Lynn, J. E.; Tews, I.; Gandolfi, Stefano; Gezerlis, A.; Hammer, H. -W.; Hoferichter, M.; Schwenk, A.

    2016-11-18

    Ab initio calculations provide direct access to the properties of pure neutron systems that are challenging to study experimentally. In addition to their importance for fundamental physics, their properties are required as input for effective field theories of the strong interaction. In this work, we perform auxiliary-field diffusion Monte Carlo calculations of the ground state and first excited state of two neutrons in a finite box, considering a simple contact potential as well as chiral effective field theory interactions. We compare the results against exact diagonalizations and present a detailed analysis of the finite-volume effects, whose understanding is crucial for determining observables from the calculated energies. Finally, using the Lüscher formula, we extract the low-energy S-wave scattering parameters from ground- and excited-state energies for different box sizes.

  4. Monte Carlo calculations of the finite density Thirring model

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F.; Ridgway, Gregory W.; Warrington, Neill C.

    2017-01-01

    We present results of the numerical simulation of the two-dimensional Thirring model at finite density and temperature. The severe sign problem is dealt with by deforming the domain of integration into complex field space. This is the first example where a fermionic sign problem is solved in a quantum field theory by using the holomorphic gradient flow approach, a generalization of the Lefschetz thimble method.

  5. A Monte Carlo-finite element model for strain energy controlled microstructural evolution - 'Rafting' in superalloys

    NASA Technical Reports Server (NTRS)

    Gayda, J.; Srolovitz, D. J.

    1989-01-01

    This paper presents a specialized microstructural lattice model, MCFET (Monte Carlo finite element technique), which simulates microstructural evolution in materials in which strain energy has an important role in determining morphology. The model is capable of accounting for externally applied stress, surface tension, misfit, elastic inhomogeneity, elastic anisotropy, and arbitrary temperatures. The MCFET analysis was found to compare well with the results of analytical calculations of the equilibrium morphologies of isolated particles in an infinite matrix.

  6. Permutation blocking path integral Monte Carlo approach to the uniform electron gas at finite temperature.

    PubMed

    Dornheim, Tobias; Schoof, Tim; Groth, Simon; Filinov, Alexey; Bonitz, Michael

    2015-11-28

    The uniform electron gas (UEG) at finite temperature is of high current interest due to its key relevance for many applications including dense plasmas and laser excited solids. In particular, density functional theory heavily relies on accurate thermodynamic data for the UEG. Until recently, the only existing first-principle results had been obtained for N = 33 electrons with restricted path integral Monte Carlo (RPIMC), for low to moderate density, rs = r̄/aB ≳ 1. These data have been complemented by configuration path integral Monte Carlo (CPIMC) simulations for rs ≤ 1 that substantially deviate from RPIMC towards smaller rs and low temperature. In this work, we present results from an independent third method, the recently developed permutation blocking path integral Monte Carlo (PB-PIMC) approach [T. Dornheim et al., New J. Phys. 17, 073017 (2015)], which we extend to the UEG. Interestingly, PB-PIMC allows us to perform simulations over the entire density range down to half the Fermi temperature (θ = kBT/EF = 0.5) and, therefore, to compare our results to both aforementioned methods. While we find excellent agreement with CPIMC, where results are available, we observe deviations from RPIMC that are beyond the statistical errors and increase with density.

  7. The phase diagrams of a finite superlattice with disordered interface: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Boughrara, M.; Kerouad, M.; Zaim, A.

    2009-06-01

    Monte Carlo Simulation (MCS) has been used to study critical and compensation behavior of a ferrimagnetic superlattice on a simple cubic lattice. The superlattice consists of k unit cells, where the unit cell contains L layers of spin-1/2 A atoms, L layers of spin-1 B atoms, and a disordered interface in between that is characterized by a random arrangement of A and B atoms of ApB type and a negative A-B coupling. We investigate the finite and the infinite superlattices and find that the existence and the number of the compensation points depend strongly on the thickness of the superlattice (number of unit cells).

  8. Monte Carlo Benchmark

    SciTech Connect

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  9. Path Integral Monte Carlo finite-temperature electronic structure of quantum dots

    NASA Astrophysics Data System (ADS)

    Leino, Markku; Rantala, Tapio T.

    2003-03-01

    Quantum Monte Carlo methods allow a straightforward procedure for evaluation of electronic structures with a proper treatment of electronic correlations. This can be done even at finite temperatures [1]. We apply the Path Integral Monte Carlo (PIMC) simulation method [2] for one and two electrons in single and double quantum dots. With this approach we evaluate the electronic distributions and correlations, and the finite temperature effects on those. Temperature increase broadens the one-electron distribution as expected. This effect is smaller for correlated electrons than for single ones. The simulated one and two electron distributions of a single and two coupled quantum dots are also compared to those from experiments and other theoretical (0 K) methods [3]. Computational capacity is found to become the limiting factor in simulations with increasing accuracy. This and other essential aspects of PIMC and its capability in this type of calculation are also discussed. [1] R.P. Feynman: Statistical Mechanics, Addison Wesley, 1972. [2] D.M. Ceperley, Rev. Mod. Phys. 67, 279 (1995). [3] M. Pi, A. Emperador and M. Barranco, Phys. Rev. B 63, 115316 (2001).

  10. Quantum Monte Carlo simulations of the BCS-BEC crossover at finite temperature

    NASA Astrophysics Data System (ADS)

    Bulgac, Aurel; Drut, Joaquín E.; Magierski, Piotr

    2008-08-01

    The quantum Monte Carlo method for spin-1/2 fermions at finite temperature is formulated for dilute systems with an s-wave interaction. The motivation and the formalism are discussed along with descriptions of the algorithm and various numerical issues. We report on results for the energy, entropy, and chemical potential as a function of temperature. We give upper bounds on the critical temperature Tc for the onset of superfluidity, obtained by studying the finite-size scaling of the condensate fraction. All of these quantities were computed for couplings around the unitary regime in the range -0.5 ≤ (kF a)^-1 ≤ 0.2, where a is the s-wave scattering length and kF is the Fermi momentum of a noninteracting gas at the same density. In all cases our data are consistent with normal Fermi gas behavior above a characteristic temperature T0 > Tc, which depends on the coupling and is obtained by studying the deviation of the caloric curve from that of a free Fermi gas. For T < Tc ... Monte Carlo results by other groups.

  11. Quantum dynamics at finite temperature: Time-dependent quantum Monte Carlo study

    SciTech Connect

    Christov, Ivan P.

    2016-08-15

    In this work we investigate the ground state and the dissipative quantum dynamics of interacting charged particles in an external potential at finite temperature. The recently devised time-dependent quantum Monte Carlo (TDQMC) method allows a self-consistent treatment of the system of particles together with bath oscillators, first for imaginary-time propagation of Schrödinger-type equations, where both the system and the bath converge to their finite-temperature ground state, and then for real-time calculations, where the dissipative dynamics is demonstrated. In this context the application of TDQMC appears as a promising alternative to path-integral related techniques, where the real-time propagation can be a challenge.

  12. Fermionic path-integral Monte Carlo results for the uniform electron gas at finite temperature.

    PubMed

    Filinov, V S; Fortov, V E; Bonitz, M; Moldabekov, Zh

    2015-03-01

    The uniform electron gas (UEG) at finite temperature has recently attracted substantial interest due to the experimental progress in the field of warm dense matter. To explain the experimental data, accurate theoretical models for high-density plasmas are needed that depend crucially on the quality of the thermodynamic properties of the quantum degenerate nonideal electrons and of the treatment of their interaction with the positive background. Recent restricted (fixed-node) path-integral Monte Carlo (RPIMC) data are believed to be the most accurate for the UEG at finite temperature, but they become questionable at high degeneracy when the Brueckner parameter rs = a/aB, the ratio of the mean interparticle distance to the Bohr radius, approaches 1. The validity range of these simulations and their predictive capabilities for the UEG are presently unknown. This is due to the unknown quality of the used fixed nodes and of the finite-size scaling from N = 33 simulated particles (per spin projection) to the macroscopic limit. To analyze these questions, we present alternative direct fermionic path integral Monte Carlo (DPIMC) simulations that are independent from RPIMC. Our simulations take into account quantum effects not only in the electron system but also in their interaction with the uniform positive background. Also, we use substantially larger particle numbers (up to three times more) and perform an extrapolation to the macroscopic limit. We observe very good agreement with RPIMC, for the polarized electron gas, up to moderate densities around rs = 4, and larger deviations for the unpolarized case at low temperatures. For higher densities (high electron degeneracy), rs ≲ 1.5, both RPIMC and DPIMC are problematic due to the increased fermion sign problem.

  13. Phase diagrams of a finite superlattice with two disordered interfaces: Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Feraoun, A.; Zaim, A.; Kerouad, M.

    2015-11-01

    The phase diagrams and the magnetic properties of an Ising finite superlattice with two disordered interfaces are investigated using Monte Carlo simulations based on the Metropolis algorithm. The superlattice consists of k unit cells, where the unit cell contains L layers of spin-1/2 A atoms, L layers of spin-1 B atoms, and a disordered interface of two layers in between, characterized by a random arrangement of A and B atoms of the ApB1-p/A1-pBp type with a negative A-B coupling. The A-A and B-B exchange couplings are considered ferromagnetic. We have investigated the effects of the thickness of the film, the crystal field interactions, and the surface exchange couplings on the magnetic properties. The obtained results show that the number of compensation points and the number of first-order phase transition lines depend strongly on the thickness, the probability p, and the exchange interactions in the surfaces.

  14. Monte Carlo simulation of finite mass nucleons interacting via a neutral, scalar boson field

    NASA Astrophysics Data System (ADS)

    Szybisz, L.; Zabolitzky, J. G.

    1987-03-01

    A recently proposed Monte Carlo method to solve the Schrödinger equation when expressed in Fock space is applied to the Hamiltonian which describes the interaction of nucleons via a neutral, scalar boson field. The fact that a nucleon has finite mass is taken into account and a Gaussian cut-off for the nucleon form factor is adopted. The problem is solved for systems with A = 1 and 2 sources (nucleons) in three-dimensional continuous space. From the results for A = 1 a bare nucleon mass, mB c^2 = 962.58 ± 0.06 MeV, is obtained. This value is used to determine the binding energy of an A = 2 system by means of this new algorithm. The result, B(2) = 2.14 ± 0.50 MeV, is consistent with the value corresponding to the static potential approximation.

  15. Liquid crystal free energy relaxation by a theoretically informed Monte Carlo method using a finite element quadrature approach.

    PubMed

    Armas-Pérez, Julio C; Hernández-Ortiz, Juan P; de Pablo, Juan J

    2015-12-28

    A theoretically informed Monte Carlo method is proposed for Monte Carlo simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free energy functionals. The free energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid crystal droplets with and without nanoparticle inclusions.

  16. MCFET - A MICROSTRUCTURAL LATTICE MODEL FOR STRAIN ORIENTED PROBLEMS: A COMBINED MONTE CARLO FINITE ELEMENT TECHNIQUE

    NASA Technical Reports Server (NTRS)

    Gayda, J.

    1994-01-01

    A specialized, microstructural lattice model, termed MCFET for combined Monte Carlo Finite Element Technique, has been developed to simulate microstructural evolution in material systems where modulated phases occur and the directionality of the modulation is influenced by internal and external stresses. Since many of the physical properties of materials are determined by microstructure, it is important to be able to predict and control microstructural development. MCFET uses a microstructural lattice model that can incorporate all relevant driving forces and kinetic considerations. Unlike molecular dynamics, this approach was developed specifically to predict macroscopic behavior, not atomistic behavior. In this approach, the microstructure is discretized into a fine lattice. Each element in the lattice is labeled in accordance with its microstructural identity. Diffusion of material at elevated temperatures is simulated by allowing exchanges of neighboring elements if the exchange lowers the total energy of the system. A Monte Carlo approach is used to select the exchange site while the change in energy associated with stress fields is computed using a finite element technique. The MCFET analysis has been validated by comparing this approach with a closed-form, analytical method for stress-assisted, shape changes of a single particle in an infinite matrix. Sample MCFET analyses for multiparticle problems have also been run and, in general, the resulting microstructural changes associated with the application of an external stress are similar to those observed in Ni-Al-Cr alloys at elevated temperatures. This program is written in FORTRAN for use on a 370 series IBM mainframe. It has been implemented on an IBM 370 running VM/SP and an IBM 3084 running MVS. It requires the IMSL math library and 220K of RAM for execution. The standard distribution medium for this program is a 9-track 1600 BPI magnetic tape in EBCDIC format.
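
    The exchange-and-accept cycle described above can be illustrated with a heavily simplified Python sketch in which a two-phase lattice evolves by Metropolis-accepted swaps of neighbouring elements; the lattice size, temperature, and the simple unlike-neighbour bond energy that stands in for the finite-element elastic energy are all illustrative assumptions, not part of the MCFET program.

      import numpy as np

      rng = np.random.default_rng(2)

      N, J, kT = 32, 1.0, 0.5                          # lattice size, bond penalty, temperature (assumed)
      phase = rng.choice([0, 1], size=(N, N))          # two-phase microstructure labels
      moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

      def site_energy(lat, i, j):
          # number of unlike nearest neighbours around (i, j), times J (periodic boundaries);
          # this simple bond energy stands in for the finite-element stress-field energy of MCFET
          s = lat[i, j]
          nbrs = (lat[(i + 1) % N, j], lat[(i - 1) % N, j], lat[i, (j + 1) % N], lat[i, (j - 1) % N])
          return J * sum(1 for n in nbrs if n != s)

      for _ in range(200_000):
          i, j = rng.integers(N), rng.integers(N)      # Monte Carlo choice of the exchange site
          di, dj = moves[rng.integers(4)]
          k, l = (i + di) % N, (j + dj) % N            # neighbouring element
          if phase[i, j] == phase[k, l]:
              continue
          e_old = site_energy(phase, i, j) + site_energy(phase, k, l)
          phase[i, j], phase[k, l] = phase[k, l], phase[i, j]      # trial exchange
          dE = site_energy(phase, i, j) + site_energy(phase, k, l) - e_old
          if dE > 0 and rng.random() >= np.exp(-dE / kT):          # Metropolis test; dE <= 0 always kept
              phase[i, j], phase[k, l] = phase[k, l], phase[i, j]  # reject: swap back

      unlike = sum(site_energy(phase, i, j) for i in range(N) for j in range(N)) / (2 * J)
      print("unlike-neighbour bonds per site after coarsening:", unlike / N**2)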

  17. Monte Carlo Example Programs

    SciTech Connect

    Kalos, M.

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo Mathematical technique for calculating the ground state energy of the hydrogen atom.
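
    The FORTRAN sources themselves are not reproduced in this record; as a hedged stand-in, the variational Monte Carlo idea behind a program like VARHATOM can be sketched in Python for the hydrogen ground state with the trial wave function psi = exp(-alpha*r) in atomic units (the parameter value, step sizes, and sample counts below are illustrative assumptions).

      import numpy as np

      rng = np.random.default_rng(3)

      alpha = 0.8                            # variational parameter (alpha = 1 is the exact ground state)
      nsteps, nequil, delta = 200_000, 5_000, 1.0

      def local_energy(r):
          # local energy of psi = exp(-alpha*r) for hydrogen, in Hartree atomic units
          return -0.5 * alpha**2 + (alpha - 1.0) / r

      pos = np.array([1.0, 0.0, 0.0])
      samples = []
      for step in range(nsteps):
          trial = pos + delta * rng.uniform(-1.0, 1.0, size=3)
          r_old, r_new = np.linalg.norm(pos), np.linalg.norm(trial)
          if rng.random() < np.exp(-2.0 * alpha * (r_new - r_old)):   # Metropolis test on |psi|^2
              pos = trial
          if step >= nequil:
              samples.append(local_energy(np.linalg.norm(pos)))

      print("VMC energy  :", np.mean(samples), "Hartree")
      print("analytic    :", 0.5 * alpha**2 - alpha, "Hartree (exact ground state: -0.5)")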

  18. A finite-temperature Monte Carlo algorithm for network forming materials

    NASA Astrophysics Data System (ADS)

    Vink, Richard L. C.

    2014-03-01

    Computer simulations of structure formation in network forming materials (such as amorphous semiconductors, glasses, or fluids containing hydrogen bonds) are challenging. The problem is that large structural changes in the network topology are rare events, making it very difficult to equilibrate these systems. To overcome this problem, Wooten, Winer, and Weaire [Phys. Rev. Lett. 54, 1392 (1985)] proposed a Monte Carlo bond-switch move, constructed to alter the network topology at every step. The resulting algorithm is well suited to study networks at zero temperature. However, since thermal fluctuations are ignored, it cannot be used to probe the phase behavior at finite temperature. In this paper, a modification of the original bond-switch move is proposed, in which detailed balance and ergodicity are both obeyed, thereby facilitating a correct sampling of the Boltzmann distribution for these systems at any finite temperature. The merits of the modified algorithm are demonstrated in a detailed investigation of the melting transition in a two-dimensional 3-fold coordinated network.

  19. Monte-Carlo finite elements gyrokinetic simulations of Alfven modes in tokamaks

    NASA Astrophysics Data System (ADS)

    Bottino, Alberto; Biancalani, Alessandro; Palermo, Francesco; Tronko, Natalia

    2016-10-01

    The global gyrokinetic code ORB5 can simultaneously include electromagnetic perturbations, general ideal MHD axisymmetric equilibria, zonal-flow preserving sources, collisions, and the ability to solve the full core plasma including the magnetic axis. In this work, a Monte Carlo Particle In Cell Finite Element model, starting from a gyrokinetic discrete Lagrangian, is derived and implemented into the ORB5 code. The variations of the Lagrangian are used to obtain the time-continuous equations of motion for the particles and the Finite Element approximation of the field equations. The Noether theorem for the semi-discretised system implies a number of conservation properties for the final set of equations. Linear and nonlinear results concerning Alfvén instabilities, in the presence of an energetic particle population, and microinstabilities, such as electromagnetic ion temperature gradient (ITG) driven modes and kinetic ballooning modes (KBM), will be presented and discussed. Due to losses of energetic particles, Alfvén instabilities can not only affect plasma stability and damage the walls, but also strongly impact the heating efficiency of a fusion reactor and ultimately the possibility of reaching ignition.

  20. A Monte Carlo error analysis program for near-Mars, finite-burn, orbital transfer maneuvers

    NASA Technical Reports Server (NTRS)

    Green, R. N.; Hoffman, L. H.; Young, G. R.

    1972-01-01

    A computer program was developed which performs an error analysis of a minimum-fuel, finite-thrust, transfer maneuver between two Keplerian orbits in the vicinity of Mars. The method of analysis is the Monte Carlo approach where each off-nominal initial orbit is targeted to the desired final orbit. The errors in the initial orbit are described by two covariance matrices of state deviations and tracking errors. The function of the program is to relate these errors to the resulting errors in the final orbit. The equations of motion for the transfer trajectory are those of a spacecraft maneuvering with constant thrust and mass-flow rate in the neighborhood of a single body. The thrust vector is allowed to rotate in a plane with a constant pitch rate. The transfer trajectory is characterized by six control parameters and the final orbit is defined, or partially defined, by the desired target parameters. The program is applicable to the deboost maneuver (hyperbola to ellipse), orbital trim maneuver (ellipse to ellipse), fly-by maneuver (hyperbola to hyperbola), escape maneuvers (ellipse to hyperbola), and deorbit maneuver.
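
    The statistical skeleton of such an error analysis, stripped of the actual finite-thrust targeting, is easy to sketch: sample off-nominal initial states from the given covariance, map each one to the resulting final-orbit deviation, and collect statistics. In the Python sketch below the targeting step is replaced by a fixed, made-up sensitivity matrix, so everything except the Monte Carlo bookkeeping is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(4)

      # Covariance of the initial state deviations (3 position + 3 velocity components);
      # the units and magnitudes are purely illustrative.
      P0 = np.diag([1.0e2, 1.0e2, 1.0e2, 1.0e-2, 1.0e-2, 1.0e-2])

      def targeted_final_orbit_error(dx0):
          # Placeholder for the targeted finite-burn transfer: the real program integrates the
          # constant-thrust equations of motion and retargets each off-nominal orbit; here a
          # fixed, made-up sensitivity matrix maps initial-state deviations to deviations in
          # three final-orbit parameters.
          S = np.array([[1.0e-3, 0.0, 0.0, 2.0, 0.0, 0.0],
                        [0.0, 1.0e-3, 0.0, 0.0, 2.0, 0.0],
                        [0.0, 0.0, 1.0e-3, 0.0, 0.0, 2.0]])
          return S @ dx0

      n_trials = 10_000
      finals = np.array([targeted_final_orbit_error(rng.multivariate_normal(np.zeros(6), P0))
                         for _ in range(n_trials)])

      print("Monte Carlo 1-sigma errors in the final-orbit parameters:", finals.std(axis=0))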

  1. Comparing finite difference time domain and Monte Carlo modeling of human skin interaction with terahertz radiation

    NASA Astrophysics Data System (ADS)

    Ibey, Bennett L.; Payne, Jason A.; Mixon, Dustin G.; Thomas, Robert J.; Roach, William P.

    2008-02-01

    Assessing the biological reaction to electromagnetic (EM) radiation of all frequencies and intensities is essential to the understanding of both the potential damage caused by the radiation and the inherent mechanisms within biology that respond, protect, or propagate damage to surrounding tissues. To understand this reaction, one may model the electromagnetic irradiation of tissue phantoms according to empirically measured or intelligently estimated dielectric properties. Of interest in this study is the terahertz region (0.2-2.0 THz), ranging from millimeter to infrared waves, which has been studied only recently due to a lack of efficient sources. The specific interaction between this radiation and human tissue is greatly influenced by the significant EM absorption of water across this range, which induces a pronounced heating of the tissue surface. This study compares the Monte Carlo Multi-Layer (MCML) and Finite Difference Time Domain (FDTD) approaches for modeling the terahertz irradiation of human dermal tissue. Two congruent simulations were performed on a one-dimensional tissue model with a unit power intensity profile. This work aims to verify the use of either technique for modeling terahertz-tissue interaction for minimally scattering tissues.
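
    For orientation, the weighted photon-packet random walk at the heart of an MCML-style calculation can be sketched in Python as follows; the single homogeneous layer, the isotropic scattering (MCML itself uses the Henyey-Greenstein phase function), and the optical coefficients are illustrative assumptions rather than measured terahertz values.

      import numpy as np

      rng = np.random.default_rng(5)

      # Illustrative optical properties for a single homogeneous layer (not measured THz values).
      mu_a, mu_s, thickness = 20.0, 1.0, 0.1       # absorption/scattering [1/cm], thickness [cm]
      mu_t = mu_a + mu_s
      n_photons, n_bins = 50_000, 50
      absorbed = np.zeros(n_bins)                  # depth-resolved absorption tally

      for _ in range(n_photons):
          z, mu, w = 0.0, 1.0, 1.0                 # depth, direction cosine, packet weight
          while w > 1e-4:                          # weight cutoff (Russian roulette omitted)
              z += mu * rng.exponential(1.0 / mu_t)
              if z < 0.0 or z > thickness:         # packet leaves the slab
                  break
              dep = w * mu_a / mu_t                # deposit the absorbed share of the weight
              absorbed[min(int(z / thickness * n_bins), n_bins - 1)] += dep
              w -= dep
              mu = rng.uniform(-1.0, 1.0)          # isotropic scattering for simplicity

      print("fraction of incident energy absorbed in the slab:", absorbed.sum() / n_photons)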

  2. Monte Carlo analysis for finite-temperature magnetism of Nd2Fe14B permanent magnet

    NASA Astrophysics Data System (ADS)

    Toga, Yuta; Matsumoto, Munehisa; Miyashita, Seiji; Akai, Hisazumi; Doi, Shotaro; Miyake, Takashi; Sakuma, Akimasa

    2016-11-01

    We investigate the effects of magnetic inhomogeneities and thermal fluctuations on the magnetic properties of a rare-earth intermetallic compound, Nd2Fe14B. The constrained Monte Carlo method is applied to a Nd2Fe14B bulk system to realize the experimentally observed spin reorientation and magnetic anisotropy constants KmA (m = 1, 2, 4) at finite temperatures. Subsequently, it is found that the temperature dependence of K1A deviates from the Callen-Callen law, K1A(T) ∝ M(T)^3, even above room temperature, TR ≈ 300 K, when the Fe (Nd) anisotropy terms are removed to leave only the Nd (Fe) anisotropy terms. This is because the exchange couplings between Nd moments and Fe spins are much smaller than those between Fe spins. It is also found that the exponent n in the external magnetic field Hext response of the barrier height FB = FB0(1 - Hext/H0)^n is less than 2 in the low-temperature region below TR, whereas n approaches 2 when T > TR, indicating the presence of Stoner-Wohlfarth-type magnetization rotation. This reflects the fact that the magnetic anisotropy is mainly governed by the K1A term in the T > TR region.

  3. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
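
    As a small example of the random-sampling material covered in such notes, the inverse transform method for the exponential free-flight distance used in transport problems looks like this in Python (the cross-section value is an arbitrary assumption):

      import numpy as np

      rng = np.random.default_rng(6)
      sigma_t = 2.0                          # macroscopic total cross section (arbitrary value)

      # Inverse transform sampling of the free-flight distance p(s) = sigma_t * exp(-sigma_t * s):
      # the CDF is F(s) = 1 - exp(-sigma_t * s), so s = -ln(1 - xi) / sigma_t with xi uniform on [0, 1).
      xi = rng.random(1_000_000)
      s = -np.log(1.0 - xi) / sigma_t

      print("sampled mean free path:", s.mean(), "  expected:", 1.0 / sigma_t)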

  4. Coupled Lagrangian Monte Carlo PDF-CFD computation of gas turbine combustor flowfields with finite-rate chemistry

    SciTech Connect

    Tolpadi, A.K.; Hu, I.Z.; Correa, S.M.; Burrus, D.L.

    1997-07-01

    A coupled Lagrangian Monte Carlo Probability Density Function (PDF)-Eulerian Computational Fluid Dynamics (CFD) technique is presented for calculating steady three-dimensional turbulent reacting flow in a gas turbine combustor. PDF transport methods model turbulence-combustion interactions more accurately than conventional turbulence models with an assumed-shape PDF. The PDF transport equation was solved using a Lagrangian particle tracking Monte Carlo (MC) method. The PDF modeled was over composition only. This MC module has been coupled with CONCERT, which is a fully elliptic three-dimensional body-fitted CFD code based on pressure correction techniques. In an earlier paper, this computational approach was described, but only fast chemistry calculations were presented in a typical aircraft engine combustor. In the present paper, reduced chemistry schemes were incorporated into the MC module that enabled the modeling of finite rate effects in gas turbine flames and therefore the prediction of CO and NOx emissions. With the inclusion of these finite rate effects, the gas temperatures obtained were also more realistic. Initially, a two-scalar scheme was implemented that allowed validation against Raman data taken in a recirculating bluff-body-stabilized CO/H2/N2-air flame. Good agreement of the temperature and major species was obtained. Next, finite rate computations were performed in a single annular aircraft engine combustor by incorporating a simple three-scalar reduced chemistry scheme for Jet A fuel. This three-scalar scheme was an extension of the two-scalar scheme for CO/H2/N2 fuel. The solutions obtained using the present approach were compared with those obtained using the fast chemistry PDF transport approach as well as the presumed-shape PDF method. The calculated exhaust gas temperature using the finite rate model showed the best agreement with measurements made by a thermocouple rake.

  5. Multilevel sequential Monte Carlo samplers

    SciTech Connect

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h0 > h1 > ... > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
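
    A minimal sketch of the plain multilevel Monte Carlo telescoping estimator (without the sequential Monte Carlo extension introduced in this work) is shown below for a toy Euler-Maruyama discretisation of a scalar SDE; the model, parameters, and the fixed per-level sample counts are illustrative assumptions, whereas a production MLMC run would optimise the number of samples per level.

      import numpy as np

      rng = np.random.default_rng(7)

      # Toy problem: estimate E[X_T] for dX = a*X dt + b*X dW with Euler-Maruyama,
      # where level l uses 2**l time steps.  All parameters are illustrative.
      a, b, T, X0 = 0.05, 0.2, 1.0, 1.0

      def coarsest_payoff(n_samples):
          # level 0: a single Euler step over [0, T]
          dW = rng.normal(0.0, np.sqrt(T), n_samples)
          return X0 + a * X0 * T + b * X0 * dW

      def coupled_difference(level, n_samples):
          # E[P_l - P_{l-1}] estimated with the SAME Brownian increments on both levels,
          # which is what makes the level variances (and hence the cost) decay.
          nf = 2**level
          dt = T / nf
          Xf = np.full(n_samples, X0)
          Xc = np.full(n_samples, X0)
          for _ in range(nf // 2):
              dW1 = rng.normal(0.0, np.sqrt(dt), n_samples)
              dW2 = rng.normal(0.0, np.sqrt(dt), n_samples)
              Xf = Xf + a * Xf * dt + b * Xf * dW1                 # two fine steps
              Xf = Xf + a * Xf * dt + b * Xf * dW2
              Xc = Xc + a * Xc * (2 * dt) + b * Xc * (dW1 + dW2)   # one coarse step
          return Xf - Xc

      L, N = 5, 20_000                                       # fixed per-level sample count
      estimate = coarsest_payoff(N).mean()                   # E[P_0]
      for level in range(1, L + 1):
          estimate += coupled_difference(level, N).mean()    # telescoping corrections E[P_l - P_{l-1}]

      print("MLMC estimate of E[X_T]:", estimate, "  exact:", X0 * np.exp(a * T))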

  6. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h0 > h1 > ... > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.

  7. Monte Carlo Reliability Analysis.

    DTIC Science & Technology

    1987-10-01

    References cited in the report include: E. Cinlar, Introduction to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975; R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life Testing; and E. Lewis and Z. Tu, "Monte Carlo Reliability Modeling by Inhomogeneous Markov Processes," Reliab. Engr. 16, 277-296 (1986).

  8. Accuracy and convergence of coupled finite-volume/Monte Carlo codes for plasma edge simulations of nuclear fusion reactors

    SciTech Connect

    Ghoos, K.; Dekeyser, W.; Samaey, G.; Börner, P.; Baelmans, M.

    2016-10-01

    The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise, and Robbins-Monro. Also, practical procedures to estimate the errors in complex codes are proposed. Moreover, first results with more complex models show that an order of magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.

  9. Finite-size effects at critical points with anisotropic correlations: Phenomenological scaling theory and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Binder, Kurt; Wang, Jian-Sheng

    1989-04-01

    Various thermal equilibrium and nonequilibrium phase transitions exist where the correlation lengths in different lattice directions diverge with different exponents ν∥ and ν⊥: uniaxial Lifshitz points, the Kawasaki spin exchange model driven by an electric field, etc. An extension of finite-size scaling concepts to such anisotropic situations is proposed, including a discussion of (generalized) rectangular geometries, with linear dimension L∥ in the special direction and linear dimensions L⊥ in all other directions. The related shape effects for L∥ ≠ L⊥ but isotropic critical points are also discussed. Particular attention is paid to the case where the generalized hyperscaling relation ν∥ + (d-1)ν⊥ = γ + 2β does not hold. As a test of these ideas, a Monte Carlo simulation study of shape effects at an isotropic critical point in the two-dimensional Ising model is presented, considering subsystems of a 1024 × 1024 square lattice at criticality.

  10. Wang-Landau method for calculating Rényi entropies in finite-temperature quantum Monte Carlo simulations.

    PubMed

    Inglis, Stephen; Melko, Roger G

    2013-01-01

    We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges an estimate for an analog to the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
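
    The classical Wang-Landau flat-histogram idea that this quantum algorithm builds on can be illustrated with a short Python sketch for the 2D Ising model: a random walk in energy is biased by the running estimate of the density of states, which is refined until the histogram is flat. The lattice size, flatness criterion, and stopping threshold are illustrative assumptions, and the sketch estimates ln g(E) rather than the Rényi-entropy analogue used in the paper.

      import numpy as np

      rng = np.random.default_rng(8)

      Lsize = 4                              # linear lattice size (kept tiny so the sketch runs quickly)
      N = Lsize * Lsize
      spins = rng.choice([-1, 1], size=(Lsize, Lsize))

      def energy(s):
          # nearest-neighbour Ising energy with periodic boundaries, J = 1
          return -np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

      def bin_index(E):
          return int((E + 2 * N) // 4)       # energies occur in steps of 4 on the square lattice

      n_bins = N + 1
      log_g = np.zeros(n_bins)               # running estimate of ln g(E), the density of states
      hist = np.zeros(n_bins)
      log_f = 1.0                            # Wang-Landau modification factor, ln f
      E = energy(spins)

      while log_f > 1e-4:                    # production runs push ln f much lower (~1e-8)
          for _ in range(5_000 * N):
              i, j = rng.integers(Lsize), rng.integers(Lsize)
              nn = (spins[(i + 1) % Lsize, j] + spins[(i - 1) % Lsize, j]
                    + spins[i, (j + 1) % Lsize] + spins[i, (j - 1) % Lsize])
              dE = 2 * spins[i, j] * nn
              # Wang-Landau acceptance: favour energies whose g(E) estimate is still small
              if rng.random() < np.exp(min(0.0, log_g[bin_index(E)] - log_g[bin_index(E + dE)])):
                  spins[i, j] *= -1
                  E += dE
              log_g[bin_index(E)] += log_f
              hist[bin_index(E)] += 1
          seen = hist > 0                    # flatness is checked only over bins visited so far
          if hist[seen].min() > 0.8 * hist[seen].mean():
              hist[:] = 0
              log_f /= 2.0                   # refine the modification factor

      # With ln g(E) in hand, thermodynamics follows from explicit sums rather than
      # thermodynamic integration, e.g. the mean energy per spin near Tc:
      T = 2.27
      seen = log_g > 0
      Es = np.arange(n_bins) * 4.0 - 2 * N
      w = log_g[seen] - Es[seen] / T
      w -= w.max()
      u = np.sum(Es[seen] * np.exp(w)) / np.sum(np.exp(w)) / N
      print("U(T = 2.27) per spin:", u)      # roughly -1.4; the infinite lattice gives -sqrt(2) at Tc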

  11. PARAMETRIC STUDY OF TISSUE OPTICAL CLEARING BY LOCALIZED MECHANICAL COMPRESSION USING COMBINED FINITE ELEMENT AND MONTE CARLO SIMULATION.

    PubMed

    Vogt, William C; Shen, Haiou; Wang, Ge; Rylander, Christopher G

    2010-07-01

    Tissue Optical Clearing Devices (TOCDs) have been shown to increase light transmission through mechanically compressed regions of naturally turbid biological tissues. We hypothesize that zones of high compressive strain induced by TOCD pins produce localized water displacement and reversible changes in tissue optical properties. In this paper, we demonstrate a novel combined mechanical finite element model and optical Monte Carlo model which simulates TOCD pin compression of an ex vivo porcine skin sample and modified spatial photon fluence distributions within the tissue. Results of this simulation qualitatively suggest that light transmission through the skin can be significantly affected by changes in compressed tissue geometry as well as concurrent changes in tissue optical properties. The development of a comprehensive multi-domain model of TOCD application to tissues such as skin could ultimately be used as a framework for optimizing future design of TOCDs.

  12. Multi-Resolution Markov-Chain-Monte-Carlo Approach for System Identification with an Application to Finite-Element Models

    SciTech Connect

    Johannesson, G; Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G

    2005-02-07

    Estimating unknown system configurations/parameters by combining system knowledge gained from a computer simulation model on one hand and from observed data on the other hand is challenging. An example of such an inverse problem is detecting and localizing potential flaws or changes in a structure by using a finite-element model and measured vibration/displacement data. We propose a probabilistic approach based on Bayesian methodology. This approach yields not only a single best-guess solution, but a posterior probability distribution over the parameter space. In addition, the Bayesian approach provides a natural framework to accommodate prior knowledge. A Markov chain Monte Carlo (MCMC) procedure is proposed to generate samples from the posterior distribution (an ensemble of likely system configurations given the data). The proposed MCMC procedure explores the parameter space at different resolutions (scales), resulting in a more robust and efficient procedure. The large-scale exploration steps are carried out using coarser-resolution finite-element models, yielding a considerable decrease in computational time, which can be crucial for large finite-element models. An application is given using synthetic displacement data from a simple cantilever beam with MCMC exploration carried out at three different resolutions.

  13. Fundamentals of Monte Carlo

    SciTech Connect

    Wollaber, Allan Benton

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
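
    The "estimating pi" example mentioned in the outline, together with the Central Limit Theorem error bar, fits in a few lines of Python (the sample count is arbitrary):

      import numpy as np

      rng = np.random.default_rng(9)
      n = 1_000_000
      x, y = rng.random(n), rng.random(n)            # points uniform in the unit square
      inside = (x**2 + y**2 < 1.0).mean()            # fraction landing inside the quarter circle
      estimate = 4.0 * inside
      stderr = 4.0 * np.sqrt(inside * (1.0 - inside) / n)   # 1-sigma error from the Central Limit Theorem
      print(f"pi is approximately {estimate:.4f} +/- {stderr:.4f}")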

  14. Monte Carlo eikonal scattering

    NASA Astrophysics Data System (ADS)

    Gibbs, W. R.; Dedonder, J. P.

    2012-08-01

    Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation, and with realistic densities for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles which vary with the nuclear pairs and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm^-1.

  15. Monte Carlo fluorescence microtomography

    NASA Astrophysics Data System (ADS)

    Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge

    2011-07-01

    Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light would significantly degrade the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.

  16. Finite temperature path integral Monte Carlo simulations of structural and dynamical properties of ArN-CO2 clusters

    NASA Astrophysics Data System (ADS)

    Wang, Lecheng; Xie, Daiqian

    2012-08-01

    We report finite temperature quantum mechanical simulations of structural and dynamical properties of ArN-CO2 clusters using a path integral Monte Carlo algorithm. The simulations are based on a newly developed analytical Ar-CO2 interaction potential obtained by fitting ab initio results to an anisotropic two-dimensional Morse/Long-range function. The calculated distributions of argon atoms around the CO2 molecule in ArN-CO2 clusters of different sizes are consistent with previous studies of the configurations of the clusters. A first-order perturbation theory is used to quantitatively predict the CO2 vibrational frequency shift in different clusters. The first solvation shell is completed at N = 17. Interestingly, our simulations for larger ArN-CO2 clusters showed several different structures of the argon shell around the doped CO2 molecule. The observed two distinct peaks (2338.8 and 2344.5 cm-1) in the ν3 band of CO2 may be due to the different arrangements of argon atoms around the dopant molecule.

  17. Calculation of dose distribution in compressible breast tissues using finite element modeling, Monte Carlo simulation and thermoluminescence dosimeters.

    PubMed

    Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Hematiyan, Mohammad Rahim; Koontz, Craig; Meigooni, Ali S

    2015-12-07

    Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost(®) brachytherapy. However, there is no systematic method for determination of the dose distribution for uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled by using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 N compression force (with 43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.

  18. Scaling/LER study of Si GAA nanowire FET using 3D finite element Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Elmessary, Muhammad A.; Nagy, Daniel; Aldegunde, Manuel; Seoane, Natalia; Indalecio, Guillermo; Lindberg, Jari; Dettmer, Wulf; Perić, Djordje; García-Loureiro, Antonio J.; Kalna, Karol

    2017-02-01

    A 3D Finite Element (FE) Monte Carlo (MC) simulation toolbox incorporating 2D Schrödinger equation quantum corrections is employed to simulate ID-VG characteristics of a 22 nm gate length gate-all-around (GAA) Si nanowire (NW) FET, demonstrating an excellent agreement against experimental data at both low and high drain biases. We then scale the Si GAA NW according to the ITRS specifications to a gate length of 10 nm, predicting that the NW FET will deliver the required on-current of above 1 mA/μm and a superior electrostatic integrity with a nearly ideal sub-threshold slope of 68 mV/dec and a DIBL of 39 mV/V. In addition, we use a calibrated 3D FE quantum corrected drift-diffusion (DD) toolbox to investigate the effects of NW line-edge roughness (LER) induced variability on the sub-threshold characteristics (threshold voltage (VT), OFF-current (IOFF), sub-threshold slope (SS), and drain-induced-barrier-lowering (DIBL)) for the 22 nm and 10 nm gate length GAA NW FETs at low and high drain biases. We simulate variability with two LER correlation lengths (CL = 20 nm and 10 nm) and three root mean square values (RMS = 0.6, 0.7 and 0.85 nm).

  19. Experimental validation of Monte Carlo and finite-element methods for the estimation of the optical path length in inhomogeneous tissue

    NASA Astrophysics Data System (ADS)

    Okada, Eiji; Schweiger, Martin; Arridge, Simon R.; Firbank, Michael; Delpy, David T.

    1996-07-01

    To validate models of light propagation in biological tissue, experiments to measure the mean time of flight have been carried out on several solid cylindrical layered phantoms. The optical properties of the inner cylinders of the phantoms were close to those of adult brain white matter, whereas a range of scattering or absorption coefficients was chosen for the outer layer. Experimental results for the mean optical path length have been compared with the predictions of both an exact Monte Carlo (MC) model and a diffusion equation, with two differing boundary conditions implemented in a finite-element method (FEM). The MC and experimental results are in good agreement despite poor statistics for large fiber spacings, whereas good agreement with the FEM prediction requires a careful choice of proper boundary conditions.

  20. Monte Carlo Computation of the Finite-Size Scaling Function: an Alternative Approach

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Kwon; de Souza, Adauto J. F.; Landau, D. P.

    1996-03-01

    We show how to compute numerically a finite-size-scaling function which is particularly effective in extracting accurate infinite-volume-limit values (bulk values) of certain physical quantities [1]. We illustrate our procedure for the two- and three-dimensional Ising models, and report our bulk values for the correlation length, magnetic susceptibility, and renormalized four-point coupling constant. Based on these bulk values we extract the values of various critical parameters. [1] J.-K. Kim, Euro. Phys. Lett. 28, 211 (1994).

  1. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  2. Quantum Gibbs ensemble Monte Carlo

    SciTech Connect

    Fantoni, Riccardo; Moroni, Saverio

    2014-09-21

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of 4He in two dimensions.

  3. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  4. Wormhole Hamiltonian Monte Carlo.

    PubMed

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2014-07-31

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function.

  5. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  6. The Rational Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Clark, Michael

    2006-12-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.

  7. Isotropic Monte Carlo Grain Growth

    SciTech Connect

    Mason, J.

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.

  8. Radiative transfer equation for predicting light propagation in biological media: comparison of a modified finite volume method, the Monte Carlo technique, and an exact analytical solution.

    PubMed

    Asllanaj, Fatmir; Contassot-Vivier, Sylvain; Liemert, André; Kienle, Alwin

    2014-01-01

    We examine the accuracy of a modified finite volume method compared to analytical and Monte Carlo solutions for solving the radiative transfer equation. The model is used for predicting light propagation within a two-dimensional absorbing and highly forward-scattering medium such as biological tissue subjected to a collimated light beam. Numerical simulations for the spatially resolved reflectance and transmittance are presented considering refractive index mismatch with Fresnel reflection at the interface, homogeneous and two-layered media. Time-dependent as well as steady-state cases are considered. In the steady state, it is found that the modified finite volume method is in good agreement with the other two methods. The relative differences between the solutions are found to decrease with spatial mesh refinement of the modified finite volume method, falling below 2.4%. In the time domain, the fourth-order Runge-Kutta method is used for the time semi-discretization of the radiative transfer equation. Agreement among the modified finite volume method, the Runge-Kutta method, and the Monte Carlo solutions is shown, but with relative differences higher than in the steady state.

  9. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: a path for the optimization of low-energy many-body basis expansions

    SciTech Connect

    Kim, Jeongnim; Reboredo, Fernando A

    2014-01-01

    The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one spanned by the lower energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  10. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  11. Suitable Candidates for Monte Carlo Solutions.

    ERIC Educational Resources Information Center

    Lewis, Jerome L.

    1998-01-01

    Discusses Monte Carlo methods, powerful and useful techniques that rely on random numbers to solve deterministic problems whose solutions may be too difficult to obtain using conventional mathematics. Reviews two excellent candidates for the application of Monte Carlo methods. (ASK)

  12. A Classroom Note on Monte Carlo Integration.

    ERIC Educational Resources Information Center

    Kolpas, Sid

    1998-01-01

    The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
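
    The abstract refers to a Quick BASIC program; the same steps can be sketched in a few lines of Python. The routine below is an illustrative sample-mean Monte Carlo integrator with a crude error estimate, not the program presented in the note.

        import random

        def mc_integrate(f, a, b, n=100_000):
            """Sample-mean Monte Carlo estimate of the integral of f over [a, b],
            with a rough standard error from the sample variance."""
            total = total_sq = 0.0
            for _ in range(n):
                y = f(a + (b - a) * random.random())
                total += y
                total_sq += y * y
            mean = total / n
            var = total_sq / n - mean * mean
            width = b - a
            return width * mean, width * (var / n) ** 0.5

        estimate, err = mc_integrate(lambda x: x * x, 0.0, 1.0)  # exact value is 1/3
        print(f"integral = {estimate:.4f} +/- {err:.4f}")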

  13. Applications of Monte Carlo Methods in Calculus.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    1990-01-01

    Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
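
    As a hedged illustration of one of the listed applications (maximizing a function by random sampling), the short Python sketch below evaluates a function at uniformly random points and keeps the best one; the function and parameters are arbitrary examples, not taken from the article.

        import random

        def mc_maximize(f, a, b, n=10_000):
            # Crude Monte Carlo search: evaluate f at n uniform random points and keep the best.
            best_x, best_f = a, f(a)
            for _ in range(n):
                x = a + (b - a) * random.random()
                if f(x) > best_f:
                    best_x, best_f = x, f(x)
            return best_x, best_f

        x_star, f_star = mc_maximize(lambda x: x * (2 - x), 0.0, 2.0)  # true maximum at x = 1
        print(x_star, f_star)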

  14. Monte Carlo Simulation of Plumes Spectral Emission

    DTIC Science & Technology

    2005-06-07

    Henyey-Greenstein scattering indicatrix SUBROUTINE; calculation of spectral (group) phase function of ... Monte Carlo simulation of plumes ... calculations; b) computing code SRT-RTMC-NSM intended for narrow-band Spectral Radiation Transfer Ray Tracing simulation by the Monte Carlo method with ... project) computing codes for random (Monte Carlo) simulation of molecular lines with reference to a problem of radiation transfer

  15. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
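
    A minimal example of the kind of Monte Carlo study described here, sketched in Python under illustrative assumptions: samples are drawn from a population whose mean is known exactly (an exponential distribution), and the empirical coverage of the usual 95% confidence interval for the mean is estimated.

        import numpy as np

        rng = np.random.default_rng(1)
        n, reps = 20, 10_000

        # Population whose characteristics are known exactly: Exponential(1), true mean = 1.
        covered = 0
        for _ in range(reps):
            sample = rng.exponential(1.0, size=n)
            m, s = sample.mean(), sample.std(ddof=1)
            lo, hi = m - 1.96 * s / np.sqrt(n), m + 1.96 * s / np.sqrt(n)
            covered += (lo <= 1.0 <= hi)

        print("empirical coverage of the nominal 95% interval:", covered / reps)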

  16. Quantum Monte Carlo applied to solids

    SciTech Connect

    Shulenburger, Luke; Mattsson, Thomas R.

    2013-12-01

    We apply diffusion quantum Monte Carlo to a broad set of solids, benchmarking the method by comparing bulk structural properties (equilibrium volume and bulk modulus) to experiment and density functional theory (DFT) based theories. The test set includes materials with many different types of binding including ionic, metallic, covalent, and van der Waals. We show that, on average, the accuracy is comparable to or better than that of DFT when using the new generation of functionals, including one hybrid functional and two dispersion corrected functionals. The excellent performance of quantum Monte Carlo on solids is promising for its application to heterogeneous systems and high-pressure/high-density conditions. Important to the results here is the application of a consistent procedure with regards to the several approximations that are made, such as finite-size corrections and pseudopotential approximations. This test set allows for any improvements in these methods to be judged in a systematic way.

  17. KULLBACK-LEIBLER MARKOV CHAIN MONTE CARLO — A NEW ALGORITHM FOR FINITE MIXTURE ANALYSIS AND ITS APPLICATION TO GENE EXPRESSION DATA

    PubMed Central

    TATARINOVA, TATIANA; BOUCK, JOHN; SCHUMITZKY, ALAN

    2009-01-01

    In this paper, we study Bayesian analysis of nonlinear hierarchical mixture models with a finite but unknown number of components. Our approach is based on Markov chain Monte Carlo (MCMC) methods. One of the applications of our method is directed to the clustering problem in gene expression analysis. From a mathematical and statistical point of view, we discuss the following topics: theoretical and practical convergence problems of the MCMC method; determination of the number of components in the mixture; and computational problems associated with likelihood calculations. In the existing literature, these problems have mainly been addressed in the linear case. One of the main contributions of this paper is developing a method for the nonlinear case. Our approach is based on a combination of methods including Gibbs sampling, random permutation sampling, birth-death MCMC, and Kullback-Leibler distance. PMID:18763739
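
    For orientation only, the sketch below implements a plain Gibbs sampler for a two-component univariate Gaussian mixture with known variance, a far simpler linear setting than the nonlinear hierarchical models treated in the paper; the random permutation, birth-death, and Kullback-Leibler steps are omitted, and all priors and parameters are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic data from a two-component Gaussian mixture with known variance sigma2.
        sigma2, tau2 = 1.0, 100.0
        x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 50)])
        n, K = len(x), 2

        mu = rng.normal(0.0, 1.0, K)          # component means (to be sampled)
        w = np.full(K, 1.0 / K)               # mixture weights (to be sampled)

        for sweep in range(2000):
            # 1) allocations z_i | mu, w
            logp = np.log(w) - 0.5 * (x[:, None] - mu[None, :]) ** 2 / sigma2
            p = np.exp(logp - logp.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            z = np.array([rng.choice(K, p=pi) for pi in p])

            # 2) means mu_k | z  (conjugate normal prior N(0, tau2))
            for k in range(K):
                nk = np.sum(z == k)
                post_var = 1.0 / (nk / sigma2 + 1.0 / tau2)
                post_mean = post_var * x[z == k].sum() / sigma2
                mu[k] = rng.normal(post_mean, np.sqrt(post_var))

            # 3) weights w | z  (Dirichlet(1, ..., 1) prior)
            counts = np.bincount(z, minlength=K)
            w = rng.dirichlet(1.0 + counts)

        print("posterior draw of means:", np.sort(mu), "weights:", w)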

  18. Kullback-Leibler Markov chain Monte Carlo--a new algorithm for finite mixture analysis and its application to gene expression data.

    PubMed

    Tatarinova, Tatiana; Bouck, John; Schumitzky, Alan

    2008-08-01

    In this paper, we study Bayesian analysis of nonlinear hierarchical mixture models with a finite but unknown number of components. Our approach is based on Markov chain Monte Carlo (MCMC) methods. One of the applications of our method is directed to the clustering problem in gene expression analysis. From a mathematical and statistical point of view, we discuss the following topics: theoretical and practical convergence problems of the MCMC method; determination of the number of components in the mixture; and computational problems associated with likelihood calculations. In the existing literature, these problems have mainly been addressed in the linear case. One of the main contributions of this paper is developing a method for the nonlinear case. Our approach is based on a combination of methods including Gibbs sampling, random permutation sampling, birth-death MCMC, and Kullback-Leibler distance.

  19. CTRANS: A Monte Carlo program for radiative transfer in plane parallel atmospheres with imbedded finite clouds: Development, testing and user's guide

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The CTRANS program, designed to perform radiative transfer computations in an atmosphere with horizontal inhomogeneities (clouds), is described. Since the atmosphere-ground system was to be richly detailed, the Monte Carlo method was employed. This means that results are obtained through direct modeling of the physical process of radiative transport. The effects of atmospheric or ground albedo pattern detail are essentially built up from their impact upon the transport of individual photons. The CTRANS program actually tracks the photons backwards through the atmosphere, initiating them at a receiver and following them backwards along their path to the Sun. The pattern of incident photons generated through backwards tracking automatically reflects the importance to the receiver of each region of the sky. Further, through backwards tracking, the impact of the finite field of view of the receiver and variations in its response over the field of view can be directly simulated.

  20. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  1. Implicit Monte Carlo with a linear discontinuous finite element material solution and piecewise non-constant opacity

    SciTech Connect

    Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.; Densmore, Jeffery D.

    2016-02-23

    Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying source tilting along with a source-tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~ 1/T^3, but we formulate and test a slight extension for opacities ~ 1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.

  2. Implicit Monte Carlo with a linear discontinuous finite element material solution and piecewise non-constant opacity

    DOE PAGES

    Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.; ...

    2016-02-23

    Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying source tilting along with a source-tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~ 1/T^3, but we formulate and test a slight extension for opacities ~ 1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.

  3. The D0 Monte Carlo

    SciTech Connect

    Womersley, J. (Dept. of Physics)

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb^-1 data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  4. Path Integral Monte Carlo Methods for Fermions

    NASA Astrophysics Data System (ADS)

    Ethan, Ethan; Dubois, Jonathan; Ceperley, David

    2014-03-01

    In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.

  5. COMPARISONS OF THE FINITE-ELEMENT-WITH-DISCONTIGUOUS-SUPPORT METHOD TO CONTINUOUS-ENERGY MONTE CARLO FOR PIN-CELL PROBLEMS

    SciTech Connect

    A. T. Till; M. Hanuš; J. Lou; J. E. Morel; M. L. Adams

    2016-05-01

    The standard multigroup (MG) method for energy discretization of the transport equation can be sensitive to approximations in the weighting spectrum chosen for cross-section averaging. As a result, MG often inaccurately treats important phenomena such as self-shielding variations across a material. From a finite-element viewpoint, MG uses a single fixed basis function (the pre-selected spectrum) within each group, with no mechanism to adapt to local solution behavior. In this work, we introduce the Finite-Element-with-Discontiguous-Support (FEDS) method, whose only approximation with respect to energy is that the angular flux is a linear combination of unknowns multiplied by basis functions. A basis function is non-zero only in the discontiguous set of energy intervals associated with its energy element. Discontiguous energy elements are generalizations of bands and are determined by minimizing a norm of the difference between snapshot spectra and their averages over the energy elements. We begin by presenting the theory of the FEDS method. We then compare to continuous-energy Monte Carlo for one-dimensional slab and two-dimensional pin-cell problems. We find FEDS to be accurate and efficient at producing quantities of interest such as reaction rates and eigenvalues. Results show that FEDS converges at a rate that is approximately first-order in the number of energy elements and that FEDS is less sensitive to weighting spectrum than standard MG.

  6. Iterated finite-orbit Monte Carlo simulations with full-wave fields for modeling tokamak ion cyclotron resonance frequency wave heating experiments

    SciTech Connect

    Choi, M.; Chan, V. S.; Lao, L. L.; Pinsker, R. I.; Green, D.; Berry, L. A.; Jaeger, F.; Park, J. M.; Heidbrink, W. W.; Liu, D.; Podesta, M.; Harvey, R.; Smithe, D. N.; Bonoli, P.

    2010-05-15

    The five-dimensional finite-orbit Monte Carlo code ORBIT-RF [M. Choi et al., Phys. Plasmas 12, 1 (2005)] is successfully coupled with the two-dimensional full-wave code all-orders spectral algorithm (AORSA) [E. F. Jaeger et al., Phys. Plasmas 13, 056101 (2006)] in a self-consistent way to achieve improved predictive modeling for ion cyclotron resonance frequency (ICRF) wave heating experiments in present fusion devices and future ITER [R. Aymar et al., Nucl. Fusion 41, 1301 (2001)]. The ORBIT-RF/AORSA simulations reproduce fast-ion spectra and spatial profiles qualitatively consistent with fast ion D-alpha [W. W. Heidbrink et al., Plasma Phys. Controlled Fusion 49, 1457 (2007)] spectroscopic data in both DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] and National Spherical Torus Experiment [M. Ono et al., Nucl. Fusion 41, 1435 (2001)] high harmonic ICRF heating experiments. This work verifies that both the finite-orbit-width effect of fast ions due to their drift motion along the torus and iterations between the fast-ion distribution and wave fields are important in modeling ICRF heating experiments.

  7. Iterated finite-orbit Monte Carlo simulations with full-wave fields for modeling tokamak ion cyclotron resonance frequency wave heating experiments

    SciTech Connect

    Choi, M.; Green, David L; Heidbrink, W. W.; Harvey, R. W.; Liu, D.; Chan, V. S.; Berry, Lee A; Jaeger, Erwin Frederick; Lao, L.L.; Pinsker, R. I.; Podesta, M.; Smithe, D. N.; Park, J. M.; Bonoli, P.

    2010-01-01

    The five-dimensional finite-orbit Monte Carlo code ORBIT-RF [M. Choi et al., Phys. Plasmas 12, 1 (2005)] is successfully coupled with the two-dimensional full-wave code all-orders spectral algorithm (AORSA) [E. F. Jaeger et al., Phys. Plasmas 13, 056101 (2006)] in a self-consistent way to achieve improved predictive modeling for ion cyclotron resonance frequency (ICRF) wave heating experiments in present fusion devices and future ITER [R. Aymar et al., Nucl. Fusion 41, 1301 (2001)]. The ORBIT-RF/AORSA simulations reproduce fast-ion spectra and spatial profiles qualitatively consistent with fast ion D-alpha [W. W. Heidbrink et al., Plasma Phys. Controlled Fusion 49, 1457 (2007)] spectroscopic data in both DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] and National Spherical Torus Experiment [M. Ono et al., Nucl. Fusion 41, 1435 (2001)] high harmonic ICRF heating experiments. This work verifies that both the finite-orbit-width effect of fast ions due to their drift motion along the torus and iterations between the fast-ion distribution and wave fields are important in modeling ICRF heating experiments. (C) 2010 American Institute of Physics. [doi:10.1063/1.3314336]

  8. Semistochastic Projector Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Petruzielo, F. R.; Holmes, A. A.; Changlani, Hitesh J.; Nightingale, M. P.; Umrigar, C. J.

    2012-12-01

    We introduce a semistochastic implementation of the power method to compute, for very large matrices, the dominant eigenvalue and expectation values involving the corresponding eigenvector. The method is semistochastic in that the matrix multiplication is partially implemented numerically exactly and partially stochastically with respect to expectation values only. Compared to a fully stochastic method, the semistochastic approach significantly reduces the computational time required to obtain the eigenvalue to a specified statistical uncertainty. This is demonstrated by the application of the semistochastic quantum Monte Carlo method to systems with a sign problem: the fermion Hubbard model and the carbon dimer.

  9. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1, E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1, E2).
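
    A minimal flat-histogram sketch in the spirit of SAMC, applied to the density of states g(E) of a small one-dimensional Ising ring. It uses the stochastic-approximation gain sequence gamma_t = t0/max(t0, t); the model, the gain-schedule constants, and the absence of any energy-range partitioning are illustrative simplifications, not details of the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        N, t0, n_steps = 32, 10_000, 500_000
        spins = rng.choice([-1, 1], size=N)

        def total_energy(s):
            return -int(np.sum(s * np.roll(s, 1)))   # periodic 1D Ising ring

        E = total_energy(spins)
        lng = {}                                      # running estimate of ln g(E)

        for t in range(1, n_steps + 1):
            i = rng.integers(N)
            # Energy change of flipping spin i (only two bonds are touched).
            dE = int(2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N]))
            E_new = E + dE
            # Flat-histogram acceptance: favour energies whose current ln g is small.
            if np.log(rng.random()) < lng.get(E, 0.0) - lng.get(E_new, 0.0):
                spins[i] = -spins[i]
                E = E_new
            # Stochastic-approximation update of ln g at the energy now occupied;
            # the decaying gain gamma_t is what distinguishes SAMC from Wang-Landau.
            gamma = t0 / max(t0, t)
            lng[E] = lng.get(E, 0.0) + gamma

        for energy in sorted(lng):
            print(energy, lng[energy])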

  10. Single scatter electron Monte Carlo

    SciTech Connect

    Svatos, M.M.

    1997-03-01

    A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.

  11. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.

    2014-10-01

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^-2) or O(ε^-2 (ln ε)^2), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^-3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^-5. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
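
    To make the telescoping multilevel idea concrete, here is a generic multilevel Monte Carlo sketch for the expectation of a scalar SDE solved with Euler-Maruyama, in the style of Giles' original construction rather than the Langevin Coulomb-collision model of the paper; the drift, diffusion, level count, and per-level sample sizes are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(4)
        a, b, T, X0 = 0.05, 0.2, 1.0, 1.0     # drift, diffusion, horizon, initial value

        def level_estimator(level, n_paths):
            """Mean of (fine - coarse) Euler-Maruyama payoffs X_T on one MLMC level,
            with both paths driven by the same Brownian increments."""
            nf = 2 ** level                    # fine-path steps
            hf = T / nf
            dW = rng.normal(0.0, np.sqrt(hf), size=(n_paths, nf))
            Xf = np.full(n_paths, X0)
            for k in range(nf):
                Xf += a * Xf * hf + b * Xf * dW[:, k]
            if level == 0:
                return Xf.mean()
            hc = 2 * hf                        # coarse path: pairs of increments summed
            Xc = np.full(n_paths, X0)
            for k in range(nf // 2):
                Xc += a * Xc * hc + b * Xc * (dW[:, 2 * k] + dW[:, 2 * k + 1])
            return (Xf - Xc).mean()

        # Telescoping MLMC estimate of E[X_T]; sample counts per level are chosen ad hoc here,
        # whereas the paper optimises them to balance finite-timestep and sampling errors.
        levels, samples = 5, [40_000, 20_000, 10_000, 5_000, 2_500, 1_250]
        estimate = sum(level_estimator(l, samples[l]) for l in range(levels + 1))
        print("MLMC estimate of E[X_T]:", estimate, " exact:", X0 * np.exp(a * T))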

  12. Multilevel Monte Carlo simulation of Coulomb collisions

    DOE PAGES

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^-2) or O(ε^-2 (ln ε)^2), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^-3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^-5. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  13. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^-2) or O(ε^-2 (ln ε)^2), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^-3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^-5. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  14. Complexity of Monte Carlo and deterministic dose-calculation methods.

    PubMed

    Börgers, C

    1998-03-01

    Grid-based deterministic dose-calculation methods for radiotherapy planning require the use of six-dimensional phase space grids. Because of the large number of phase space dimensions, a growing number of medical physicists appear to believe that grid-based deterministic dose-calculation methods are not competitive with Monte Carlo methods. We argue that this conclusion may be premature. Our results do suggest, however, that finite difference or finite element schemes with orders of accuracy greater than one will probably be needed if such methods are to compete well with Monte Carlo methods for dose calculations.

  15. Modeling the impact of prostate edema on LDR brachytherapy: a Monte Carlo dosimetry study based on a 3D biphasic finite element biomechanical model

    NASA Astrophysics Data System (ADS)

    Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.

    2017-03-01

    Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken into consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model’s computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements, considering the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated based on previous findings. According to our results, for a prostate volume increase of 10–20% the Day30 urethra D10 dose metric is higher by 4.2%–10.5% compared to the Day1 value. Introducing the edema dynamics into Day30 dosimetry reveals a significant global dose overestimation in the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy by accounting for post-implant dose alterations during the planning procedure.

  16. General polarizability and hyperpolarizability estimators for the path-integral Monte Carlo method applied to small atoms, ions, and molecules at finite temperatures

    NASA Astrophysics Data System (ADS)

    Tiihonen, Juha; Kylänpää, Ilkka; Rantala, Tapio T.

    2016-09-01

    The nonlinear optical properties of matter have a broad relevance and many methods have been invented to compute them from first principles. However, the effects of electronic correlation, finite temperature, and breakdown of the Born-Oppenheimer approximation have turned out to be challenging and tedious to model. Here we propose a straightforward approach and derive general field-free polarizability and hyperpolarizability estimators for the path-integral Monte Carlo method. The estimators are applied to small atoms, ions, and molecules with one or two electrons. With the adiabatic, i.e., Born-Oppenheimer, approximation we obtain accurate tensorial ground state polarizabilities, while the nonadiabatic simulation adds in considerable rovibrational effects and thermal coupling. In both cases, the 0 K, or ground-state, limit is in excellent agreement with the literature. Furthermore, we report here the internal dipole moment of the PsH molecule, the temperature dependence of the polarizabilities of H-, and the average dipole polarizabilities and the ground-state hyperpolarizabilities of HeH+ and H3+.

  17. Water quality model parameter identification of an open channel in a long distance water transfer project based on finite difference, difference evolution and Monte Carlo.

    PubMed

    Shao, Dongguo; Yang, Haidong; Xiao, Yi; Liu, Biyu

    2014-01-01

    A new method is proposed based on the finite difference method (FDM), the differential evolution algorithm and Markov Chain Monte Carlo (MCMC) simulation to identify water quality model parameters of an open channel in a long distance water transfer project. Firstly, the parameter identification problem is cast as a Bayesian estimation problem: the forward numerical model is solved by FDM and the posterior probability density function of the parameters is derived. The parameters are then estimated using a sampling method that combines the differential evolution algorithm with MCMC simulation. Finally, the proposed method is compared with FDM-MCMC in a twin experiment. The results show that, compared with FDM-MCMC, the proposed method identifies the water quality model parameters of an open channel in a long distance water transfer project under different scenarios with fewer iterations, higher reliability and better noise tolerance. It therefore provides a new approach to the traceability problem in sudden water pollution accidents.
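
    A toy version of the Bayesian parameter-identification step, sketched in Python: a single decay-rate parameter of a simple first-order decay model is recovered from synthetic noisy data with a random-walk Metropolis sampler. The forward model, prior, and proposal are illustrative stand-ins; the paper uses a finite-difference water-quality model and differential-evolution proposals.

        import numpy as np

        rng = np.random.default_rng(5)

        # Forward model: first-order decay c(t) = c0 * exp(-k t), standing in for the
        # finite-difference water-quality model in the paper (purely illustrative).
        t_obs = np.linspace(0.0, 10.0, 21)
        c0, k_true, noise = 10.0, 0.3, 0.2
        data = c0 * np.exp(-k_true * t_obs) + rng.normal(0.0, noise, t_obs.size)

        def log_post(k):
            if k <= 0.0:
                return -np.inf                         # flat prior on k > 0
            resid = data - c0 * np.exp(-k * t_obs)
            return -0.5 * np.sum(resid ** 2) / noise ** 2

        # Random-walk Metropolis (the paper combines differential evolution with MCMC instead).
        k, lp = 1.0, log_post(1.0)
        chain = []
        for _ in range(20_000):
            k_prop = k + rng.normal(0.0, 0.05)
            lp_prop = log_post(k_prop)
            if np.log(rng.random()) < lp_prop - lp:
                k, lp = k_prop, lp_prop
            chain.append(k)

        burned = np.array(chain[5_000:])
        print("posterior mean of k:", burned.mean(), "+/-", burned.std())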

  18. Momentum distribution and non-Fermi-liquid behavior in low-doped two-orbital model: Finite-size cluster quantum Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Kashurnikov, Vladimir A.; Krasavin, Andrey V.; Zhumagulov, Yaroslav V.

    2016-12-01

    The two-dimensional two-orbital Hubbard model is studied with the use of a finite-size cluster world-line quantum Monte Carlo algorithm. This model is widely used for simulation of the band structure of FeAs clusters, which are structure elements of Fe-based high-temperature superconductors. The choice of a special basis set of hypersites allowed us to take into account four-fermion operator terms and to overcome partly the sign problem. Spectral functions and the density of states for various parameters of the model are obtained in the undoped and low-doped regimes. The correlated distortion of the spectral density with the change of doping is observed, and the applicability of the "hard-band" approximation in the doped regime is demonstrated. Profiles of the momentum distribution are obtained for the first Brillouin zone; they have a pronounced jump near the Fermi level, which decreases with the growth of the strength of the interaction. The invariance of the Fermi surface with respect to the strength of the interaction is confirmed. Nesting is found in the case of electron and hole doping. Fermi-liquid parameters of the model are derived. The Z factor grows sharply with increasing doping and monotonously decreases with the growth of the strength of the interaction. Moreover, electron-hole doping asymmetry of the Z factor is revealed. The non-Fermi-liquid behavior and the deviation from Luttinger theorem are observed.

  19. Challenges of Monte Carlo Transport

    SciTech Connect

    Long, Alex Roberts

    2016-06-10

    These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. Open SHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.

  20. SciDAC Center for Simulation of Wave-Plasma Interactions - Iterated Finite-Orbit Monte Carlo Simulations with Full-Wave Fields for Modeling Tokamak ICRF Wave Heating Experiments - Final Report

    SciTech Connect

    Choi, Myunghee; Chan, Vincent S.

    2014-02-28

    This final report describes the work performed under U.S. Department of Energy Cooperative Agreement DE-FC02-08ER54954 for the period April 1, 2011 through March 31, 2013. The goal of this project was to perform iterated finite-orbit Monte Carlo simulations with full-wave fields for modeling tokamak ICRF wave heating experiments. In year 1, the finite-orbit Monte-Carlo code ORBIT-RF and its iteration algorithms with the full-wave code AORSA were improved to enable a systematic study of the factors responsible for the discrepancy between the simulated and the measured fast-ion FIDA signals in the DIII-D and NSTX ICRF fast-wave (FW) experiments. In year 2, ORBIT-RF was coupled to the TORIC full-wave code for a comparative study of ORBIT-RF/TORIC and ORBIT-RF/AORSA results in FW experiments.

  1. Geometric Monte Carlo and black Janus geometries

    NASA Astrophysics Data System (ADS)

    Bak, Dongsu; Kim, Chanju; Kim, Kyung Kiu; Min, Hyunsoo; Song, Jeong-Pil

    2017-04-01

    We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three and five dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of the trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.

  2. TH-C-12A-08: New Compact 10 MV S-Band Linear Accelerator: 3D Finite-Element Design and Monte Carlo Dose Simulations

    SciTech Connect

    Baillie, D; St Aubin, J; Fallone, B; Steciw, S

    2014-06-15

    Purpose: To design a new compact S-band linac waveguide capable of producing a 10 MV x-ray beam, while maintaining the length (27.5 cm) of current 6 MV waveguides. This will allow higher x-ray energies to be used in our linac-MRI systems with the same footprint. Methods: Finite element software COMSOL Multiphysics was used to design an accelerator cavity matching one published in an experimental breakdown study, to ensure that our modeled cavities do not exceed the threshold electric fields published. This cavity was used as the basis for designing an accelerator waveguide, where each cavity of the full waveguide was tuned to resonate at 2.997 GHz by adjusting the cavity diameter. The RF field solution within the waveguide was calculated and, together with an electron-gun phase space generated using Opera3D/SCALA, was input into the electron tracking software PARMELA to compute the electron phase space striking the x-ray target. This target phase space was then used in BEAM Monte Carlo simulations to generate percent depth dose curves for this new linac, which were then used to re-optimize the waveguide geometry. Results: The shunt impedance, Q-factor, and peak-to-mean electric field ratio were matched to those published for the breakdown study to within 0.1% error. After tuning the full waveguide, the peak surface fields are calculated to be 207 MV/m, 13% below the breakdown threshold, with a d-max depth of 2.42 cm and a D10/20 value of 1.59, compared to 2.45 cm and 1.59, respectively, for a simulated Varian 10 MV linac, and a bremsstrahlung production efficiency 20% lower than that of the simulated Varian 10 MV linac. Conclusion: This work demonstrates the design of a functional 27.5 cm waveguide producing 10 MV photons with characteristics similar to a Varian 10 MV linac.

  3. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.

  4. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  5. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path for the optimization of low-energy many-body bases

    SciTech Connect

    Reboredo, Fernando A.; Kim, Jeongnim

    2014-02-21

    A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  6. Extra Chance Generalized Hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; Sanz-Serna, J. M.

    2015-01-01

    We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra chance approach pays off in terms of the quality of the samples generated.

  7. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Lazopoulos, Achilleas

    2006-07-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
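
    The standard estimator mentioned above is easy to state in code: the sample mean together with s/sqrt(n), valid only for independently drawn points. The Python sketch below is a generic illustration, not the stochastic discrepancy-based estimator proposed in the paper.

        import math
        import random

        def mc_mean_and_error(f, sampler, n=100_000):
            """Standard Monte Carlo estimate of E[f(X)] with the usual error estimate
            s / sqrt(n), which is only valid when the points are drawn independently --
            exactly the assumption that fails for quasi-Monte Carlo point sets."""
            total = total_sq = 0.0
            for _ in range(n):
                y = f(sampler())
                total += y
                total_sq += y * y
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            return mean, math.sqrt(var / n)

        mean, err = mc_mean_and_error(lambda u: math.exp(u), random.random)  # exact: e - 1
        print(mean, err)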

  8. Monte Carlo docking with ubiquitin.

    PubMed Central

    Cummings, M. D.; Hart, T. N.; Read, R. J.

    1995-01-01

    The development of general strategies for the performance of docking simulations is prerequisite to the exploitation of this powerful computational method. Comprehensive strategies can only be derived from docking experiences with a diverse array of biological systems, and we have chosen the ubiquitin/diubiquitin system as a learning tool for this process. Using our multiple-start Monte Carlo docking method, we have reconstructed the known structure of diubiquitin from its two halves as well as from two copies of the uncomplexed monomer. For both of these cases, our relatively simple potential function ranked the correct solution among the lowest energy configurations. In the experiments involving the ubiquitin monomer, various structural modifications were made to compensate for the lack of flexibility and for the lack of a covalent bond in the modeled interaction. Potentially flexible regions could be identified using available biochemical and structural information. A systematic conformational search ruled out the possibility that the required covalent bond could be formed in one family of low-energy configurations, which was distant from the observed dimer configuration. A variety of analyses was performed on the low-energy dockings obtained in the experiment involving structurally modified ubiquitin. Characterization of the size and chemical nature of the interface surfaces was a powerful adjunct to our potential function, enabling us to distinguish more accurately between correct and incorrect dockings. Calculations with the structure of tetraubiquitin indicated that the dimer configuration in this molecule is much less favorable than that observed in the diubiquitin structure, for a simple monomer-monomer pair. Based on the analysis of our results, we draw conclusions regarding some of the approximations involved in our simulations, the use of diverse chemical and biochemical information in experimental design and the analysis of docking results, as well as

  9. A Monte Carlo investigation of the Hamiltonian mean field model

    NASA Astrophysics Data System (ADS)

    Pluchino, Alessandro; Andronico, Giuseppe; Rapisarda, Andrea

    2005-04-01

    We present a Monte Carlo numerical investigation of the Hamiltonian mean field (HMF) model. We begin by discussing canonical Metropolis Monte Carlo calculations, in order to check the caloric curve of the HMF model and study finite size effects. In the second part of the paper, we present numerical simulations obtained by means of a modified Monte Carlo procedure with the aim of testing the stability of those states at minimum temperature and zero magnetization (homogeneous quasi-stationary states), which exist in the condensed phase of the model just below the critical point. For energy densities smaller than the limiting value U∼0.68, we find that these states are unstable, confirming a recent result on the Vlasov stability analysis applied to the HMF model.
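
    For reference, here is a bare-bones canonical Metropolis Monte Carlo sketch for the configurational part of the HMF model, using V = N/2 - (Mx^2 + My^2)/(2N). The particle number, temperature, proposal width, and sweep counts are illustrative choices, and the quasi-stationary-state analysis of the paper is not reproduced.

        import numpy as np

        rng = np.random.default_rng(6)
        N, beta, sweeps = 500, 4.0, 2000     # particles, inverse temperature, MC sweeps

        theta = rng.uniform(0.0, 2.0 * np.pi, N)
        Mx, My = np.cos(theta).sum(), np.sin(theta).sum()

        def potential(mx, my):
            # HMF potential energy: V = N/2 - (Mx^2 + My^2) / (2N)
            return N / 2.0 - (mx * mx + my * my) / (2.0 * N)

        mags = []
        for sweep in range(sweeps):
            for _ in range(N):
                i = rng.integers(N)
                new = theta[i] + rng.normal(0.0, 0.5)
                dMx = np.cos(new) - np.cos(theta[i])
                dMy = np.sin(new) - np.sin(theta[i])
                dV = potential(Mx + dMx, My + dMy) - potential(Mx, My)
                if np.log(rng.random()) < -beta * dV:    # Metropolis acceptance
                    theta[i] = new
                    Mx += dMx
                    My += dMy
            if sweep > sweeps // 2:
                mags.append(np.hypot(Mx, My) / N)

        # Adding the kinetic term T/2 per particle to <V>/N gives one point on the caloric curve.
        print("T =", 1.0 / beta, " <m> =", np.mean(mags))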

  10. Electronic structure quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bajdich, Michal; Mitas, Lubos

    2009-04-01

    Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of manybody quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on the electronic structure QMC, i.e., methods relevant for systems described by the electron-ion Hamiltonians. Some of the key QMC achievements include direct treatment of electron correlation, accuracy in predicting energy differences and favorable scaling in the system size. Calculations of atoms, molecules, clusters and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the highly-dimensional nodal hypersurfaces into the 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. Recently proposed trial wave functions based on Pfaffians with

  11. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  12. Self-learning Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang

    2017-01-01

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.

  13. Adiabatic optimization versus diffusion Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Jarret, Michael; Jordan, Stephen P.; Lackey, Brad

    2016-10-01

    Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.

  14. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  15. Monte Carlo inversion of seismic data

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

    The analytic solution to the linear inverse problem provides estimates of the uncertainty of the solution in terms of standard deviations of corrections to a particular solution, resolution of parameter adjustments, and information distribution among the observations. It is shown that Monte Carlo inversion, when properly executed, can provide all the same kinds of information for nonlinear problems. Proper execution requires a relatively uniform sampling of all possible models. The expense of performing Monte Carlo inversion generally requires strategies to improve the probability of finding passing models. Such strategies can lead to a very strong bias in the distribution of models examined unless great care is taken in their application.

  16. Parallel Markov chain Monte Carlo simulations.

    PubMed

    Ren, Ruichao; Orkoulas, G

    2007-06-07

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  17. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  18. Parallel Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ren, Ruichao; Orkoulas, G.

    2007-06-01

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  19. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  20. Monte Carlo Simulation of Surface Reactions

    NASA Astrophysics Data System (ADS)

    Brosilow, Benjamin J.

    A Monte-Carlo study of the catalytic reaction of CO and O_2 over transition metal surfaces is presented, using generalizations of a model proposed by Ziff, Gulari and Barshad (ZGB). A new "constant-coverage" algorithm is described and applied to the model in order to elucidate the behavior near the model's first-order transition, and to draw an analogy between this transition and first-order phase transitions in equilibrium systems. The behavior of the model is then compared to the behavior of CO oxidation systems over Pt single-crystal catalysts. This comparison leads to the introduction of a new variation of the model in which one of the reacting species requires a large ensemble of vacant surface sites in order to adsorb. Further, it is shown that precursor adsorption and an effective Eley-Rideal mechanism must also be included in the model in order to obtain detailed agreement with experiment. Finally, variations of the model on finite and two component lattices are studied as models for low temperature CO oxidation over Noble Metal/Reducible Oxide and alloy catalysts.

  1. Global Monte Carlo Simulation with High Order Polynomial Expansions

    SciTech Connect

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin

    2007-12-13

    The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source

  2. Nonequilibrium Candidate Monte Carlo Simulations with Configurational Freezing Schemes.

    PubMed

    Giovannelli, Edoardo; Gellini, Cristina; Pietraperzia, Giangaetano; Cardini, Gianni; Chelli, Riccardo

    2014-10-14

    Nonequilibrium Candidate Monte Carlo simulation [Nilmeier et al., Proc. Natl. Acad. Sci. U.S.A. 2011, 108, E1009-E1018] is a tool devised to design Monte Carlo moves with high acceptance probabilities that connect uncorrelated configurations. Such moves are generated through nonequilibrium driven dynamics, producing candidate configurations accepted with a Monte Carlo-like criterion that preserves the equilibrium distribution. The probability of accepting a candidate configuration as the next sample in the Markov chain basically depends on the work performed on the system during the nonequilibrium trajectory and increases as this work decreases. It is thus strategically relevant to find ways of producing nonequilibrium moves with low work, namely moves where dissipation is as low as possible. This is the goal of our methodology, in which we combine Nonequilibrium Candidate Monte Carlo with Configurational Freezing schemes developed by Nicolini et al. (J. Chem. Theory Comput. 2011, 7, 582-593). The idea is to limit the configurational sampling to particles of a well-established region of the simulation sample, namely the region where dissipation occurs, while leaving the other particles fixed. This allows the system to relax faster around the region perturbed by the finite-time switching move and hence reduces the dissipated work, ultimately enhancing the probability of accepting the generated move. Our combined approach significantly enhances configurational sampling, as shown by the case of a bistable dimer immersed in a dense fluid.

  3. Observations on variational and projector Monte Carlo methods.

    PubMed

    Umrigar, C J

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.

  4. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  5. Monte Carlo Simulation of Counting Experiments.

    ERIC Educational Resources Information Center

    Ogden, Philip M.

    A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…

  6. A comparison of Monte Carlo generators

    SciTech Connect

    Golan, Tomasz

    2015-05-15

    A comparison of GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π+ two-dimensional energy vs. cosine distribution.

  7. Monte Carlo studies of uranium calorimetry

    SciTech Connect

    Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.

    1985-01-01

    Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references.

  8. Structural Reliability and Monte Carlo Simulation.

    ERIC Educational Resources Information Center

    Laumakis, P. J.; Harlow, G.

    2002-01-01

    Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)

  9. Search and Rescue Monte Carlo Simulation.

    DTIC Science & Technology

    1985-03-01

    confidence interval) of the number of lives saved. A single page output and computer graphic present the information to the user in an easily understood...format. The confidence interval can be reduced by making additional runs of this Monte Carlo model. (Author)

  10. Monte Carlo methods in genetic analysis

    SciTech Connect

    Lin, Shili

    1996-12-31

    Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.

  11. Monte Carlo studies of ARA detector optimization

    NASA Astrophysics Data System (ADS)

    Stockham, Jessica

    2013-04-01

    The Askaryan Radio Array (ARA) is a neutrino detector deployed in the Antarctic ice sheet near the South Pole. The array is designed to detect ultra high energy neutrinos in the range of 0.1-10 EeV. Detector optimization is studied using Monte Carlo simulations.

  12. MontePython: Implementing Quantum Monte Carlo using Python

    NASA Astrophysics Data System (ADS)

    Nilsen, Jon Kristian

    2007-11-01

    We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system for which to apply QMC, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and we describe how to implement these methods in pure C++ and C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead using Python can be negligible. Program summary: Program title: MontePython Catalogue identifier: ADZP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 49 519 No. of bytes in distributed program, including test data, etc.: 114 484 Distribution format: tar.gz Programming language: C++, Python Computer: PC, IBM RS6000/320, HP, ALPHA Operating system: LINUX Has the code been vectorised or parallelized?: Yes, parallelized with MPI Number of processors used: 1-96 RAM: Depends on physical system to be simulated Classification: 7.6; 16.1 Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb Solution method: Quantum Monte Carlo Running time: 225 min with 20 particles (with 4800 walkers moved in 1750 time steps) on one AMD Opteron 2218 processor; Production run for, e.g., 200 particles takes around 24 hours on 32 such processors.

  13. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    the eventual hope is to apply this algorithm to the exploration of yet unidentified high-pressure, low-temperature phases of hydrogen, I employ this algorithm to determine whether or not quantum hard spheres can form a low-temperature bcc solid if exchange is not taken into account. In the final chapter of this thesis, I use Path Integral Monte Carlo once again to explore whether glassy para-hydrogen exhibits superfluidity. Physicists have long searched for ways to coax hydrogen into becoming a superfluid. I present evidence that, while glassy hydrogen does not crystallize at the temperatures at which hydrogen might become a superfluid, it nevertheless does not exhibit superfluidity. This is because the average binding energy per p-H2 molecule poses a severe barrier to exchange regardless of whether the system is crystalline. All in all, this work extends the reach of Quantum Monte Carlo methods to new systems and brings the power of existing methods to bear on new problems. Portions of this work have been published in Rubenstein, PRE (2010) and Rubenstein, PRA (2012) [167;169]. Other papers not discussed here published during my Ph.D. include Rubenstein, BPJ (2008) and Rubenstein, PRL (2012) [166;168]. The work in Chapters 6 and 7 is currently unpublished. [166] Brenda M. Rubenstein, Ivan Coluzza, and Mark A. Miller. Controlling the folding and substrate-binding of proteins using polymer brushes. Physical Review Letters, 108(20):208104, May 2012. [167] Brenda M. Rubenstein, J.E. Gubernatis, and J.D. Doll. Comparative monte carlo efficiency by monte carlo analysis. Physical Review E, 82(3):036701, September 2010. [168] Brenda M. Rubenstein and Laura J. Kaufman. The role of extracellular matrix in glioma invasion: A cellular potts model approach. Biophysical Journal, 95(12):5661-- 5680, December 2008. [169] Brenda M. Rubenstein, Shiwei Zhang, and David R. Reichman. Finite-temperature auxiliary-field quantum monte carlo for bose-fermi mixtures. Physical Review A, 86

  14. Monte Carlo implementation of polarized hadronization

    NASA Astrophysics Data System (ADS)

    Matevosyan, Hrayr H.; Kotzinian, Aram; Thomas, Anthony W.

    2017-01-01

    We study the polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading twist quark-to-quark transverse-momentum-dependent (TMD) splitting functions (SFs) for elementary q →q'+h transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank 2. Further, we demonstrate that all the current spectator-type model calculations of the leading twist quark-to-quark TMD SFs violate the positivity constraints, and we propose a quark model based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence of unphysical azimuthal modulations of the computed polarized FFs, and by precisely reproducing the earlier derived explicit results for rank-2 pions. Finally, we present the full results for pion unpolarized and Collins FFs, as well as the corresponding analyzing powers from high statistics MC simulations with a large number of produced hadrons for two different model input elementary SFs. The results for both sets of input functions exhibit the same general features of an opposite signed Collins function for favored and unfavored channels at large z and, at the same time, demonstrate the flexibility of the quark-jet framework by producing significantly different dependences of the results at mid to low z for the two model inputs.

  15. Quantum Monte Carlo with directed loops.

    PubMed

    Syljuåsen, Olav F; Sandvik, Anders W

    2002-10-01

    We introduce the concept of directed loops in stochastic series expansion and path-integral quantum Monte Carlo methods. Using the detailed balance rules for directed loops, we show that it is possible to smoothly connect generally applicable simulation schemes (in which it is necessary to include backtracking processes in the loop construction) to more restricted loop algorithms that can be constructed only for a limited range of Hamiltonians (where backtracking can be avoided). The "algorithmic discontinuities" between general and special points (or regions) in parameter space can hence be eliminated. As a specific example, we consider the anisotropic S=1/2 Heisenberg antiferromagnet in an external magnetic field. We show that directed-loop simulations are very efficient for the full range of magnetic fields (zero to the saturation point) and anisotropies. In particular, for weak fields and anisotropies, the autocorrelations are significantly reduced relative to those of previous approaches. The backtracking probability vanishes continuously as the isotropic Heisenberg point is approached. For the XY model, we show that backtracking can be avoided for all fields extending up to the saturation field. The method is hence particularly efficient in this case. We use directed-loop simulations to study the magnetization process in the two-dimensional Heisenberg model at very low temperatures. For L×L lattices with L up to 64, we utilize the step structure in the magnetization curve to extract gaps between different spin sectors. Finite-size scaling of the gaps gives an accurate estimate of the transverse susceptibility in the thermodynamic limit: χ⊥ = 0.0659 ± 0.0002.

  16. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  17. A tetrahedron-based inhomogeneous Monte Carlo optical simulator.

    PubMed

    Shen, H; Wang, G

    2010-02-21

    Optical imaging has been widely applied in preclinical and clinical applications. Fifteen years ago, an efficient Monte Carlo program 'MCML' was developed for use with multi-layered turbid media and has gained popularity in the field of biophotonics. Currently, there is an increasingly pressing need for simulating tools more powerful than MCML in order to study light propagation phenomena in complex inhomogeneous objects, such as the mouse. Here we report a tetrahedron-based inhomogeneous Monte Carlo optical simulator (TIM-OS) to address this issue. By modeling an object as a tetrahedron-based inhomogeneous finite-element mesh, TIM-OS can determine the photon-triangle interaction recursively and rapidly. In numerical simulation, we have demonstrated the correctness and efficiency of TIM-OS.

  18. Monte Carlo Particle Transport: Algorithm and Performance Overview

    SciTech Connect

    Gentile, N; Procassini, R; Scott, H

    2005-06-02

    Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations.

  19. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the value of validation by Kovats retention indices and the root mean square error of prediction decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.

  20. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.

  1. Monte Carlo simulations on SIMD computer architectures

    SciTech Connect

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-03-01

    Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.

  2. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

  3. Fission Matrix Capability for MCNP Monte Carlo

    NASA Astrophysics Data System (ADS)

    Brown, Forrest; Carney, Sean; Kiedrowski, Brian; Martin, William

    2014-06-01

    We describe recent experience and results from implementing a fission matrix capability into the MCNP Monte Carlo code. The fission matrix can be used to provide estimates of the fundamental mode fission distribution, the dominance ratio, the eigenvalue spectrum, and higher mode forward and adjoint eigenfunctions of the fission neutron source distribution. It can also be used to accelerate the convergence of the power method iterations and to provide basis functions for higher-order perturbation theory. The higher-mode fission sources can be used in MCNP to determine higher-mode forward fluxes and tallies, and work is underway to provide higher-mode adjoint-weighted fluxes and tallies. Past difficulties and limitations of the fission matrix approach are overcome with a new sparse representation of the matrix, permitting much larger and more accurate fission matrix representations. The new fission matrix capabilities provide a significant advance in the state-of-the-art for Monte Carlo criticality calculations.

  4. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.; Jarrell, M.

    1990-01-01

    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

  5. Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy

    NASA Astrophysics Data System (ADS)

    Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James

    2012-03-01

    Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers: immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis, as well as laterally asymmetric features (e.g., melanocytic invasion), were modeled in an inhomogeneous Monte Carlo model.

  6. Recovering intrinsic fluorescence by Monte Carlo modeling.

    PubMed

    Müller, Manfred; Hendriks, Benno H W

    2013-02-01

    We present a novel way to recover intrinsic fluorescence in turbid media based on Monte Carlo generated look-up tables and making use of a diffuse reflectance measurement taken at the same location. The method has been validated on various phantoms with known intrinsic fluorescence and is benchmarked against photon-migration methods. This new method combines more flexibility in the probe design with fast reconstruction and showed similar reconstruction accuracy as found in other reconstruction methods.

  7. Monte Carlo approach to Estrada index

    NASA Astrophysics Data System (ADS)

    Gutman, Ivan; Radenković, Slavko; Graovac, Ante; Plavšić, Dejan

    2007-09-01

    Let G be a graph on n vertices, and let λ1, λ2, …, λn be its eigenvalues. The Estrada index of G is a recently introduced molecular structure descriptor, defined as EE = ∑_{i=1}^{n} e^{λ_i}. Using a Monte Carlo approach, and treating the graph eigenvalues as random variables, we deduce approximate expressions for EE, in terms of the number of vertices and number of edges, of very high accuracy.

  8. Accelerated Monte Carlo by Embedded Cluster Dynamics

    NASA Astrophysics Data System (ADS)

    Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.

    1991-07-01

    We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and Wolff are summarized and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model and numerical simulations presented for d=2, d=3 and mean field theory lattices.

  9. A multiple step random walk Monte Carlo method for heat conduction involving distributed heat sources

    NASA Astrophysics Data System (ADS)

    Naraghi, M. H. N.; Chung, B. T. F.

    1982-06-01

    A multiple step fixed random walk Monte Carlo method for solving heat conduction in solids with distributed internal heat sources is developed. In this method, the probability that a walker reaches a point a few steps away is calculated analytically and is stored in the computer. Instead of moving to the immediate neighboring point the walker is allowed to jump several steps further. The present multiple step random walk technique can be applied to both conventional Monte Carlo and the Exodus methods. Numerical results indicate that the present method compares well with finite difference solutions while the computation speed is much faster than that of single step Exodus and conventional Monte Carlo methods.

  10. Four decades of implicit Monte Carlo

    SciTech Connect

    Wollaber, Allan B.

    2016-02-23

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  11. Four decades of implicit Monte Carlo

    DOE PAGES

    Wollaber, Allan B.

    2016-02-23

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  12. Disorder-induced genetic divergence: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Schulz, Michael; Pȩkalski, Andrzej

    2002-10-01

    We present a Monte Carlo simulation of a system composed of several populations, each living in a possibly different habitat. We show the influence of landscape disorder on the genetic pool of finite populations. We demonstrate that a strongly disordered environment generates an increase in the genetic distance between the populations on identical islands. The distance becomes permanent for infinitely long times. On the contrary, landscapes with weak disorder offer only a temporary allelic divergence which vanishes in the long time limit. Similarities between these phenomena and the well-known first-order phase transitions in thermodynamics are analyzed.

  13. Bond-updating mechanism in cluster Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Heringa, J. R.; Blöte, H. W. J.

    1994-03-01

    We study a cluster Monte Carlo method with an adjustable parameter: the number of energy levels of a demon mediating the exchange of bond energy with the heat bath. The efficiency of the algorithm in the case of the three-dimensional Ising model is studied as a function of the number of such levels. The optimum is found in the limit of an infinite number of levels, where the method reproduces the Wolff or the Swendsen-Wang algorithm. In this limit the size distribution of flipped clusters approximates a power law more closely than that for a finite number of energy levels.

  14. Monte Carlo simulation of intercalated carbon nanotubes.

    PubMed

    Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter

    2007-01-01

    Monte Carlo simulations of the single- and double-walled carbon nanotubes (CNT) intercalated with different metals have been carried out. The interrelation between the length of a CNT, the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has proved that there is a correlation between the length of a CNT and the number of endo-atoms of specific type. Thus, in the case of a metallic CNT (9,0) with length 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight. Thus, this paper shows that a CNT of a certain size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit ground-state stabilization which is not characteristic of the higher ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms.

  15. Quantum Monte Carlo for vibrating molecules

    SciTech Connect

    Brown, W.R.

    1996-08-01

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and employed for several calculations of the H2O and C3 vibrational states, using 7 PESs, 3 trial wavefunction forms, two methods of non-linear basis function parameter optimization, and on both serial and parallel computers. In order to construct accurate trial wavefunctions, different wavefunction forms were required for H2O and C3. In order to construct accurate trial wavefunctions for C3, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states. In order to stabilize the statistical error estimates for C3, the Monte Carlo data was collected into blocks. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C3 PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.

  16. A Monte Carlo approach to water management

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2012-04-01

    Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that may make the obtained results irrelevant to the real world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization. The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs

  17. Status of Monte-Carlo Event Generators

    SciTech Connect

    Hoeche, Stefan; /SLAC

    2011-08-11

    Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off the initial and final states and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.

  18. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.

  19. MBR Monte Carlo Simulation in PYTHIA8

    NASA Astrophysics Data System (ADS)

    Ciesielski, R.

    We present the MBR (Minimum Bias Rockefeller) Monte Carlo simulation of (anti)proton-proton interactions and its implementation in the PYTHIA8 event generator. We discuss the total, elastic, and total-inelastic cross sections, and three contributions from diffraction dissociation processes that contribute to the latter: single diffraction, double diffraction, and central diffraction or double-Pomeron exchange. The event generation follows a renormalized-Regge-theory model, successfully tested using CDF data. Based on the MBR-enhanced PYTHIA8 simulation, we present cross-section predictions for the LHC and beyond, up to collision energies of 50 TeV.

  20. Monte Carlo procedure for protein design

    NASA Astrophysics Data System (ADS)

    Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik

    1998-11-01

    A method for sequence optimization in protein models is presented. The approach, which has inherited its basic philosophy from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] by maximizing conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence space search is devised making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 32, 3986 (1989)] with chain lengths N=16, 18, and 32.

  1. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  2. Markov chain Monte Carlo without likelihoods.

    PubMed

    Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon

    2003-12-23

    Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.

  3. Discovering correlated fermions using quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Wagner, Lucas K.; Ceperley, David M.

    2016-09-01

    It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior.

  4. Exascale Monte Carlo R&D

    SciTech Connect

    Marcus, Ryan C.

    2012-07-24

    Overview of this presentation is (1) Exascale computing - different technologies, getting there; (2) high-performance proof-of-concept MCMini - features and results; and (3) OpenCL toolkit - Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware agnostic tool. MCMini demonstrates the possibility for GPGPU-based Monte Carlo methods - it shows great scaling for HPC application and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  5. Quantum Monte Carlo calculations for light nuclei

    SciTech Connect

    Wiringa, R.B.

    1998-08-01

    Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A ≤ 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 30 different (j′, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.

  6. Introduction to Cluster Monte Carlo Algorithms

    NASA Astrophysics Data System (ADS)

    Luijten, E.

    This chapter provides an introduction to cluster Monte Carlo algorithms for classical statistical-mechanical systems. A brief review of the conventional Metropolis algorithm is given, followed by a detailed discussion of the lattice cluster algorithm developed by Swendsen and Wang and the single-cluster variant introduced by Wolff. For continuum systems, the geometric cluster algorithm of Dress and Krauth is described. It is shown how their geometric approach can be generalized to incorporate particle interactions beyond hardcore repulsions, thus forging a connection between the lattice and continuum approaches. Several illustrative examples are discussed.

  7. Cluster hybrid Monte Carlo simulation algorithms.

    PubMed

    Plascak, J A; Ferrenberg, Alan M; Landau, D P

    2002-06-01

    We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.

  8. Cluster hybrid Monte Carlo simulation algorithms

    NASA Astrophysics Data System (ADS)

    Plascak, J. A.; Ferrenberg, Alan M.; Landau, D. P.

    2002-06-01

    We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.

  9. Monte Carlo simulation for the transport beamline

    NASA Astrophysics Data System (ADS)

    Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.

    2013-07-01

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have been already simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement a MC based 3D treatment planning in order to optimize shots number and dose delivery.

  10. State-of-the-art Monte Carlo 1988

    SciTech Connect

    Soran, P.D.

    1988-06-28

    Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.

  11. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    SciTech Connect

    Densmore, Jeffrey D; Kelly, Thompson G; Urbatish, Todd J

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  12. Monte Carlo modeling of spatial coherence: free-space diffraction

    PubMed Central

    Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.

    2008-01-01

    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335
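
    One building block of such a scheme is drawing field realizations whose ensemble correlation matches a prescribed coherence function. Below is a minimal sketch, assuming a Gaussian Schell-model coherence function on an illustrative 1D grid and using a Cholesky factor to correlate circular-Gaussian samples; this is a related standard technique, not the authors' copula construction.

      import numpy as np

      # Source points along a 1D aperture (illustrative grid and widths)
      x = np.linspace(-1e-3, 1e-3, 64)                 # metres
      sigma_I, sigma_mu = 0.5e-3, 0.2e-3               # assumed intensity and coherence widths

      # Gaussian Schell-model mutual coherence matrix J(x1, x2)
      X1, X2 = np.meshgrid(x, x, indexing="ij")
      J = (np.exp(-(X1**2 + X2**2) / (4 * sigma_I**2))
           * np.exp(-((X1 - X2)**2) / (2 * sigma_mu**2)))

      # Draw circular-Gaussian field realizations with covariance J via Cholesky
      L = np.linalg.cholesky(J + 1e-10 * np.eye(x.size))   # small jitter for numerical stability
      rng = np.random.default_rng(1)

      def sample_field():
          z = (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size)) / np.sqrt(2)
          return L @ z

      # Check: the ensemble-averaged correlation <E(x1) E*(x2)> should approach J
      fields = np.array([sample_field() for _ in range(5000)])
      J_est = fields.T @ fields.conj() / fields.shape[0]
      print("max |J_est - J|:", np.abs(J_est - J).max())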

  13. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
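
    A minimal sketch of the approach for the probing-depth question, with an assumed (hypothetical) lognormal burial-depth distribution standing in for the empirical statistics a real study would use:

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical burial-depth distribution (metres); a real analysis would use
      # empirical avalanche-burial statistics, not this assumed lognormal.
      def sample_burial_depths(n):
          return rng.lognormal(mean=0.0, sigma=0.5, size=n)

      def fraction_reached(probing_depth, n_trials=100_000):
          """Monte Carlo estimate of the fraction of buried victims a probe of
          the given depth would reach."""
          depths = sample_burial_depths(n_trials)
          return np.mean(depths <= probing_depth)

      # Find the shallowest probing depth (on a coarse grid) that reaches >= 95% of victims
      for d in np.arange(1.0, 4.01, 0.25):
          frac = fraction_reached(d)
          print(f"probe depth {d:4.2f} m -> fraction reached {frac:.3f}")
          if frac >= 0.95:
              break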

  14. Calculating Pi Using the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Williamson, Timothy

    2013-11-01

During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the center for sustainable energy at Notre Dame University (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10^21 antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia2 where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations2 or purely mathematical.3 It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
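
    The classroom activity has a direct computational analogue: sample points uniformly in a unit square and count the fraction landing inside the inscribed quarter circle. A minimal sketch:

      import random

      def estimate_pi(n_samples=1_000_000, seed=0):
          """Estimate pi by sampling points uniformly in the unit square and counting
          the fraction that fall inside the quarter circle of radius 1."""
          rng = random.Random(seed)
          inside = 0
          for _ in range(n_samples):
              x, y = rng.random(), rng.random()
              if x * x + y * y <= 1.0:
                  inside += 1
          return 4.0 * inside / n_samples

      print(estimate_pi())   # ~3.1415, with a statistical error of order 1/sqrt(n_samples)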

  15. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E.; Wiringa, Robert B.

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  16. Quantum Monte Carlo methods for nuclear physics

    DOE PAGES

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; ...

    2014-10-19

Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  17. Geometrical Monte Carlo simulation of atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Yuksel, Demet; Yuksel, Heba

    2013-09-01

Atmospheric turbulence has a significant impact on the quality of a laser beam propagating through the atmosphere over long distances. Turbulence causes intensity scintillation and beam wander from propagation through turbulent eddies of varying sizes and refractive index. This can severely impair the operation of target designation and Free-Space Optical (FSO) communications systems. In addition, experimenting on an FSO communication system is tedious and difficult: interference from the many contributing elements affects the results and gives the experimental outcomes larger error variance margins than expected. Especially in the stronger turbulence regimes, the simulation and analysis of turbulence-induced beams require careful attention. We propose a new geometrical model to assess the phase shift of a laser beam propagating through turbulence. The atmosphere along the laser beam propagation path will be modeled as a spatial distribution of spherical bubbles with refractive index discontinuities calculated from a Gaussian distribution with the mean value being the index of air. For each statistical representation of the atmosphere, the path of rays will be analyzed using geometrical optics. These Monte Carlo techniques will assess the phase shift as a summation of the phases that arrive at the same point at the receiver. Accordingly, there would be dark and bright spots at the receiver that give an idea regarding the intensity pattern without having to solve the wave equation. The Monte Carlo analysis will be compared with the predictions of wave theory.
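
    A toy sketch of the geometric idea, with illustrative parameters that are not the authors' model: a ray crosses randomly placed spherical bubbles whose refractive-index offsets are drawn from a Gaussian about the index of air, and the extra phase is summed over the crossed bubbles.

      import numpy as np

      rng = np.random.default_rng(7)

      wavelength = 1.55e-6                 # metres (illustrative FSO wavelength)
      k = 2 * np.pi / wavelength
      n_air, sigma_n = 1.0, 1e-6           # mean index and assumed index-fluctuation scale

      def ray_phase(path_length=1000.0, bubble_radius=0.05, bubbles_per_metre=2.0):
          """Accumulate the extra phase one ray picks up crossing random bubbles.

          Each crossed bubble contributes k * (n_bubble - n_air) * chord, with the
          chord length set by a random impact parameter through a sphere."""
          n_bubbles = rng.poisson(bubbles_per_metre * path_length)
          impact = bubble_radius * np.sqrt(rng.random(n_bubbles))   # random impact parameter
          chord = 2.0 * np.sqrt(bubble_radius**2 - impact**2)
          delta_n = rng.normal(0.0, sigma_n, n_bubbles)
          return k * np.sum(delta_n * chord)

      phases = np.array([ray_phase() for _ in range(2000)])
      print("rms phase fluctuation (rad):", phases.std())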

  18. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
    • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
    • Load Balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
    • Global Particle Find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain (a minimal sketch of this coordinate-to-domain mapping follows below).
    • Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed, and spatial redecomposition.
    These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
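
    A minimal sketch of the coordinate-to-domain mapping behind such a global particle find, assuming a uniform Cartesian decomposition of a unit box; the domain counts and box bounds are illustrative, and the actual MPI exchange of particles is omitted.

      import numpy as np

      # Assumed setup: the global box [0, 1)^3 is split into a Cartesian grid of
      # domains, one domain per rank.
      domains = (4, 4, 2)          # domains per axis -> 32 ranks
      box_lo, box_hi = 0.0, 1.0

      def owning_rank(position):
          """Return the rank that owns the domain containing `position`."""
          frac = (np.asarray(position) - box_lo) / (box_hi - box_lo)
          idx = np.minimum((frac * domains).astype(int), np.array(domains) - 1)
          return int(idx[0] * domains[1] * domains[2] + idx[1] * domains[2] + idx[2])

      # "Global particle find": bin every stray particle by destination rank so
      # each rank knows where to send it.
      rng = np.random.default_rng(3)
      particles = rng.random((10, 3))
      for p in particles:
          print(p.round(3), "-> rank", owning_rank(p))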

  19. Discrete range clustering using Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Chatterji, G. B.; Sridhar, B.

    1993-01-01

    For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.

  20. Quantum Monte Carlo methods for nuclear physics

    DOE PAGES

    Carlson, J.; Gandolfi, S.; Pederiva, F.; ...

    2015-09-09

Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  1. CosmoMC: Cosmological MonteCarlo

    NASA Astrophysics Data System (ADS)

    Lewis, Antony; Bridle, Sarah

    2011-06-01

    We present a fast Markov Chain Monte-Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3eV), equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
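
    At its core, the approach is a random-walk Metropolis sampler over the parameter space. A minimal sketch with a toy two-parameter Gaussian likelihood standing in for a real cosmological likelihood; the parameter names, means, and widths are purely illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def log_posterior(theta):
          """Toy 2-parameter log-posterior (stand-in for a real cosmological likelihood)."""
          mean = np.array([0.3, 0.8])          # illustrative (Omega_m, sigma_8)-like values
          sigma = np.array([0.02, 0.05])
          return -0.5 * np.sum(((theta - mean) / sigma) ** 2)

      def metropolis(n_steps=50_000, step=np.array([0.01, 0.02])):
          theta = np.array([0.25, 0.7])
          logp = log_posterior(theta)
          chain = np.empty((n_steps, theta.size))
          for i in range(n_steps):
              proposal = theta + step * rng.standard_normal(theta.size)
              logp_new = log_posterior(proposal)
              if np.log(rng.random()) < logp_new - logp:     # Metropolis acceptance
                  theta, logp = proposal, logp_new
              chain[i] = theta
          return chain

      chain = metropolis()
      burned = chain[10_000:]                                 # discard burn-in
      print("posterior means:", burned.mean(axis=0))
      print("posterior std devs:", burned.std(axis=0))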

  2. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  3. Quantum Monte Carlo methods for nuclear physics

    NASA Astrophysics Data System (ADS)

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-07-01

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  4. Quantum Monte Carlo for atoms and molecules

    SciTech Connect

    Barnett, R.N.

    1989-11-01

The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy-eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

  5. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  6. Chemical application of diffusion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1983-10-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.

  7. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    Lubos Mitas

    2011-01-26

The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high accuracy quantum Monte Carlo demonstration projects in application of these tools to the forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has at present become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculations of pfaffians and introduction of backflow coordinates together with overall organization of the code and random walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the period of the grant duration; it has resulted in 13

  8. Residual entropy of ices and clathrates from Monte Carlo simulation

    SciTech Connect

    Kolafa, Jiří

    2014-05-28

    We calculated the residual entropy of ices (Ih, Ic, III, V, VI) and clathrates (I, II, H), assuming the same energy of all configurations satisfying the Bernal–Fowler ice rules. The Metropolis Monte Carlo simulations in the range of temperatures from infinity to a size-dependent threshold were followed by the thermodynamic integration. Convergence of the simulation and the finite-size effects were analyzed using the quasichemical approximation and the Debye–Hückel theory applied to the Bjerrum defects. The leading finite-size error terms, ln N/N, 1/N, and for the two-dimensional square ice model also 1/N{sup 3/2}, were used for an extrapolation to the thermodynamic limit. Finally, we discuss the influence of unequal energies of proton configurations.

  9. Monte Carlo studies of model Langmuir monolayers.

    PubMed

    Opps, S B; Yang, B; Gray, C G; Sullivan, D E

    2001-04-01

    This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters sigma(hh) and sigma(tt), respectively. The tails consist of n(t) approximately 4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with sigma(hh)=sigma(tt), we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'(2)/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in T(c) with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in T(c) due to head-tail diameter mismatch were diminished. In most of the present research, the water was treated as a hard

  10. Monte Carlo studies of model Langmuir monolayers

    NASA Astrophysics Data System (ADS)

    Opps, S. B.; Yang, B.; Gray, C. G.; Sullivan, D. E.

    2001-04-01

    This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters σhh and σtt, respectively. The tails consist of nt~4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with σhh=σtt, we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'2/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in Tc with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in Tc due to head-tail diameter mismatch were diminished. In most of the present research, the water was treated as a hard surface, whereby the surfactants are only

  11. Monte Carlo-Minimization and Monte Carlo Recursion Approaches to Structure and Free Energy.

    NASA Astrophysics Data System (ADS)

    Li, Zhenqin

    1990-08-01

    Biological systems are intrinsically "complex", involving many degrees of freedom, heterogeneity, and strong interactions among components. For the simplest of biological substances, e.g., biomolecules, which obey the laws of thermodynamics, we may attempt a statistical mechanical investigational approach. Even for these simplest many -body systems, assuming microscopic interactions are completely known, current computational methods in characterizing the overall structure and free energy face the fundamental challenge of an exponential amount of computation, with the rise in the number of degrees of freedom. As an attempt to surmount such problems, two computational procedures, the Monte Carlo-minimization and Monte Carlo recursion methods, have been developed as general approaches to the determination of structure and free energy of a complex thermodynamic system. We describe, in Chapter 2, the Monte Carlo-minimization method, which attempts to simulate natural protein folding processes and to overcome the multiple-minima problem. The Monte Carlo-minimization procedure has been applied to a pentapeptide, Met-enkephalin, leading consistently to the lowest energy structure, which is most likely to be the global minimum structure for Met-enkephalin in the absence of water, given the ECEPP energy parameters. In Chapter 3 of this thesis, we develop a Monte Carlo recursion method to compute the free energy of a given physical system with known interactions, which has been applied to a 32-particle Lennard-Jones fluid. In Chapter 4, we describe an efficient implementation of the recursion procedure, for the computation of the free energy of liquid water, with both MCY and TIP4P potential parameters for water. As a further demonstration of the power of the recursion method for calculating free energy, a general formalism of cluster formation from monatomic vapor is developed in Chapter 5. The Gibbs free energy of constrained clusters can be computed efficiently using the

  12. Monte Carlo Simulation of Endlinking Oligomers

    NASA Technical Reports Server (NTRS)

    Hinkley, Jeffrey A.; Young, Jennifer A.

    1998-01-01

    This report describes initial efforts to model the endlinking reaction of phenylethynyl-terminated oligomers. Several different molecular weights were simulated using the Bond Fluctuation Monte Carlo technique on a 20 x 20 x 20 unit lattice with periodic boundary conditions. After a monodisperse "melt" was equilibrated, chain ends were linked whenever they came within the allowed bond distance. Ends remained reactive throughout, so that multiple links were permitted. Even under these very liberal crosslinking assumptions, geometrical factors limited the degree of crosslinking. Average crosslink functionalities were 2.3 to 2.6; surprisingly, they did not depend strongly on the chain length. These results agreed well with the degrees of crosslinking inferred from experiment in a cured phenylethynyl-terminated polyimide oligomer.

  13. Exploring theory space with Monte Carlo reweighting

    DOE PAGES

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...

    2014-10-13

Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  14. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. The computational time dependence obtained versus the number of basis functions is discussed and this is compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures.

  15. Monte Carlo calculation for microplanar beam radiography.

    PubMed

    Company, F Z; Allen, B J; Mino, C

    2000-09-01

    In radiography the scattered radiation from the off-target region decreases the contrast of the target image. We propose that a bundle of collimated, closely spaced, microplanar beams can reduce the scattered radiation and eliminate the effect of secondary electron dose, thus increasing the image dose contrast in the detector. The lateral and depth dose distributions of 20-200 keV microplanar beams are investigated using the EGS4 Monte Carlo code to calculate the depth doses and dose profiles in a 6 cm x 6 cm x 6 cm tissue phantom. The maximum dose on the primary beam axis (peak) and the minimum inter-beam scattered dose (valley) are compared at different photon energies and the optimum energy range for microbeam radiography is found. Results show that a bundle of closely spaced microplanar beams can give superior contrast imaging to a single macrobeam of the same overall area.

  16. Lunar Regolith Albedos Using Monte Carlos

    NASA Technical Reports Server (NTRS)

    Wilson, T. L.; Andersen, V.; Pinsky, L. S.

    2003-01-01

    The analysis of planetary regoliths for their backscatter albedos produced by cosmic rays (CRs) is important for space exploration and its potential contributions to science investigations in fundamental physics and astrophysics. Albedos affect all such experiments and the personnel that operate them. Groups have analyzed the production rates of various particles and elemental species by planetary surfaces when bombarded with Galactic CR fluxes, both theoretically and by means of various transport codes, some of which have emphasized neutrons. Here we report on the preliminary results of our current Monte Carlo investigation into the production of charged particles, neutrons, and neutrinos by the lunar surface using FLUKA. In contrast to previous work, the effects of charm are now included.

  17. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is then applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.

  18. Noncovalent Interactions by Quantum Monte Carlo.

    PubMed

    Dubecký, Matúš; Mitas, Lubos; Jurečka, Petr

    2016-05-11

Quantum Monte Carlo (QMC) is a family of stochastic methods for solving quantum many-body problems such as the stationary Schrödinger equation. The review introduces basic notions of electronic structure QMC based on random walks in real space as well as its advances and adaptations to systems with noncovalent interactions. Specific issues such as fixed-node error cancellation, construction of trial wave functions, and efficiency considerations that allow for benchmark quality QMC energy differences are described in detail. A comprehensive overview of articles covers QMC applications to systems with noncovalent interactions over the last three decades. The current status of QMC with regard to efficiency, applicability, and usability by nonexperts together with further considerations about QMC developments, limitations, and unsolved challenges are discussed as well.

  19. Hybrid algorithms in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kim, Jeongnim; Esler, Kenneth P.; McMinis, Jeremy; Morales, Miguel A.; Clark, Bryan K.; Shulenburger, Luke; Ceperley, David M.

    2012-12-01

With advances in algorithms and growing computing power, quantum Monte Carlo (QMC) methods have become a leading contender for high accuracy calculations for the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of an SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element has not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on current HPC platforms characterized by various memory and communication hierarchies.

  20. Monte Carlo applications to acoustical field solutions

    NASA Technical Reports Server (NTRS)

    Haviland, J. K.; Thanedar, B. D.

    1973-01-01

    The Monte Carlo technique is proposed for the determination of the acoustical pressure-time history at chosen points in a partial enclosure, the central idea of this technique being the tracing of acoustical rays. A statistical model is formulated and an algorithm for pressure is developed, the conformity of which is examined by two approaches and is shown to give the known results. The concepts that are developed are applied to the determination of the transient field due to a sound source in a homogeneous medium in a rectangular enclosure with perfect reflecting walls, and the results are compared with those presented by Mintzer based on the Laplace transform approach, as well as with a normal mode solution.

  1. Monte Carlo modeling and meteor showers

    NASA Technical Reports Server (NTRS)

    Kulikova, N. V.

    1987-01-01

    Prediction of short lived increases in the cosmic dust influx, the concentration in lower thermosphere of atoms and ions of meteor origin and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.

  2. Green's function Monte Carlo in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    We review the status of Green's Function Monte Carlo (GFMC) methods as applied to problems in nuclear physics. New methods have been developed to handle the spin and isospin degrees of freedom that are a vital part of any realistic nuclear physics problem, whether at the level of quarks or nucleons. We discuss these methods and then summarize results obtained recently for light nuclei, including ground state energies, three-body forces, charge form factors and the coulomb sum. As an illustration of the applicability of GFMC to quark models, we also consider the possible existence of bound exotic multi-quark states within the framework of flux-tube quark models. 44 refs., 8 figs., 1 tab.

  3. Resist develop prediction by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sohn, Dong-Soo; Jeon, Kyoung-Ah; Sohn, Young-Soo; Oh, Hye-Keun

    2002-07-01

Various resist develop models have been suggested to express the phenomena, from Dill's pioneering model in 1975 to Shipley's recent enhanced notch model. The statistical Monte Carlo method can be applied to processes such as development and post exposure bake. The motions of the developer during the development process were traced using this method. We have considered that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with those of other papers. This study can be helpful for developing new photoresists and developers that can be used to pattern device features smaller than 100 nm.

  4. Parallel tempering Monte Carlo in LAMMPS.

    SciTech Connect

    Rintoul, Mark Daniel; Plimpton, Steven James; Sears, Mark P.

    2003-11-01

We present here the details of the implementation of the parallel tempering Monte Carlo technique into LAMMPS, a heavily used massively parallel molecular dynamics code at Sandia. This technique allows for many replicas of a system to be run at different simulation temperatures. At various points in the simulation, configurations can be swapped between different temperature environments and then continued. This allows for large regions of energy space to be sampled very quickly, and allows for minimum energy configurations to emerge in very complex systems, such as large biomolecular systems. By including this algorithm into an existing code, we immediately gain all of the previous work that had been put into LAMMPS, and allow this technique to quickly be available to the entire Sandia and international LAMMPS community. Finally, we present an example of this code applied to folding a small protein.
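
    The key ingredient is the replica-swap acceptance rule, min(1, exp[(1/T_i - 1/T_j)(E_i - E_j)]) in units where k_B = 1, which preserves the Boltzmann distribution at every temperature. A standalone toy sketch with a double-well potential follows; this is not LAMMPS code, and the temperatures and step sizes are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)

      def energy(x):
          return (x**2 - 1.0)**2          # toy double-well potential with minima at x = +/-1

      temperatures = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
      betas = 1.0 / temperatures
      replicas = rng.uniform(-1.5, 1.5, size=temperatures.size)

      def metropolis_step(x, beta, step=0.2):
          x_new = x + step * rng.standard_normal()
          if np.log(rng.random()) < -beta * (energy(x_new) - energy(x)):
              return x_new
          return x

      for sweep in range(20_000):
          # Ordinary Metropolis moves within each temperature
          for i, beta in enumerate(betas):
              replicas[i] = metropolis_step(replicas[i], beta)
          # Attempt a swap between a random pair of neighbouring temperatures
          i = rng.integers(temperatures.size - 1)
          delta = (betas[i] - betas[i + 1]) * (energy(replicas[i]) - energy(replicas[i + 1]))
          if np.log(rng.random()) < delta:
              replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]

      print("coldest replica position (expected near +/-1):", replicas[0])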

  5. Monte Carlo simulations of medical imaging modalities

    SciTech Connect

    Estes, G.P.

    1998-09-01

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  6. Markov Chain Monte Carlo from Lagrangian Dynamics

    PubMed Central

    Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark

    2014-01-01

Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper. PMID:26240515
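
    For context, a minimal sketch of standard HMC with a leapfrog integrator on a correlated Gaussian target; this is the baseline algorithm the paper modifies, not the Lagrangian-dynamics method itself, and the step size and trajectory length are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      # Target: zero-mean Gaussian with a correlated covariance
      cov = np.array([[1.0, 0.8], [0.8, 1.0]])
      prec = np.linalg.inv(cov)

      def neg_log_density(q):
          return 0.5 * q @ prec @ q

      def grad(q):
          return prec @ q

      def hmc_step(q, step_size=0.15, n_leapfrog=20):
          p = rng.standard_normal(q.size)                 # resample momentum
          current_h = neg_log_density(q) + 0.5 * p @ p
          q_new, p_new = q.copy(), p.copy()
          # Leapfrog integration of the Hamiltonian dynamics
          p_new -= 0.5 * step_size * grad(q_new)
          for _ in range(n_leapfrog - 1):
              q_new += step_size * p_new
              p_new -= step_size * grad(q_new)
          q_new += step_size * p_new
          p_new -= 0.5 * step_size * grad(q_new)
          proposed_h = neg_log_density(q_new) + 0.5 * p_new @ p_new
          # Metropolis correction for the integration error
          if np.log(rng.random()) < current_h - proposed_h:
              return q_new
          return q

      q = np.zeros(2)
      samples = np.empty((20_000, 2))
      for i in range(samples.shape[0]):
          q = hmc_step(q)
          samples[i] = q
      print("sample covariance:\n", np.cov(samples[5000:].T))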

  7. Exploring theory space with Monte Carlo reweighting

    SciTech Connect

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  8. The Monte Carlo Method. Popular Lectures in Mathematics.

    ERIC Educational Resources Information Center

    Sobol', I. M.

    The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…

  9. Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.

    2002-01-01

    Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…

  10. A Primer in Monte Carlo Integration Using Mathcad

    ERIC Educational Resources Information Center

    Hoyer, Chad E.; Kegerreis, Jeb S.

    2013-01-01

    The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
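
    The core recipe such a primer outlines: estimate an integral over [a, b] as (b - a) times the mean of the integrand at uniformly sampled points, with a statistical error that shrinks like 1/sqrt(N). A minimal sketch in Python rather than Mathcad, with an illustrative integrand:

      import numpy as np

      rng = np.random.default_rng(0)

      def mc_integrate(f, a, b, n_samples=100_000):
          """Estimate the integral of f on [a, b] as (b - a) * mean of f at uniform
          samples, together with the one-sigma statistical error of the estimate."""
          x = rng.uniform(a, b, n_samples)
          fx = f(x)
          estimate = (b - a) * fx.mean()
          error = (b - a) * fx.std(ddof=1) / np.sqrt(n_samples)
          return estimate, error

      # Example: integral of exp(-x^2) on [0, 2] (exact value ~ 0.882081)
      est, err = mc_integrate(lambda x: np.exp(-x**2), 0.0, 2.0)
      print(f"estimate = {est:.5f} +/- {err:.5f}")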

  11. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency

  12. Monte Carlo scatter correction for SPECT

    NASA Astrophysics Data System (ADS)

    Liu, Zemei

    The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors), was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPU's to accelerate the scatter estimation process. An analytical collimator that ensures less noise was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process are addressed. Experimental measurements and simulation studies were designed for scintillation crystal based SPECT and CZT based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements and digital versions of the same phantoms employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is proven to be a viable method for scatter estimation for routine clinical use.

  13. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m) Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational

  14. Parton shower Monte Carlo event generators

    NASA Astrophysics Data System (ADS)

    Webber, Bryan

    2011-12-01

    A parton shower Monte Carlo event generator is a computer program designed to simulate the final states of high-energy collisions in full detail down to the level of individual stable particles. The aim is to generate a large number of simulated collision events, each consisting of a list of final-state particles and their momenta, such that the probability to produce an event with a given list is proportional (approximately) to the probability that the corresponding actual event is produced in the real world. The Monte Carlo method makes use of pseudorandom numbers to simulate the event-to-event fluctuations intrinsic to quantum processes. The simulation normally begins with a hard subprocess, shown as a black blob in Figure 1, in which constituents of the colliding particles interact at a high momentum scale to produce a few outgoing fundamental objects: Standard Model quarks, leptons and/or gauge or Higgs bosons, or hypothetical particles of some new theory. The partons (quarks and gluons) involved, as well as any new particles with colour, radiate virtual gluons, which can themselves emit further gluons or produce quark-antiquark pairs, leading to the formation of parton showers (brown). During parton showering the interaction scale falls and the strong interaction coupling rises, eventually triggering the process of hadronization (yellow), in which the partons are bound into colourless hadrons. On the same scale, the initial-state partons in hadronic collisions are confined in the incoming hadrons. In hadron-hadron collisions, the other constituent partons of the incoming hadrons undergo multiple interactions which produce the underlying event (green). Many of the produced hadrons are unstable, so the final stage of event generation is the simulation of the hadron decays.
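
    A deliberately simplified toy shower can illustrate how successive emission scales are generated by inverting a Sudakov (no-emission) factor. In the sketch below the emission probability per unit log(scale) is a constant c, an assumption made only so the Sudakov factor has a closed form; real showers use QCD splitting functions, kinematics, and a running coupling.

      import numpy as np

      rng = np.random.default_rng(0)

      def toy_shower(t_start=100.0, t_cut=1.0, c=0.3):
          """Generate ordered emission scales for one toy shower.

          Toy model: constant emission probability c per unit log(scale), so the
          Sudakov (no-emission) factor between scales is (t / t_start)**c and the
          next emission scale follows from solving Sudakov = r for uniform r."""
          scales = []
          t = t_start
          while True:
              t = t * rng.random() ** (1.0 / c)   # invert the Sudakov factor
              if t < t_cut:
                  break                           # evolution stops at the cutoff scale
              scales.append(t)
          return scales

      n_emissions = [len(toy_shower()) for _ in range(10_000)]
      print("mean number of emissions:", np.mean(n_emissions))
      # Expected mean in this toy model: c * log(t_start / t_cut) = 0.3 * log(100) ~ 1.38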

  15. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
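
    Once a spatially discretized fission matrix has been tallied, the fundamental mode and k_eff follow from ordinary power iteration on that matrix. A minimal sketch with a small made-up 4-region matrix; in practice the matrix would be accumulated from the random walks, not written by hand.

      import numpy as np

      # Hypothetical 4-region fission matrix F[i, j]: expected fission neutrons produced
      # in region i per fission neutron born in region j.
      F = np.array([[0.60, 0.20, 0.02, 0.00],
                    [0.20, 0.55, 0.18, 0.02],
                    [0.02, 0.18, 0.55, 0.20],
                    [0.00, 0.02, 0.20, 0.60]])

      def power_iteration(F, tol=1e-10, max_iter=10_000):
          """Return (k_eff, fundamental fission source) for the fission matrix F."""
          s = np.ones(F.shape[0]) / F.shape[0]
          k = 0.0
          for _ in range(max_iter):
              s_new = F @ s
              k_new = s_new.sum() / s.sum()      # eigenvalue estimate
              s_new = s_new / s_new.sum()        # renormalize the source
              if abs(k_new - k) < tol:
                  return k_new, s_new
              s, k = s_new, k_new
          return k, s

      k_eff, source = power_iteration(F)
      print("k_eff:", k_eff)
      print("fundamental fission source:", source)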

  16. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  17. Reconstruction of Monte Carlo replicas from Hessian parton distributions

    NASA Astrophysics Data System (ADS)

    Hou, Tie-Jiun; Gao, Jun; Huston, Joey; Nadolsky, Pavel; Schmidt, Carl; Stump, Daniel; Wang, Bo-Ting; Xie, Ke Ping; Dulat, Sayipjamal; Pumplin, Jon; Yuan, C. P.

    2017-03-01

    We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
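
    In the simplest symmetric, Gaussian limit, the Hessian-to-replica conversion reduces to adding the symmetrized eigenvector error sets to the central PDF with normally distributed coefficients, f_k = f_0 + sum_i r_i (f_i^+ - f_i^-)/2 with r_i ~ N(0,1); the refinements described here (asymmetric errors, positivity, log-normal sampling) build on this. The Python sketch below shows only that baseline recipe with placeholder arrays standing in for PDF values; it is not the numerical program released with the paper.

      import numpy as np

      def hessian_to_replicas(f0, f_plus, f_minus, n_rep=1000, seed=0):
          """Gaussian Monte Carlo replicas from Hessian plus/minus eigenvector PDF sets.

          f0              : central PDF values, shape (n_points,)
          f_plus, f_minus : PDF values for +/- displacements along each Hessian
                            eigenvector direction, shape (n_eig, n_points)
          Returns an array of replicas with shape (n_rep, n_points).
          """
          rng = np.random.default_rng(seed)
          sym_err = 0.5 * (f_plus - f_minus)              # symmetrized error per direction
          r = rng.standard_normal((n_rep, sym_err.shape[0]))
          return f0 + r @ sym_err

      # Toy numbers standing in for PDF values on a small x-grid
      f0 = np.array([1.0, 0.8, 0.5])
      f_plus = np.array([[1.05, 0.83, 0.52], [1.01, 0.82, 0.49]])
      f_minus = np.array([[0.96, 0.78, 0.48], [0.99, 0.79, 0.51]])
      replicas = hessian_to_replicas(f0, f_plus, f_minus)
      print(replicas.mean(axis=0), replicas.std(axis=0))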

  18. A pure-sampling quantum Monte Carlo algorithm.

    PubMed

    Ospadov, Egor; Rothstein, Stuart M

    2015-01-14

    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof in principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  19. Monte Carlo simulation of classical spin models with chaotic billiards.

    PubMed

    Suzuki, Hideyuki

    2013-11-01

    It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.

  20. Infinite variance in fermion quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  1. Infinite variance in fermion quantum Monte Carlo calculations.

    PubMed

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  2. A pure-sampling quantum Monte Carlo algorithm

    SciTech Connect

    Ospadov, Egor; Rothstein, Stuart M.

    2015-01-14

    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof in principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  3. Monte Carlo simulation of classical spin models with chaotic billiards

    NASA Astrophysics Data System (ADS)

    Suzuki, Hideyuki

    2013-11-01

    It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.

  4. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    SciTech Connect

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  5. Particle acceleration at shocks - A Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Kirk, J. G.; Schneider, P.

    1987-01-01

    A Monte Carlo method is presented for the problem of acceleration of test particles at relativistic shocks. The particles are assumed to diffuse in pitch angle as a result of scattering off magnetic irregularities frozen into the fluid. Several tests are performed using the analytic results available for both relativistic and nonrelativistic shock speeds. The acceleration at relativistic shocks under the influence of radiation losses is investigated, including the effects of a momentum dependence in the diffusion coefficient. The results demonstrate the usefulness of the technique in those situations in which the diffusion approximation cannot be employed, such as when relativistic bulk motion is considered, when particles are permitted to escape at the boundaries, and when the effects of the finite length of the particle mean free path are important.

  6. Bold diagrammatic Monte Carlo method applied to fermionized frustrated spins.

    PubMed

    Kulagin, S A; Prokof'ev, N; Starykh, O A; Svistunov, B; Varney, C N

    2013-02-15

    We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing--cancellation of higher order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of the magnetic order in the ground state. We critically examine the implications of this unusual scenario.

  7. Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti

    2016-10-01

    A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using Markov chain Monte Carlo (MCMC) sampling methods. HMMs are useful, but they have some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.
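
    As a minimal illustration of the Bayesian MCMC idea (not the MDPHMM of this study), the Python sketch below runs a random-walk Metropolis sampler over one transition parameter of a two-state Gaussian HMM, with the likelihood evaluated by the scaled forward algorithm; the model, data, and prior are invented for the example.

      import numpy as np

      def log_lik(obs, p_stay, means, sd=1.0):
          """Log-likelihood of a two-state Gaussian HMM via the scaled forward algorithm.
          p_stay is the probability of remaining in the current hidden state."""
          A = np.array([[p_stay, 1.0 - p_stay], [1.0 - p_stay, p_stay]])
          alpha = np.array([0.5, 0.5])                 # uniform initial state distribution
          ll = 0.0
          for y in obs:
              e = np.exp(-0.5 * ((y - means) / sd) ** 2)   # emissions, up to a constant that cancels
              alpha = e * (alpha @ A)
              c = alpha.sum()
              ll += np.log(c)
              alpha /= c
          return ll

      rng = np.random.default_rng(0)
      means = np.array([-2.0, 2.0])
      # Simulate 500 observations from an HMM with p_stay = 0.9
      states = [0]
      for _ in range(499):
          states.append(states[-1] if rng.random() < 0.9 else 1 - states[-1])
      obs = means[states] + rng.standard_normal(500)

      # Random-walk Metropolis on p_stay with a flat prior on (0, 1)
      p = 0.5
      ll = log_lik(obs, p, means)
      samples = []
      for _ in range(2000):
          q = p + rng.normal(0.0, 0.05)
          if 0.0 < q < 1.0:                            # proposals outside the prior support are rejected
              ll_q = log_lik(obs, q, means)
              if np.log(1.0 - rng.random()) < ll_q - ll:
                  p, ll = q, ll_q
          samples.append(p)
      print(np.mean(samples[500:]))                    # posterior mean, close to 0.9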

  8. Monte Carlo simulations of kagome lattices with magnetic dipolar interactions

    NASA Astrophysics Data System (ADS)

    Plumer, Martin; Holden, Mark; Way, Andrew; Saika-Voivod, Ivan; Southern, Byron

    Monte Carlo simulations of classical spins on the two-dimensional kagome lattice with only dipolar interactions are presented. In addition to revealing the sixfold-degenerate ground state, the nature of the finite-temperature phase transition to long-range magnetic order is discussed. Low-temperature states consisting of mixtures of degenerate ground-state configurations separated by domain walls can be explained as a result of competing exchange-like and shape-anisotropy-like terms in the dipolar coupling. Fluctuations between pairs of degenerate spin configurations are found to persist well into the ordered state as the temperature is lowered until locking in to a low-energy state. Results suggest that the system undergoes a continuous phase transition at T ~ 0.43 in agreement with previous MC simulations but the nature of the ordering process differs. Preliminary results which extend this analysis to the 3D fcc ABC-stacked kagome systems will be presented.

  9. A study of the XY model by the Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Suranyi, Peter; Harten, Paul

    1987-01-01

    The massively parallel processor is used to perform Monte Carlo simulations for the two dimensional XY model on lattices of sizes up to 128 x 128. A parallel random number generator was constructed, finite size effects were studied, and run times were compared with those on a CRAY X-MP supercomputer.

  10. Monte Carlo Study of One-Dimensional Ising Models with Long-Range Interactions

    NASA Astrophysics Data System (ADS)

    Tomita, Yusuke

    2009-01-01

    Recently, Fukui and Todo have proposed a new effective Monte Carlo algorithm for long-range interacting systems. Using this algorithm together with the nonequilibrium relaxation method, we investigated long-range interacting one-dimensional Ising models, both ferromagnetic and antiferromagnetic, with a nearest-neighbor ferromagnetic interaction. For the antiferromagnetic model, we found that the systems are paramagnetic at finite temperatures.

  11. Complete Monte Carlo Simulation of Neutron Scattering Experiments

    NASA Astrophysics Data System (ADS)

    Drosg, M.

    2011-12-01

    In the far past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited calculation power of the ancient computers as well as the lack of powerful Monte Carlo codes and the limitation in the data base available then prevented a complete simulation of the actual experiment. Using e.g. the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can be simulated almost completely with a high degree of precision using a modern PC, which has a computing power that is ten thousand times that of a super computer of the early 1970s. Thus, (better) corrections can also be obtained easily for previous published data provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g. atomic mass, relativistic correction, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments from liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that the cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of 3He(n,n)3He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by a factor of five at least, making this work relevant. Even more important are the corrections to the Karlsruhe data due to the

  12. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules, the analysis of which demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  13. Monte Carlo simulation of chromatin stretching

    NASA Astrophysics Data System (ADS)

    Aumann, Frank; Lankas, Filip; Caudron, Maïwen; Langowski, Jörg

    2006-04-01

    We present Monte Carlo (MC) simulations of the stretching of a single 30 nm chromatin fiber. The model approximates the DNA by a flexible polymer chain with Debye-Hückel electrostatics and uses a two-angle zigzag model for the geometry of the linker DNA connecting the nucleosomes. The latter are represented by flat disks interacting via an attractive Gay-Berne potential. Our results show that the stiffness of the chromatin fiber strongly depends on the linker DNA length. Furthermore, changing the twisting angle between nucleosomes from 90° to 130° increases the stiffness significantly. An increase in the opening angle from 22° to 34° leads to softer fibers for small linker lengths. We observe that fibers containing a linker histone at each nucleosome are stiffer compared to those without the linker histone. The simulated persistence lengths and elastic moduli agree with experimental data. Finally, we show that the chromatin fiber does not behave as an isotropic elastic rod, but its rigidity depends on the direction of deformation: Chromatin is much more resistant to stretching than to bending.

  14. Monte Carlo simulations of Protein Adsorption

    NASA Astrophysics Data System (ADS)

    Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges

    2008-03-01

    Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix to sheet or random coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of adsorption of a four helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.

  15. Finding Planet Nine: a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-06-01

    Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30°, and an argument of perihelion of 150°. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters and focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one out of the four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node and two of them are compatible with an apsidal anti-alignment scenario. In addition and after studying the current statistics of ETNOs, a cautionary note on the robustness of the perihelia clustering is presented.

  16. Classical Trajectory and Monte Carlo Techniques

    NASA Astrophysics Data System (ADS)

    Olson, Ronald

    The classical trajectory Monte Carlo (CTMC) method originated with Hirschfelder, who studied the H + D2 exchange reaction using a mechanical calculator [58.1]. With the availability of computers, the CTMC method was actively applied to a large number of chemical systems to determine reaction rates, and final state vibrational and rotational populations (see, e.g., Karplus et al. [58.2]). For atomic physics problems, a major step was introduced by Abrines and Percival [58.3] who employed Kepler's equations and the Bohr-Sommerfeld model for atomic hydrogen to investigate electron capture and ionization for intermediate velocity collisions of H+ + H. An excellent description is given by Percival and Richards [58.4]. The CTMC method has a wide range of applicability to strongly-coupled systems, such as collisions by multiply-charged ions [58.5]. In such systems, perturbation methods fail, and basis set limitations of coupled-channel molecular- and atomic-orbital techniques have difficulty in representing the multitude of active excitation, electron capture, and ionization channels. Vector- and parallel-processors now allow increasingly detailed study of the dynamics of the heavy projectile and target, along with the active electrons.

  17. Markov Chain Monte Carlo and Irreversibility

    NASA Astrophysics Data System (ADS)

    Ottobre, Michela

    2016-06-01

    Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
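
    The reversible baseline that this discussion contrasts with is the ordinary Metropolis-Hastings chain, whose accept/reject rule enforces detailed balance with respect to the target measure π. A minimal random-walk Metropolis sampler for a one-dimensional target is sketched below in Python as a generic illustration; it is not the SOL-HMC algorithm of [23].

      import math, random

      def metropolis(log_pi, x0=0.0, step=1.0, n=50000, seed=0):
          """Random-walk Metropolis; with a symmetric proposal q, the accept/reject rule
          enforces detailed balance pi(x) q(x->y) a(x->y) = pi(y) q(y->x) a(y->x)."""
          rng = random.Random(seed)
          x, samples = x0, []
          for _ in range(n):
              y = x + rng.gauss(0.0, step)                      # symmetric proposal
              if math.log(1.0 - rng.random()) < log_pi(y) - log_pi(x):
                  x = y                                         # accept the move
              samples.append(x)
          return samples

      # Example: sample a standard normal target (log-density up to a constant)
      chain = metropolis(lambda x: -0.5 * x * x)
      print(sum(chain) / len(chain))                            # sample mean, close to 0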

  18. Commensurabilities between ETNOs: a Monte Carlo survey

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-07-01

    Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ˜700 au.

  19. Algorithmic differentiation and the calculation of forces by quantum Monte Carlo.

    PubMed

    Sorella, Sandro; Capriotti, Luca

    2010-12-21

    We describe an efficient algorithm to compute forces in quantum Monte Carlo using adjoint algorithmic differentiation. This allows us to apply the space warp coordinate transformation in differential form, and compute all the 3M force components of a system with M atoms with a computational effort comparable to that required to obtain the total energy. A few examples illustrating the method for an electronic system containing several water molecules are presented. With the present technique, the calculation of finite-temperature thermodynamic properties of materials with quantum Monte Carlo will be feasible in the near future.

  20. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. Combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  1. Bayesian phylogeny analysis via stochastic approximation Monte Carlo.

    PubMed

    Cheon, Sooyoung; Liang, Faming

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, while costing the least CPU time.

  2. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    SciTech Connect

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.

  3. OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2014-03-31

    The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.

  4. Monte Carlo simulations: Hidden errors from ``good'' random number generators

    NASA Astrophysics Data System (ADS)

    Ferrenberg, Alan M.; Landau, D. P.; Wong, Y. Joanna

    1992-12-01

    The Wolff algorithm is now accepted as the best cluster-flipping Monte Carlo algorithm for beating "critical slowing down." We show how this method can yield incorrect answers due to subtle correlations in "high quality" random number generators.

  5. Monte Carlo next-event estimates from thermal collisions

    SciTech Connect

    Hendricks, J.S.; Prael, R.E.

    1990-01-01

    A new approximate method has been developed by Richard E. Prael to allow S(α,β) thermal collision contributions to next-event estimators in Monte Carlo calculations. The new technique is generally applicable to next-event estimator contributions from any discrete probability distribution. The method has been incorporated into Version 4 of the production Monte Carlo neutron and photon radiation transport code MCNP. 9 refs.

  6. Multiscale Monte Carlo equilibration: Pure Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Endres, Michael G.; Brower, Richard C.; Detmold, William; Orginos, Kostas; Pochinsky, Andrew V.

    2015-12-01

    We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.

  7. Development of Monte Carlo Capability for Orion Parachute Simulations

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. The Monte Carlo tool described in
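
    The dispersion workflow described here (perturb each nominal input according to an analyst-defined rule, run the core simulation repeatedly, and store the outputs) can be written generically, as in the hedged Python sketch below. The core_sim function, parameter names, and dispersion rules are invented placeholders; this is the general pattern, not the NASA tool itself.

      import random

      def run_monte_carlo(core_sim, nominal, rules, n_runs=1000, seed=0):
          """Run core_sim n_runs times with inputs dispersed around the nominal set.

          rules maps a parameter name to a sampler, e.g. lambda rng: rng.gauss(0.0, 5.0)
          for an additive Gaussian dispersion on that input.
          """
          rng = random.Random(seed)
          results = []
          for _ in range(n_runs):
              inputs = dict(nominal)
              for name, sampler in rules.items():
                  inputs[name] = nominal[name] + sampler(rng)   # perturb this input
              results.append(core_sim(inputs))
          return results

      # Toy "core simulation": terminal descent rate from mass and drag area (illustrative physics only)
      def core_sim(p):
          return (2.0 * p["mass"] * 9.81 / (1.225 * p["drag_area"])) ** 0.5

      nominal = {"mass": 9000.0, "drag_area": 1100.0}
      rules = {"mass": lambda rng: rng.gauss(0.0, 200.0),
               "drag_area": lambda rng: rng.uniform(-50.0, 50.0)}
      rates = run_monte_carlo(core_sim, nominal, rules)
      print(min(rates), max(rates))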

  8. Improved Collision Modeling for Direct Simulation Monte Carlo Methods

    DTIC Science & Technology

    2011-03-01

    number is a measure of the rarefaction of a gas, and will be explained more thoroughly in the following chapter. Continuum solvers that use Navier...Limits on Mathematical Models [4] Kn=0.1, and the flow can be considered rarefied above that value. Direct Simulation Monte Carlo (DSMC) is a stochastic...method which utilizes the Monte Carlo statistical model to simulate gas behavior, which is very useful for these rarefied atmosphere hypersonic

  9. Study of the Transition Flow Regime using Monte Carlo Methods

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1999-01-01

    This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  10. Confidence and efficiency scaling in variational quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Delyon, F.; Bernu, B.; Holzmann, Markus

    2017-02-01

    Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by variational Monte Carlo calculations on the two-dimensional electron gas.
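
    One simple way to use the Kolmogorov-Smirnov idea mentioned here is to compare the empirical distributions of an observable over two disjoint, thinned segments of the chain: in equilibrium the two segments should be statistically indistinguishable. The Python sketch below is only a generic illustration with scipy on synthetic data, not the specific procedure developed in the paper, and it neglects any autocorrelation left after thinning.

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(1)
      # Stand-in for a Monte Carlo time series of some observable (here: white noise)
      chain = rng.standard_normal(20000)

      thinned = chain[::20]                      # thin to reduce autocorrelation
      half = len(thinned) // 2
      stat, p_value = ks_2samp(thinned[:half], thinned[half:])
      print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
      # A small p-value suggests the two halves differ, i.e. the chain may not be equilibrated.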

  11. CosmoPMC: Cosmology sampling with Population Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kilbinger, Martin; Benabed, Karim; Cappé, Olivier; Coupon, Jean; Cardoso, Jean-François; Fort, Gersende; McCracken, Henry Joy; Prunet, Simon; Robert, Christian P.; Wraith, Darren

    2012-12-01

    CosmoPMC is a Monte Carlo sampling method to explore the likelihood of various cosmological probes. The sampling engine, implemented in the package pmclib, is called Population Monte Carlo (PMC), a novel technique for sampling from the posterior. PMC is an adaptive importance sampling method which iteratively improves the proposal to approximate the posterior. This code has been introduced, tested and applied to various cosmology data sets.
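
    Population Monte Carlo is adaptive importance sampling: draw a population from a proposal, weight each point by target over proposal, refit the proposal to the weighted population, and iterate. The Python sketch below is a minimal one-dimensional, single-Gaussian-proposal version with an invented target; CosmoPMC and pmclib use multivariate mixture proposals, which this does not reproduce.

      import numpy as np
      from scipy.stats import norm

      def pmc(log_target, n=5000, n_iter=8, seed=0):
          """Minimal Population Monte Carlo with a single Gaussian proposal."""
          rng = np.random.default_rng(seed)
          mu, sigma = 0.0, 5.0                   # deliberately poor initial proposal
          for _ in range(n_iter):
              x = rng.normal(mu, sigma, n)       # draw a population from the proposal
              log_w = log_target(x) - norm.logpdf(x, mu, sigma)
              w = np.exp(log_w - log_w.max())
              w /= w.sum()                       # normalized importance weights
              mu = np.sum(w * x)                 # refit the proposal to the weighted sample
              sigma = np.sqrt(np.sum(w * (x - mu) ** 2))
          return x, w

      # Example: posterior proportional to a normal centered at 3 with width 0.5
      samples, weights = pmc(lambda x: -0.5 * ((x - 3.0) / 0.5) ** 2)
      print(np.sum(weights * samples))           # importance-weighted mean, close to 3.0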

  12. Green's function Monte Carlo calculations of 4He

    SciTech Connect

    Carlson, J.A.

    1988-01-01

    Green's Function Monte Carlo methods have been developed to study the ground state properties of light nuclei. These methods are shown to reproduce results of Faddeev calculations for A = 3, and are then used to calculate ground state energies, one- and two-body distribution functions, and the D-state probability for the alpha particle. Results are compared to variational Monte Carlo calculations for several nuclear interaction models. 31 refs.

  13. Successful combination of the stochastic linearization and Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Elishakoff, I.; Colombi, P.

    1993-01-01

    A combination of stochastic linearization and Monte Carlo techniques is presented for the first time in the literature. A system with separable nonlinear damping and a nonlinear restoring force is considered. The proposed combination of the energy-wise linearization with the Monte Carlo method yields an error under 5 percent, reducing the error associated with the conventional stochastic linearization by a factor of 4.6.

  14. de Finetti Priors using Markov chain Monte Carlo computations.

    PubMed

    Bacallado, Sergio; Diaconis, Persi; Holmes, Susan

    2015-07-01

    Recent advances in Monte Carlo methods allow us to revisit work by de Finetti who suggested the use of approximate exchangeability in the analyses of contingency tables. This paper gives examples of computational implementations using Metropolis Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible or three way models for discrete exponential families using polynomial priors and Gröbner bases.

  15. de Finetti Priors using Markov chain Monte Carlo computations

    PubMed Central

    Bacallado, Sergio; Diaconis, Persi; Holmes, Susan

    2015-01-01

    Recent advances in Monte Carlo methods allow us to revisit work by de Finetti who suggested the use of approximate exchangeability in the analyses of contingency tables. This paper gives examples of computational implementations using Metropolis Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible or three way models for discrete exponential families using polynomial priors and Gröbner bases. PMID:26412947

  16. Monte Carlo methods and applications in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  17. Event-chain Monte Carlo for classical continuous spin models

    NASA Astrophysics Data System (ADS)

    Michel, Manon; Mayer, Johannes; Krauth, Werner

    2015-10-01

    We apply the event-chain Monte Carlo algorithm to classical continuum spin models on a lattice and clarify the condition for its validity. In the two-dimensional XY model, it outperforms the local Monte Carlo algorithm by two orders of magnitude, although it remains slower than the Wolff cluster algorithm. In the three-dimensional XY spin glass model at low temperature, the event-chain algorithm is far superior to the other algorithms.

  18. DPEMC: A Monte Carlo for double diffraction

    NASA Astrophysics Data System (ADS)

    Boonekamp, M.; Kúcs, T.

    2005-05-01

    We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations.
    Program summary
    Title of the program: DPEMC, version 2.4
    Catalogue identifier: ADVF
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVF
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems
    Operating system: UNIX; Linux
    Programming language used: FORTRAN 77
    High speed storage required: <25 MB
    No. of lines in distributed program, including test data, etc.: 71 399
    No. of bytes in distributed program, including test data, etc.: 639 950
    Distribution format: tar.gz
    Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. C 20 (2001) 599; Eur

  19. Monte Carlo simulation of peak-acceleration attenuation using a finite-fault uniform-patch model including isochrone and extremal characteristics

    USGS Publications Warehouse

    Rogers, A.M.; Perkins, D.M.

    1996-01-01

    A finite-fault statistical model of the earthquake source is used to confirm observed magnitude and distance saturation scaling in a large peak-acceleration data set. This model allows us to determine the form of peak-acceleration attenuation curves without a priori assumptions about their shape or scaling properties. The source is composed of patches having uniform size and statistical properties. The primary source parameters are the patch peak-acceleration distribution mean, the distribution standard deviation, the patch size, and patch-rupture duration. Although our model assumes no scaling of peak acceleration with magnitude at the patch, the peak-acceleration attenuation curves, nevertheless, strongly scale with magnitude (dap/dM ≠ 0), and the scaling is distance dependent (dap/dM = f(r)). The distance-dependent magnitude scaling arises from two principal sources in the model. For a propagating rupture, loci exist on the fault from which radiated energy arrives at a particular station at the same time. These loci are referred to as isochrones. As fault size increases, the length of the isochrones and, hence, the number of additive pulses increase. Thus, peak accelerations increase with magnitude. The second effect, which arises in a completely different manner, is due to extreme-value properties. That is, as the fault size increases, the number of patches on the fault and the number of peak values at the station increase. Because these attenuated pulses are produced by a statistical distribution at the patch, the largest value will depend on the total number of peak values available on the seismogram. We refer to this result as the extremal effect, because it is predicted by the theory of extreme values. Both the extremal and isochrone effects are moderated by attenuation and distance to the fault, leading to magnitude- and distance-dependent peak-acceleration scaling. Remarkably, the scaling produced by both effects is very similar, although the

  20. kmos: A lattice kinetic Monte Carlo framework

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten

    2014-07-01

    Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
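
    The kernel of any rejection-free lattice kMC code is the same: enumerate the currently available events, pick one with probability proportional to its rate, execute it, and advance time by an exponentially distributed increment set by the total rate. The Python sketch below shows that kernel for a deliberately trivial one-dimensional adsorption/desorption model with made-up rates; it is a generic illustration, not the code that kmos generates.

      import math, random

      def kmc(n_sites=50, k_ads=1.0, k_des=0.5, t_end=100.0, seed=0):
          """Rejection-free kinetic Monte Carlo for adsorption/desorption on a 1D lattice.
          Rates and lattice size are illustrative values only."""
          rng = random.Random(seed)
          occ = [0] * n_sites                    # 0 = empty site, 1 = occupied site
          t = 0.0
          while t < t_end:
              # Enumerate the currently available events and their rates
              events = [(i, "ads", k_ads) for i in range(n_sites) if occ[i] == 0]
              events += [(i, "des", k_des) for i in range(n_sites) if occ[i] == 1]
              r_tot = sum(rate for _, _, rate in events)
              t += -math.log(1.0 - rng.random()) / r_tot   # exponential waiting time
              # Select one event with probability proportional to its rate
              target, acc = rng.random() * r_tot, 0.0
              for i, kind, rate in events:
                  acc += rate
                  if acc >= target:
                      occ[i] = 1 if kind == "ads" else 0   # execute the chosen event
                      break
          return sum(occ) / n_sites              # final coverage

      # Coverage should fluctuate around k_ads / (k_ads + k_des) = 2/3
      print(kmc())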

  1. Lattice Monte Carlo simulations of polymer melts

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping

    2014-12-01

    We use Monte Carlo simulations to study polymer melts consisting of fully flexible and moderately stiff chains in the bond fluctuation model at a volume fraction 0.5. In order to reduce the local density fluctuations, we test a pre-packing process for the preparation of the initial configurations of the polymer melts, before the excluded volume interaction is switched on completely. This process leads to a significantly faster decrease of the number of overlapping monomers on the lattice. This is useful for simulating very large systems, where the statistical properties of the model with a marginally incomplete elimination of excluded volume violations are the same as those of the model with strictly excluded volume. We find that the internal mean square end-to-end distance for moderately stiff chains in a melt can be very well described by a freely rotating chain model with a precise estimate of the bond-bond orientational correlation between two successive bond vectors in equilibrium. The plot of the probability distributions of the reduced end-to-end distance of chains of different stiffness also shows that the data collapse is excellent and described very well by the Gaussian distribution for ideal chains. However, while our results confirm the systematic deviations between Gaussian statistics for the chain structure factor Sc(q) [minimum in the Kratky-plot] found by Wittmer et al. [EPL 77, 56003 (2007)] for fully flexible chains in a melt, we show that for the available chain length these deviations are no longer visible, when the chain stiffness is included. The mean square bond length and the compressibility estimated from collective structure factors depend slightly on the stiffness of the chains.

  2. Monte-Carlo simulation of Callisto's exosphere

    NASA Astrophysics Data System (ADS)

    Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.

    2015-12-01

    We model Callisto's exosphere based on its ice as well as non-ice surface via the use of a Monte-Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sun-lit side, their density profiles (with the exception of H and H2) decrease much more rapidly than the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.

  3. Perturbation Monte Carlo methods for tissue structure alterations.

    PubMed

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome

    2013-01-01

    This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers: whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
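
    In conventional perturbation Monte Carlo (the starting point that this paper extends), each baseline photon path is reweighted to represent a medium with perturbed absorption and scattering coefficients; for a path segment with j collisions and length S inside the perturbed region, a commonly used weight ratio is (mu_s'/mu_s)^j * exp(-(mu_t' - mu_t) * S). The Python sketch below applies only that standard reweighting to stored path summaries with invented numbers; it does not implement the paper's extension to arbitrary phase-function perturbations.

      import math

      def pmc_reweight(paths, mus, mua, mus_p, mua_p):
          """Standard perturbation-Monte Carlo reweighting of baseline photon paths.

          Each path summary is (weight, collisions, length) accumulated inside the
          perturbed region during the baseline simulation; mus/mua are the baseline
          scattering/absorption coefficients and mus_p/mua_p the perturbed ones.
          """
          mut, mut_p = mus + mua, mus_p + mua_p
          total = 0.0
          for w, j, s in paths:
              total += w * (mus_p / mus) ** j * math.exp(-(mut_p - mut) * s)
          return total / len(paths)

      # Invented baseline path summaries: (weight, collisions in region, path length in cm)
      paths = [(1.0, 3, 0.4), (1.0, 5, 0.7), (1.0, 2, 0.3)]
      baseline = sum(w for w, _, _ in paths) / len(paths)
      perturbed = pmc_reweight(paths, mus=100.0, mua=0.1, mus_p=105.0, mua_p=0.12)
      print(baseline, perturbed)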

  4. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    SciTech Connect

    Pevey, Ronald E.

    2005-09-15

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
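
    For readers unfamiliar with the acceptance test being analyzed, a minimal sketch of the arithmetic is given below; all numbers are hypothetical, and the USL construction shown is one common convention rather than a statement of any specific facility procedure.

    ```python
    # Illustrative only: the acceptance test whose statistics the paper analyzes.
    # A calculated k-effective is accepted as subcritical when
    #   k_calc + n_sigma * sigma_calc <= USL,
    # where the USL folds in the benchmarking bias, its uncertainty and an
    # administrative margin. All numbers below are hypothetical.
    bias = -0.005            # benchmarking bias in k-effective
    sigma_bias = 0.004       # standard deviation of the bias
    admin_margin = 0.05      # administrative margin of subcriticality
    usl = 1.0 + bias - sigma_bias - admin_margin

    k_calc = 0.935           # Monte Carlo estimate of k-effective
    sigma_calc = 0.001       # its reported statistical standard deviation
    n_sigma = 2.0            # standard deviations added before comparison to the USL

    accepted = k_calc + n_sigma * sigma_calc <= usl
    print(f"USL = {usl:.4f}, k + {n_sigma}*sigma = {k_calc + n_sigma * sigma_calc:.4f}, "
          f"accepted = {accepted}")
    ```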

  5. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    David Ceperley

    2011-03-02

    CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector finding an accurate measure of the renormalization factor that we compared with quantum-Monte-Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. Our calculated results are in good agreement with the experiment. We have been studying the heat of formation for various Kubas complexes of molecular

  6. A Monte Carlo method for the PDF (Probability Density Functions) equations of turbulent flow

    NASA Astrophysics Data System (ADS)

    Pope, S. B.

    1980-02-01

    The transport equations of joint probability density functions (pdfs) in turbulent flows are simulated using a Monte Carlo method because finite difference solutions of the equations are impracticable, mainly due to the large dimensionality of the pdfs. Attention is focused on the equation for the joint pdf of chemical and thermodynamic properties in turbulent reactive flows. It is shown that the Monte Carlo method provides a true simulation of this equation and that the amount of computation required increases only linearly with the number of properties considered. Consequently, the method can be used to solve the pdf equation for turbulent flows involving many chemical species and complex reaction kinetics. Monte Carlo calculations of the pdf of temperature in a turbulent mixing layer are reported. These calculations are in good agreement with the measurements of Batt (1977).
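
    The particle representation that makes the cost linear in the number of properties can be sketched as follows. The mixing closure used here (IEM) and all parameter values are illustrative assumptions, not necessarily those of the original work.

    ```python
    import numpy as np

    # Minimal sketch of the particle representation behind a pdf method: the joint
    # pdf of a scalar is carried by notional particles, each evolved by a mixing
    # model (here IEM, a common but not unique choice) plus a reaction source term.
    # Parameter values are illustrative.
    rng = np.random.default_rng(2)
    n_particles = 20_000
    phi = rng.choice([0.0, 1.0], size=n_particles)   # bimodal initial scalar (unmixed)

    c_phi = 2.0        # mixing-model constant
    omega = 50.0       # turbulence frequency [1/s]
    dt = 1.0e-4        # time step [s]
    damkohler = 5.0    # illustrative reaction-rate constant

    for _ in range(2000):
        mean_phi = phi.mean()
        # IEM mixing: relax each particle toward the ensemble mean
        phi += -0.5 * c_phi * omega * (phi - mean_phi) * dt
        # simple single-step reaction source term (illustrative)
        phi += damkohler * phi * (1.0 - phi) * dt
        phi = np.clip(phi, 0.0, 1.0)

    print(f"mean = {phi.mean():.3f}, variance = {phi.var():.4f}")
    # The cost of this loop grows only linearly with the number of scalars carried.
    ```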

  7. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    SciTech Connect

    Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith

    2011-07-01

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open-source Monte Carlo code OpenMC with the open-source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on relative pin powers.
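
    The functional expansion tally hand-off can be sketched with a one-dimensional Legendre example (a simplified stand-in; the actual coupling may use different basis functions, and the coefficients below are invented): the Monte Carlo code tallies expansion coefficients of the power shape, and the finite element side evaluates the continuous expansion at its own mesh points.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    # Sketch of a functional expansion tally (FET) hand-off: the Monte Carlo code
    # tallies the coefficients a_n of a Legendre expansion of an axial power shape,
    # and the finite-element side evaluates the continuous expansion at its own mesh
    # points. Coefficients and geometry are invented for illustration.
    a = np.array([1.0, 0.0, -0.35, 0.0, 0.05])     # tallied expansion coefficients

    def power_shape(z, z_min=0.0, z_max=366.0):
        """Evaluate the FET reconstruction at axial positions z [cm]."""
        xi = 2.0 * (np.asarray(z) - z_min) / (z_max - z_min) - 1.0   # map to [-1, 1]
        # reconstruction: sum_n (2n+1)/2 * a_n * P_n(xi), with a_n the raw moments
        scaled = a * (2.0 * np.arange(len(a)) + 1.0) / 2.0
        return legendre.legval(xi, scaled)

    mesh_nodes = np.linspace(0.0, 366.0, 11)        # a coarse finite-element axial mesh
    print(np.round(power_shape(mesh_nodes), 3))
    ```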

  8. Range uncertainties in proton therapy and the role of Monte Carlo simulations

    PubMed Central

    Paganetti, Harald

    2012-01-01

    The main advantages of proton therapy are the reduced total energy deposited in the patient as compared to photon techniques and the finite range of the proton beam. The latter adds an additional degree of freedom to treatment planning. The range in tissue is associated with considerable uncertainties caused by imaging, patient setup, beam delivery and dose calculation. Reducing the uncertainties would allow a reduction of the treatment volume and thus allow a better utilization of the advantages of protons. This article summarizes the role of Monte Carlo simulations when aiming at a reduction of range uncertainties in proton therapy. Differences in dose calculation when comparing Monte Carlo with analytical algorithms are analyzed as well as range uncertainties due to material constants and CT conversion. Range uncertainties due to biological effects and the role of Monte Carlo for in vivo range verification are discussed. Furthermore, the current range uncertainty recipes used at several proton therapy facilities are revisited. We conclude that a significant impact of Monte Carlo dose calculation can be expected in complex geometries where local range uncertainties due to multiple Coulomb scattering will reduce the accuracy of analytical algorithms. In these cases Monte Carlo techniques might reduce the range uncertainty by several mm. PMID:22571913

  9. Range uncertainties in proton therapy and the role of Monte Carlo simulations.

    PubMed

    Paganetti, Harald

    2012-06-07

    The main advantages of proton therapy are the reduced total energy deposited in the patient as compared to photon techniques and the finite range of the proton beam. The latter adds an additional degree of freedom to treatment planning. The range in tissue is associated with considerable uncertainties caused by imaging, patient setup, beam delivery and dose calculation. Reducing the uncertainties would allow a reduction of the treatment volume and thus allow a better utilization of the advantages of protons. This paper summarizes the role of Monte Carlo simulations when aiming at a reduction of range uncertainties in proton therapy. Differences in dose calculation when comparing Monte Carlo with analytical algorithms are analyzed as well as range uncertainties due to material constants and CT conversion. Range uncertainties due to biological effects and the role of Monte Carlo for in vivo range verification are discussed. Furthermore, the current range uncertainty recipes used at several proton therapy facilities are revisited. We conclude that a significant impact of Monte Carlo dose calculation can be expected in complex geometries where local range uncertainties due to multiple Coulomb scattering will reduce the accuracy of analytical algorithms. In these cases Monte Carlo techniques might reduce the range uncertainty by several mm.

  10. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    SciTech Connect

    Brown, Forrest B.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  11. Coherent Scattering Imaging Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Hassan, Laila Abdulgalil Rafik

    Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal to noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give a good contrast without reducing the intensity to a noise level. The optimal source to sample distance was determined to be such that the source should be located at the focal distance of the grid. A carcinoma lump of 0.5 × 0.5 × 0.5 cm³ in size was detectable, which is reasonable considering the high noise due to the use of a relatively small number of incident photons for computational reasons. A further study is needed to assess the effect of breast density and breast thickness

  12. Finding organic vapors - a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku

    2010-05-01

    drawbacks in accuracy, the inability to find diurnal variation and the lack of size resolution. Here, we aim to shed some light onto the problem by applying an ad hoc Monte Carlo algorithm to a well established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. But more importantly, by basing the comparison on particle number size distribution data - a quantity that can be quite reliably measured - the accuracy of the results is good.

  13. Uncertainty Analyses for Localized Tallies in Monte Carlo Eigenvalue Calculations

    SciTech Connect

    Mervin, Brenden T.; Maldonado, G Ivan; Mosher, Scott W; Wagner, John C

    2011-01-01

    It is well known that statistical estimates obtained from Monte Carlo criticality simulations can be adversely affected by cycle-to-cycle correlations in the fission source. In addition there are several other more fundamental issues that may lead to errors in Monte Carlo results. These factors can have a significant impact on the calculated eigenvalue, localized tally means and their associated standard deviations. In fact, modern Monte Carlo computational tools may generate standard deviation estimates that are a factor of five or more lower than the true standard deviation for a particular tally due to the inter-cycle correlations in the fission source. The magnitude of this under-prediction can climb as high as one hundred when combined with an ill-converged fission source or poor sampling techniques. Since Monte Carlo methods are widely used in reactor analysis (as a benchmarking tool) and criticality safety applications, an in-depth understanding of the effects of these issues must be developed in order to support the practical use of Monte Carlo software packages. A rigorous statistical analysis of localized tally results in eigenvalue calculations is presented using the SCALE/KENO-VI and MCNP Monte Carlo codes. The purpose of this analysis is to investigate the under-prediction in the uncertainty and its sensitivity to problem characteristics and calculational parameters, and to provide a comparative study between the two codes with respect to this under-prediction. It is shown herein that adequate source convergence along with proper specification of Monte Carlo parameters can reduce the magnitude of under-prediction in the uncertainty to reasonable levels; below a factor of 2 when inter-cycle correlations in the fission source are not a significant factor. In addition, through the use of a modified sampling procedure, the effects of inter-cycle correlations on both the mean value and standard deviation estimates can be isolated.
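
    One standard way to expose the under-prediction described above, not necessarily the analysis used in this work, is to compare the naive cycle-wise standard error with a batched estimate that absorbs inter-cycle correlation; the correlated tally series below is synthetic.

    ```python
    import numpy as np

    # Sketch of a batching diagnostic for correlated cycle tallies: the naive
    # standard error assumes independent cycles, while batch means largely absorb
    # the inter-cycle correlation. The "tally" series is a synthetic AR(1) process.
    rng = np.random.default_rng(3)
    n_cycles, rho = 4000, 0.9
    noise = rng.normal(0.0, 1.0, n_cycles)
    tally = np.empty(n_cycles)
    tally[0] = noise[0]
    for i in range(1, n_cycles):
        tally[i] = rho * tally[i - 1] + np.sqrt(1.0 - rho**2) * noise[i]

    naive_se = tally.std(ddof=1) / np.sqrt(n_cycles)

    batch = 100
    batch_means = tally[: n_cycles // batch * batch].reshape(-1, batch).mean(axis=1)
    batched_se = batch_means.std(ddof=1) / np.sqrt(len(batch_means))

    print(f"naive SE = {naive_se:.4f}, batched SE = {batched_se:.4f}, "
          f"under-prediction factor ~ {batched_se / naive_se:.1f}")
    ```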

  14. The Monte Carlo code MCPTV--Monte Carlo dose calculation in radiation therapy with carbon ions.

    PubMed

    Karg, Juergen; Speer, Stefan; Schmidt, Manfred; Mueller, Reinhold

    2010-07-07

    The Monte Carlo code MCPTV is presented. MCPTV is designed for dose calculation in treatment planning in radiation therapy with particles and especially carbon ions. MCPTV has a voxel-based concept and can perform a fast calculation of the dose distribution on patient CT data. Material and density information from CT are taken into account. Electromagnetic and nuclear interactions are implemented. Furthermore the algorithm gives information about the particle spectra and the energy deposition in each voxel. This can be used to calculate the relative biological effectiveness (RBE) for each voxel. Depth dose distributions are compared to experimental data giving good agreement. A clinical example is shown to demonstrate the capabilities of the MCPTV dose calculation.

  15. Universality of the Ising and the S=1 model on Archimedean lattices: a Monte Carlo determination.

    PubMed

    Malakis, A; Gulpinar, G; Karaaslan, Y; Papakonstantinou, T; Aslan, G

    2012-03-01

    The Ising models S=1/2 and S=1 are studied by efficient Monte Carlo schemes on the (3,4,6,4) and the (3,3,3,3,6) Archimedean lattices. The algorithms used, a hybrid Metropolis-Wolff algorithm and a parallel tempering protocol, are briefly described and compared with the simple Metropolis algorithm. Accurate Monte Carlo data are produced at the exact critical temperatures of the Ising model for these lattices. Their finite-size analysis provides, with high accuracy, all critical exponents which, as expected, are the same as the well-known exact values of the 2D Ising model. A detailed finite-size scaling analysis of our Monte Carlo data for the S=1 model on the same lattices provides very clear evidence that this model also obeys the 2D Ising model critical exponents. As a result, we find that recent Monte Carlo simulations and attempts to define an effective dimensionality for the S=1 model on these lattices are misleading. Accurate estimates are obtained for the critical amplitudes of the logarithmic expansions of the specific heat for both models on the two Archimedean lattices.
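
    The cluster-update ingredient of the hybrid scheme can be sketched for the S=1/2 case on a plain square lattice (the Archimedean lattices differ only in their neighbor tables and critical temperatures); lattice size and run length are illustrative.

    ```python
    import numpy as np

    # Simplified sketch of one ingredient of the hybrid scheme above: a Wolff
    # cluster update for the S=1/2 Ising model on a square lattice with J = kB = 1.
    rng = np.random.default_rng(4)
    L = 32
    spins = rng.choice([-1, 1], size=(L, L))
    T = 2.0 / np.log(1.0 + np.sqrt(2.0))       # exact square-lattice critical temperature
    p_add = 1.0 - np.exp(-2.0 / T)             # Wolff bond-activation probability

    def wolff_update(s):
        """Grow one Wolff cluster from a random seed and flip it in place."""
        i, j = rng.integers(L, size=2)
        seed = s[i, j]
        stack, in_cluster = [(i, j)], {(i, j)}
        while stack:
            x, y = stack.pop()
            for nx, ny in ((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L):
                if (nx, ny) not in in_cluster and s[nx, ny] == seed and rng.random() < p_add:
                    in_cluster.add((nx, ny))
                    stack.append((nx, ny))
        for x, y in in_cluster:
            s[x, y] = -seed

    mags = []
    for sweep in range(2000):
        wolff_update(spins)
        if sweep > 500:                        # discard equilibration
            mags.append(abs(spins.mean()))
    print(f"<|m|> at Tc on a {L}x{L} lattice: {np.mean(mags):.3f}")
    ```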

  16. Universality of the Ising and the S=1 model on Archimedean lattices: A Monte Carlo determination

    NASA Astrophysics Data System (ADS)

    Malakis, A.; Gulpinar, G.; Karaaslan, Y.; Papakonstantinou, T.; Aslan, G.

    2012-03-01

    The Ising models S=1/2 and S=1 are studied by efficient Monte Carlo schemes on the (3,4,6,4) and the (3,3,3,3,6) Archimedean lattices. The algorithms used, a hybrid Metropolis-Wolff algorithm and a parallel tempering protocol, are briefly described and compared with the simple Metropolis algorithm. Accurate Monte Carlo data are produced at the exact critical temperatures of the Ising model for these lattices. Their finite-size analysis provides, with high accuracy, all critical exponents which, as expected, are the same as the well-known exact values of the 2D Ising model. A detailed finite-size scaling analysis of our Monte Carlo data for the S=1 model on the same lattices provides very clear evidence that this model also obeys the 2D Ising model critical exponents. As a result, we find that recent Monte Carlo simulations and attempts to define an effective dimensionality for the S=1 model on these lattices are misleading. Accurate estimates are obtained for the critical amplitudes of the logarithmic expansions of the specific heat for both models on the two Archimedean lattices.

  17. An unbiased Hessian representation for Monte Carlo PDFs.

    PubMed

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) which has been transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available together with (through LHAPDF6) a Hessian representation of the NNPDF3.0 set and the MC-H PDF set.

  18. Frequency domain optical tomography using a Monte Carlo perturbation method

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshihiro; Sakamoto, Hiroki

    2016-04-01

    A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions in an inverse problem that is otherwise ill-posed due to cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.
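
    The complex-weight idea can be sketched as follows: the modulation frequency enters as an imaginary addition to the absorption term, so each photon's weight acquires a phase proportional to its path length. The path lengths below are synthetic placeholders rather than output of a real heterogeneous-tissue transport calculation.

    ```python
    import numpy as np

    # Hedged sketch of complex-valued photon weights in the frequency domain: the
    # modulation adds i*omega/c_n to the attenuation, so the accumulated weight's
    # magnitude gives the AC amplitude and its argument the phase delay.
    rng = np.random.default_rng(5)
    c_tissue = 2.14e10          # speed of light in tissue [cm/s] (refractive index ~1.4)
    mu_a = 0.05                 # absorption coefficient [1/cm]
    f_mod = 100e6               # source modulation frequency [Hz]
    omega = 2.0 * np.pi * f_mod

    n_photons, n_steps = 10_000, 400
    step_lengths = rng.exponential(1.0 / 10.0, (n_photons, n_steps))  # mu_s' = 10/cm
    total_path = step_lengths.sum(axis=1)       # synthetic per-photon path lengths [cm]

    # Complex attenuation accumulated over each photon's path
    weights = np.exp(-(mu_a + 1j * omega / c_tissue) * total_path)
    fluence = weights.mean()
    print(f"AC amplitude ~ {abs(fluence):.3e}, phase ~ {np.degrees(np.angle(fluence)):.1f} deg")
    ```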

  19. Complete Monte Carlo Simulation of Neutron Scattering Experiments

    SciTech Connect

    Drosg, M.

    2011-12-13

    In the far past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited calculation power of the ancient computers as well as the lack of powerful Monte Carlo codes and the limitations of the database available at the time prevented a complete simulation of the actual experiment. Using e.g. the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can be simulated almost completely with a high degree of precision using a modern PC, which has a computing power that is ten thousand times that of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g. atomic mass, relativistic correction, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments from liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that the cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of ³He(n,n)³He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by a factor of five at least, making this work relevant. Even more important are the corrections to the Karlsruhe data

  20. Exact special twist method for quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Dagrada, Mario; Karakuzu, Seher; Vildosola, Verónica Laura; Casula, Michele; Sorella, Sandro

    2016-12-01

    We present a systematic investigation of the special twist method introduced by Rajagopal et al. [Phys. Rev. B 51, 10591 (1995), 10.1103/PhysRevB.51.10591] for reducing finite-size effects in correlated calculations of periodic extended systems with Coulomb interactions and Fermi statistics. We propose a procedure for finding special twist values which, at variance with previous applications of this method, reproduce the energy of the mean-field infinite-size limit solution within an adjustable (arbitrarily small) numerical error. This choice of the special twist is shown to be the most accurate single-twist solution for curing one-body finite-size effects in correlated calculations. For these reasons we dubbed our procedure "exact special twist" (EST). EST only needs a fully converged independent-particles or mean-field calculation within the primitive cell and a simple fit to find the special twist along a specific direction in the Brillouin zone. We first assess the performances of EST in a simple correlated model such as the three-dimensional electron gas. Afterwards, we test its efficiency within ab initio quantum Monte Carlo simulations of metallic elements of increasing complexity. We show that EST displays an overall good performance in reducing finite-size errors comparable to the widely used twist average technique but at a much lower computational cost since it involves the evaluation of just one wave function. We also demonstrate that the EST method shows similar performances in the calculation of correlation functions, such as the ionic forces for structural relaxation and the pair radial distribution function in liquid hydrogen. Our conclusions point to the usefulness of EST for correlated supercell calculations; our method will be particularly relevant when the physical problem under consideration requires large periodic cells.

  1. Monte Carlo dose calculations in advanced radiotherapy

    NASA Astrophysics Data System (ADS)

    Bush, Karl Kenneth

    The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations requires a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. To perform such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity. In a second component of

  2. Monte Carlo simulation in statistical physics: an introduction

    NASA Astrophysics Data System (ADS)

    Binder, K.; Heermann, D. W.

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work, a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.

  3. Monte Carlo simulation of laser attenuation characteristics in fog

    NASA Astrophysics Data System (ADS)

    Wang, Hong-Xia; Sun, Chao; Zhu, You-zhang; Sun, Hong-hui; Li, Pan-shi

    2011-06-01

    Based on Mie scattering theory and a gamma size-distribution model, the scattering extinction parameter of spherical fog drops is calculated. For the transmission attenuation of the laser in fog, a Monte Carlo simulation model is established, and the dependence of the attenuation ratio on visibility and field angle is computed and analysed using a program developed in MATLAB. The results of the Monte Carlo method in this paper are compared with those of the single-scattering method. The results show that the influence of multiple scattering needs to be considered when the visibility is low, where single-scattering calculations have larger errors. The phenomenon of multiple scattering is captured better when the Monte Carlo method is used to calculate the attenuation ratio of the laser transmitted through the fog.
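
    A generic photon random walk of the kind used for such fog calculations is sketched below (not the authors' MATLAB program; the extinction coefficient, asymmetry parameter and receiver geometry are placeholder values).

    ```python
    import numpy as np

    # Illustrative photon random walk through fog: Henyey-Greenstein scattering,
    # exponential free paths, survival weighting for absorption, and a count of
    # photons leaving a slab within a receiver half-angle. All values are placeholders.
    rng = np.random.default_rng(6)
    mu_e = 3.0                      # extinction coefficient [1/km] (set by visibility)
    albedo = 0.9                    # single-scattering albedo of the fog droplets
    g = 0.85                        # Henyey-Greenstein asymmetry parameter
    slab = 1.0                      # propagation distance [km]
    half_angle = np.radians(5.0)    # receiver field half-angle

    def hg_cos(u):
        """Sample cos(theta) from the Henyey-Greenstein phase function."""
        s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
        return (1.0 + g * g - s * s) / (2.0 * g)

    def rotate(d, ct, phi):
        """Rotate direction d by polar angle acos(ct) and azimuth phi (MCML-style)."""
        st = np.sqrt(max(0.0, 1.0 - ct * ct))
        ux, uy, uz = d
        if abs(uz) > 0.99999:
            return np.array([st * np.cos(phi), st * np.sin(phi), ct * np.sign(uz)])
        den = np.sqrt(1.0 - uz * uz)
        return np.array([
            st * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / den + ux * ct,
            st * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / den + uy * ct,
            -st * np.cos(phi) * den + uz * ct,
        ])

    n_photons, received = 5000, 0.0
    for _ in range(n_photons):
        pos, direction, weight = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
        while True:
            pos = pos + direction * rng.exponential(1.0 / mu_e)
            if pos[2] >= slab:                  # reached the receiver plane
                if np.arccos(np.clip(direction[2], -1.0, 1.0)) <= half_angle:
                    received += weight
                break
            if pos[2] < 0.0 or weight < 1e-3:   # left through the source plane / cut off
                break                           # (a production code would use roulette)
            weight *= albedo                    # absorption handled by survival weighting
            direction = rotate(direction, hg_cos(rng.random()), 2.0 * np.pi * rng.random())

    print(f"received fraction ~ {received / n_photons:.3f}")
    ```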

  4. Classical Perturbation Theory for Monte Carlo Studies of System Reliability

    SciTech Connect

    Lewins, Jeffrey D.

    2001-03-15

    A variational principle for a Markov system allows the derivation of perturbation theory for models of system reliability, with prospects of extension to generalized Markov processes of a wide nature. It is envisaged that Monte Carlo or stochastic simulation will supply the trial functions for such a treatment, which obviates the standard difficulties of direct analog Monte Carlo perturbation studies. The development is given in the specific mode for first- and second-order theory, using an example with known analytical solutions. The adjoint equation is identified with the importance function and a discussion given as to how both the forward and backward (adjoint) fields can be obtained from a single Monte Carlo study, with similar interpretations for the additional functions required by second-order theory. Generalized Markov models with age-dependence are identified as coming into the scope of this perturbation theory.

  5. BACKWARD AND FORWARD MONTE CARLO METHOD IN POLARIZED RADIATIVE TRANSFER

    SciTech Connect

    Yong, Huang; Guo-Dong, Shi; Ke-Yong, Zhu

    2016-03-20

    In general, the Stokes vector cannot be calculated in reverse in vector radiative transfer. This paper presents a novel backward and forward Monte Carlo simulation strategy to study vector radiative transfer in a participating medium. A backward Monte Carlo process is used to calculate the ray trajectory and the endpoint of the ray. The Stokes vector is then obtained by a forward Monte Carlo process. A one-dimensional graded-index semitransparent medium is taken as the physical model, and polarized thermal emission in the medium is studied. The solution process for non-scattering, isotropically scattering, and anisotropically scattering media is discussed. The influence of the optical thickness and albedo on the Stokes vector is studied. The results show that the U and V components of the apparent Stokes vector are very small, but the Q component of the apparent Stokes vector is relatively large and cannot be ignored.

  6. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  7. Monte Carlo tests of the ELIPGRID-PC algorithm

    SciTech Connect

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
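
    The kind of Monte Carlo check used for this validation can be sketched as follows (geometry values are illustrative, not those of the 320 test cases): an elliptical hot spot with random position and orientation is dropped onto a square sampling grid, and the fraction of trials in which at least one grid node falls inside it gives the detection probability.

    ```python
    import numpy as np

    # Sketch of a Monte Carlo estimate of hot-spot detection probability for a
    # square sampling grid; the grid spacing and ellipse dimensions are illustrative.
    rng = np.random.default_rng(7)
    grid_spacing = 10.0     # [m]
    semi_major = 6.0        # [m]
    semi_minor = 3.0        # [m]

    def detected(cx, cy, angle):
        """True if any node of the square grid lies inside the ellipse."""
        reach = int(np.ceil(semi_major / grid_spacing)) + 1
        i0, j0 = int(round(cx / grid_spacing)), int(round(cy / grid_spacing))
        ca, sa = np.cos(angle), np.sin(angle)
        for i in range(i0 - reach, i0 + reach + 1):
            for j in range(j0 - reach, j0 + reach + 1):
                dx, dy = i * grid_spacing - cx, j * grid_spacing - cy
                u, v = ca * dx + sa * dy, -sa * dx + ca * dy   # rotate into ellipse frame
                if (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0:
                    return True
        return False

    n_trials = 20_000
    hits = sum(
        detected(rng.uniform(0, grid_spacing), rng.uniform(0, grid_spacing),
                 rng.uniform(0, np.pi))
        for _ in range(n_trials)
    )
    p = hits / n_trials
    print(f"hot-spot detection probability ~ {p:.3f} +/- {np.sqrt(p * (1 - p) / n_trials):.3f}")
    ```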

  8. SPQR: a Monte Carlo reactor kinetics code [LMFBR]

    SciTech Connect

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from those of deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.

  9. Photon beam description in PEREGRINE for Monte Carlo dose calculations

    SciTech Connect

    Cox, L. J., LLNL

    1997-03-04

    The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed, and a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte Carlo simulations of treatment heads, makes it possible to describe any teletherapy photon beam to the precision needed for highly accurate Monte Carlo dose calculations in complex clinical configurations that use standard patient modifiers such as collimator jaws, wedges, blocks, and/or multi-leaf collimators. Generic beam descriptions for a class of treatment machines can readily be adjusted to yield dose calculations that match specific clinical sites.

  10. Implementation of Monte Carlo Simulations for the Gamma Knife System

    NASA Astrophysics Data System (ADS)

    Xiong, W.; Huang, D.; Lee, L.; Feng, J.; Morris, K.; Calugaru, E.; Burman, C.; Li, J.; Ma, C.-M.

    2007-06-01

    Currently the Gamma Knife system is accompanied by a treatment planning system, Leksell GammaPlan (LGP), which is the standard, computer-based treatment planning system for Gamma Knife radiosurgery. In LGP, the dose calculation algorithm does not consider the scatter dose contributions or the inhomogeneity effect due to the skull and air cavities. To improve the dose calculation accuracy, Monte Carlo simulations have been implemented for the Gamma Knife planning system. In this work, the 201 Cobalt-60 sources in the Gamma Knife unit are considered to have the same activity. Each Cobalt-60 source is contained in a cylindrical stainless steel capsule. The particle phase space information is stored in four beam data files, which are collected on the inner sides of the 4 treatment helmets after the Cobalt beam passes through the stationary and helmet collimators. Patient geometries are rebuilt from patient CT data. Twenty-two patients are included in the Monte Carlo simulation for this study. The dose is calculated using Monte Carlo in both homogeneous and inhomogeneous geometries with identical beam parameters. To investigate the attenuation effect of the skull bone, the dose in a 16 cm diameter spherical QA phantom is measured with and without a 1.5 mm lead covering and also simulated using Monte Carlo. The dose ratios with and without the 1.5 mm lead covering are 89.8% based on measurements and 89.2% according to Monte Carlo for an 18 mm collimator helmet. For patient geometries, the Monte Carlo results show that although the relative isodose lines remain almost the same with and without inhomogeneity corrections, the difference in the absolute dose is clinically significant. The average inhomogeneity correction is (3.9 ± 0.9)% for the 22 patients investigated. These results suggest that the inhomogeneity effect should be considered in the dose calculation for Gamma Knife treatment planning.

  11. Thermodynamic Properties of a Trapped Bose Gas:. a Diffusion Monte Carlo Study

    NASA Astrophysics Data System (ADS)

    Datta, S.

    We investigate the thermodynamic properties of a trapped Bose gas of Rb atoms interacting through a repulsive potential at low but finite temperature (kBT < μ < Tc) by a quantum Monte Carlo method based upon the generalization of the Feynman-Kac method [1-3], applicable to many-body systems at T=0, to finite temperatures. In this paper, we report the temperature variation of the condensation fraction, chemical potential, density profile, total energy of the system, release energy, frequency shifts and moment of inertia within a realistic potential model (Morse type), obtained for the first time by the diffusion Monte Carlo technique. The most remarkable success is in achieving the same trend in the temperature variation of the frequency shifts as was observed at JILA [4] for both the m=2 and m=0 modes. For other quantities, we agree with the work of Giorgini et al. [5], Pitaevskii et al. [6] and Krauth [7].

  12. Diffusion Monte Carlo Study of Para-Diiodobenzene Polymorphism Revisited.

    PubMed

    Hongo, Kenta; Watson, Mark A; Iitaka, Toshiaki; Aspuru-Guzik, Alán; Maezono, Ryo

    2015-03-10

    We revisit our investigation of the diffusion Monte Carlo (DMC) simulation of para-diiodobenzene (p-DIB) molecular crystal polymorphism. [See J. Phys. Chem. Lett. 2010, 1, 1789-1794.] We perform, for the first time, a rigorous study of finite-size effects and choice of nodal surface on the prediction of polymorph stability in molecular crystals using fixed-node DMC. Our calculations are the largest that are currently feasible using the resources of the K-computer and provide insights into the formidable challenge of predicting such properties from first principles. In particular, we show that finite-size effects can influence the trial nodal surface of a small (1 × 1 × 1) simulation cell considerably. Therefore, we repeated our DMC simulations with a 1 × 3 × 3 simulation cell, which is the largest such calculation to date. We used a density functional theory (DFT) nodal surface generated with the PBE functional, and we accumulated statistical samples with ∼6.4 × 10^5 core hours for each polymorph. Our final results predict a polymorph stability that is consistent with experiment, but they also indicate that the results in our previous paper were somewhat fortuitous. We analyze the finite-size errors using model periodic Coulomb (MPC) interactions and kinetic energy corrections, according to the CCMH scheme of Chiesa, Ceperley, Martin, and Holzmann. We investigate the dependence of the finite-size errors on different aspect ratios of the simulation cell (k-mesh convergence) in order to understand how to choose an appropriate ratio for the DMC calculations. Even in the most expensive simulations currently possible, we show that the finite-size errors in the DMC total energies are much larger than the energy difference between the two polymorphs, although error cancellation means that the polymorph prediction is accurate. Finally, we found that the T-move scheme is essential for these massive DMC simulations in order to circumvent population explosions and

  13. Parallel Monte Carlo Simulation for control system design

    NASA Technical Reports Server (NTRS)

    Schubert, Wolfgang M.

    1995-01-01

    The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.

  14. A review of best practices for Monte Carlo criticality calculations

    SciTech Connect

    Brown, Forrest B

    2009-01-01

    Monte Carlo methods have been used to compute k-eff and the fundamental mode eigenfunction of critical systems since the 1950s. While such calculations have become routine using standard codes such as MCNP and SCALE/KENO, there still remain 3 concerns that must be addressed to perform calculations correctly: convergence of k-eff and the fission distribution, bias in k-eff and tally results, and bias in statistics on tally results. This paper provides a review of the fundamental problems inherent in Monte Carlo criticality calculations. To provide guidance to practitioners, suggested best practices for avoiding these problems are discussed and illustrated by examples.

  15. Monte Carlo Simulations of Phosphate Polyhedron Connectivity in Glasses

    SciTech Connect

    ALAM,TODD M.

    1999-12-21

    Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest neighbor connectivities between phosphate polyhedra for random, alternating and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.

  16. PEPSI — a Monte Carlo generator for polarized leptoproduction

    NASA Astrophysics Data System (ADS)

    Mankiewicz, L.; Schäfer, A.; Veltri, M.

    1992-09-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section of the first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.

  17. A Monte Carlo method for combined segregation and linkage analysis

    SciTech Connect

    Guo, S.W. ); Thompson, E.A. )

    1992-11-01

    The authors introduce a Monte Carlo approach to combined segregation and linkage analysis of a quantitative trait observed in an extended pedigree. In conjunction with the Monte Carlo method of likelihood-ratio evaluation proposed by Thompson and Guo, the method provides for estimation and hypothesis testing. The greatest attraction of this approach is its ability to handle complex genetic models and large pedigrees. Two examples illustrate the practicality of the method. One is of simulated data on a large pedigree; the other is a reanalysis of published data previously analyzed by other methods. 40 refs, 5 figs., 5 tabs.

  18. Markov chain Monte Carlo linkage analysis of complex quantitative phenotypes.

    PubMed

    Hinrichs, A; Reich, T

    2001-01-01

    We report a Markov chain Monte Carlo analysis of the five simulated quantitative traits in Genetic Analysis Workshop 12 using the Loki software. Our objectives were to determine the efficacy of the Markov chain Monte Carlo method and to test a new scoring technique. Our initial blind analysis, on replicate 42 (the "best replicate") successfully detected four out of the five disease loci and found no false positives. A power analysis shows that the software could usually detect 4 of the 10 trait/gene combinations at an empirical point-wise p-value of 1.5 x 10(-4).

  19. Hybrid Monte Carlo/deterministic methods for radiation shielding problems

    NASA Astrophysics Data System (ADS)

    Becker, Troy L.

    For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods
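
    The weight-window mechanic shared by these methods can be sketched independently of how the window values are chosen (in the hybrid methods they come from a deterministic adjoint solution; the values below are placeholders).

    ```python
    import numpy as np

    # Minimal sketch of a weight window: particles above the window are split,
    # particles below it play Russian roulette, and particles inside it pass
    # unchanged. Both operations preserve the expected weight, so the mean tally
    # is unbiased while the particle population is steered toward the window.
    rng = np.random.default_rng(8)

    def apply_weight_window(weight, w_low, w_high, survival_weight):
        """Return a list of (possibly zero) particle weights after the window check."""
        if weight > w_high:                       # split into m copies
            m = int(np.ceil(weight / survival_weight))
            return [weight / m] * m
        if weight < w_low:                        # Russian roulette
            if rng.random() < weight / survival_weight:
                return [survival_weight]
            return []                             # particle killed
        return [weight]                           # inside the window: unchanged

    # quick check that the expected total weight is preserved
    total_in, total_out, n = 0.0, 0.0, 100_000
    for _ in range(n):
        w = rng.lognormal(mean=-1.0, sigma=1.5)
        total_in += w
        total_out += sum(apply_weight_window(w, w_low=0.25, w_high=1.0, survival_weight=0.5))
    print(f"mean weight in = {total_in / n:.4f}, out = {total_out / n:.4f}")
    ```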

  20. Parton distribution functions in Monte Carlo factorisation scheme

    NASA Astrophysics Data System (ADS)

    Jadach, S.; Płaczek, W.; Sapeta, S.; Siódmok, A.; Skrzypek, M.

    2016-12-01

    A next step in development of the KrkNLO method of including complete NLO QCD corrections to hard processes in a LO parton-shower Monte Carlo is presented. It consists of a generalisation of the method, previously used for the Drell-Yan process, to Higgs-boson production. This extension is accompanied with the complete description of parton distribution functions in a dedicated, Monte Carlo factorisation scheme, applicable to any process of production of one or more colour-neutral particles in hadron-hadron collisions.

  1. Towards Fast, Scalable Hard Particle Monte Carlo Simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Irrgang, M. Eric; Glaser, Jens; Harper, Eric S.; Engel, Michael; Glotzer, Sharon C.

    2014-03-01

    Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. We discuss the implementation of Monte Carlo for arbitrary hard shapes in HOOMD-blue, a GPU-accelerated particle simulation tool, to enable million particle simulations in a field where thousands is the norm. In this talk, we discuss our progress on basic parallel algorithms, optimizations that maximize GPU performance, and communication patterns for scaling to multiple GPUs. Research applications include colloidal assembly and other uses in materials design, biological aggregation, and operations research.

  2. Kinetic Monte Carlo method applied to nucleic acid hairpin folding.

    PubMed

    Sauerwine, Ben; Widom, Michael

    2011-12-01

    Kinetic Monte Carlo on coarse-grained systems, such as nucleic acid secondary structure, is advantageous for being able to access behavior at long time scales, even minutes or hours. Transition rates between coarse-grained states depend upon intermediate barriers, which are not directly simulated. We propose an Arrhenius rate model and an intermediate energy model that incorporates the effects of the barrier between simulated states without enlarging the state space itself. Applying our Arrhenius rate model to DNA hairpin folding, we demonstrate improved agreement with experiment compared to the usual kinetic Monte Carlo model. Further improvement results from including rigidity of single-stranded stacking.
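
    The Arrhenius-rate idea can be sketched on a coarse two-state hairpin (open vs. folded); the barrier heights and attempt frequency below are invented placeholders, not fitted values from this work.

    ```python
    import numpy as np

    # Hedged sketch of Arrhenius-rate kinetic Monte Carlo on a coarse two-state
    # system (open vs. folded hairpin): rates follow k = k0 * exp(-barrier / kT),
    # with the intermediate barrier absorbed into the rate rather than simulated.
    rng = np.random.default_rng(9)
    kT = 0.62                                   # kcal/mol at ~310 K
    k0 = 1.0e6                                  # attempt frequency [1/s] (placeholder)
    barrier_fold, barrier_unfold = 4.0, 7.0     # kcal/mol (placeholders)
    rates = {("open", "folded"): k0 * np.exp(-barrier_fold / kT),
             ("folded", "open"): k0 * np.exp(-barrier_unfold / kT)}

    state, t, t_end = "open", 0.0, 1.0          # simulate one second
    time_folded = 0.0
    while t < t_end:
        target = "folded" if state == "open" else "open"
        dt = rng.exponential(1.0 / rates[(state, target)])   # waiting time
        if state == "folded":
            time_folded += min(dt, t_end - t)
        t += dt
        state = target

    k_f, k_u = rates[("open", "folded")], rates[("folded", "open")]
    print(f"simulated folded fraction = {time_folded / t_end:.3f}, "
          f"detailed-balance prediction = {k_f / (k_f + k_u):.3f}")
    ```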

  3. Quantum Monte Carlo Benchmarks Functionals for Silica Polymorphs

    NASA Astrophysics Data System (ADS)

    Driver, K. P.; Wilkins, J. W.; Hennig, R. G.; Umrigar, C. J.; Scuseria, G.; Militzer, B.; Cohen, R. E.

    2007-03-01

    For many silica polytypes, the local density approximation (LDA) does a better job than the generalized gradient approximation (GGA) in predicting structural properties and bulk moduli. However, gradient corrections to the charge density are necessary for accurate phase energy differences. Functionals that go beyond GGA may improve the accuracy of both structures and energies. For example, a meta-GGA functional, TPSS, and hybrid functionals B3LYP and HSE have shown improvement in other systems. We compare results from these functionals for structural properties, energy differences, and bulk moduli for a few high pressure phases of silica, and benchmark the results with Quantum Monte Carlo (QMC). Preliminary QMC results indicate that careful wavefunction optimization and finite-size effects are of particular importance in obtaining accurate silica phase properties. Supported by DOE (DE-FG02-99ER45795), NSF (EAR-0530301, DMR-0205328), and Sandia National Laboratory. Computation at OSC and NERSC. [1] Th. Demuth et al., J. Phys.: Cond. Matter 11, 3833 (1999). [2] J. Heyd et al., J. Chem. Phys. 121, 1187 (2004). [3] E. R. Batista et al., Phys. Rev. B 74, 121102(R) (2006).

  4. Towards overcoming the Monte Carlo sign problem with tensor networks

    NASA Astrophysics Data System (ADS)

    Bañuls, Mari Carmen; Cichy, Krzysztof; Ignacio Cirac, J.; Jansen, Karl; Kühn, Stefan; Saito, Hana

    2017-03-01

    The study of lattice gauge theories with Monte Carlo simulations is hindered by the infamous sign problem that appears under certain circumstances, in particular at non-zero chemical potential. So far, there is no universal method to overcome this problem. However, recent years brought a new class of non-perturbative Hamiltonian techniques named tensor networks, where the sign problem is absent. In previous work, we have demonstrated that this approach, in particular matrix product states in 1+1 dimensions, can be used to perform precise calculations in a lattice gauge theory, the massless and massive Schwinger model. We have computed the mass spectrum of this theory, its thermal properties and real-time dynamics. In this work, we review these results and we extend our calculations to the case of two flavours and non-zero chemical potential. We are able to reliably reproduce known analytical results for this model, thus demonstrating that tensor networks can tackle the sign problem of a lattice gauge theory at finite density.

  5. Quantum Monte Carlo simulations for disordered Bose systems

    SciTech Connect

    Trivedi, N.

    1992-03-01

    Interacting bosons in a random potential can be used to model ³He adsorbed in porous media, universal aspects of the superconductor-insulator transition in disordered films, and vortices in disordered type II superconductors. We study a model of bosons on a 2D square lattice with a random potential of strength V and on-site repulsion U. We first describe the path integral Monte Carlo algorithm used to simulate this system. The 2D quantum problem (at T=0) gets mapped onto a classical problem of strings or directed polymers moving in 3D, with each string representing the world line of a boson. We discuss efficient ways of sampling the polymer configurations as well as the permutations between the bosons. We calculate the superfluid density and the excitation spectrum. Using these results we distinguish between a superfluid, a localized or "Bose glass" insulator with gapless excitations, and a Mott insulator with a finite gap to excitations (found only at commensurate densities). We discover novel effects arising from the interplay between V and U and present preliminary results for the phase diagram at incommensurate and commensurate densities.

  6. Quantum Monte Carlo simulations for disordered Bose systems

    SciTech Connect

    Trivedi, N.

    1992-03-01

    Interacting bosons in a random potential can be used to model ³He adsorbed in porous media, universal aspects of the superconductor-insulator transition in disordered films, and vortices in disordered type II superconductors. We study a model of bosons on a 2D square lattice with a random potential of strength V and on-site repulsion U. We first describe the path integral Monte Carlo algorithm used to simulate this system. The 2D quantum problem (at T=0) gets mapped onto a classical problem of strings or directed polymers moving in 3D with each string representing the world line of a boson. We discuss efficient ways of sampling the polymer configurations as well as the permutations between the bosons. We calculate the superfluid density and the excitation spectrum. Using these results we distinguish between a superfluid, a localized or "Bose glass" insulator with gapless excitations and a Mott insulator with a finite gap to excitations (found only at commensurate densities). We discover novel effects arising from the interplay between V and U and present preliminary results for the phase diagram at incommensurate and commensurate densities.

  7. Quantum Monte Carlo applied to Solids under Pressure

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Mattsson, T. R.

    2012-02-01

    Diffusion quantum Monte Carlo (DMC) has been applied to solids under pressure in several different contexts with a high degree of success [J. Kolorenc and L. Mitas, Rep. Prog. Phys. 74, 026502 (2011)]. All of these calculations must address three errors present in DMC calculations of solids: the fixed node approximation, the pseudopotential approximation and the finite size approximation. Due to the varying approximations used to address these errors, these calculations suffer from an uncertainty that is almost comparable to that introduced by the choice of functional in density functional theory (DFT). In this presentation, we present lattice constants and bulk moduli of more than fifteen solids under compression performed with a consistent approach to these three approximations. These results help establish the general accuracy that may be expected from DMC calculations of solids under pressure and also provide a reference from which improvements to DMC methods may be judged. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.

  8. Monte Carlo Method for Solving the Fredholm Integral Equations of the Second Kind

    NASA Astrophysics Data System (ADS)

    ZhiMin, Hong; ZaiZai, Yan; JianRui, Chen

    2012-12-01

    This article is concerned with a numerical algorithm for computing approximate solutions of Fredholm integral equations of the second kind by random sampling. We use Simpson's rule to discretize the integral equation, which yields a linear system. The Monte Carlo method, based on the simulation of a finite discrete Markov chain, is employed to solve this linear system. Numerical examples are used to show the efficiency of the method. The results obtained indicate that the present method is an effective alternative.
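    As a rough illustration of the approach described above (quadrature discretization followed by a Markov-chain solution of the linear system), the sketch below implements the classic von Neumann-Ulam random-walk estimator for a system x = Hx + f; the matrix, right-hand side, and walk parameters are invented for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_solve_component(H, f, i, n_walks=20000, p_stop=0.3):
    """Estimate component i of the solution of x = H x + f by random walks.

    Each walk scores weight * f[state] along its path (collision estimator);
    transitions are uniform over indices, and the stopping probability is
    absorbed into the weight so the estimator stays unbiased.
    """
    n = len(f)
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, f[i]
        while rng.random() > p_stop:
            nxt = rng.integers(n)
            weight *= H[state, nxt] * n / (1.0 - p_stop)
            score += weight * f[nxt]
            state = nxt
        total += score
    return total / n_walks

# Small test system with a contractive H (illustrative numbers only).
n = 5
H = 0.1 * rng.random((n, n))
f = rng.random(n)
exact = np.linalg.solve(np.eye(n) - H, f)
print("MC estimate :", mc_solve_component(H, f, 0))
print("direct solve:", exact[0])
```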

  9. Monte Carlo Study of the Xy-Model on SIERPIŃSKI Carpet

    NASA Astrophysics Data System (ADS)

    Mitrović, Božidar; Przedborski, Michelle A.

    2014-09-01

    We have performed a Monte Carlo (MC) study of the classical XY-model on a Sierpiński carpet, which is a planar fractal structure with infinite order of ramification and fractal dimension 1.8928. We employed the Wolff cluster algorithm in our simulations, and our results, in particular those for the susceptibility and the helicity modulus, indicate the absence of a finite-temperature Berezinskii-Kosterlitz-Thouless (BKT) transition in this system.

  10. Single-cluster-update Monte Carlo method for the random anisotropy model

    NASA Astrophysics Data System (ADS)

    Rößler, U. K.

    1999-06-01

    A Wolff-type cluster Monte Carlo algorithm for random magnetic models is presented. The algorithm is demonstrated to significantly reduce the critical slowing down for planar random anisotropy models with weak anisotropy strength. Dynamic exponents z ≲ 1.0, on a par with the best cluster algorithms, are estimated for models with a ratio of anisotropy to exchange constant D/J = 1.0 on cubic lattices in three dimensions. For these models, critical exponents are derived from a finite-size scaling analysis.

  11. Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave

    NASA Astrophysics Data System (ADS)

    Yasuda, Shugo

    2017-02-01

    A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method that calculates the macroscopic transport of the chemical cues in the environment. The simulation method can successfully reproduce the traveling population wave of bacteria that was observed experimentally and reveals the microscopic dynamics of the bacteria coupled with the macroscopic transport of the chemical cues and the bacterial population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, defined as the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic behaviors for small Knudsen numbers is numerically verified.
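    A heavily simplified sketch of the run-and-tumble Monte Carlo step described above: one-dimensional bacteria whose tumble rate is biased by a fixed, externally imposed chemoattractant gradient rather than a field computed self-consistently with a finite volume solver. All parameter values and the form of the bias are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters for a 1D run-and-tumble walk.
speed = 20.0        # run speed (um/s)
dt = 0.05           # time step (s)
base_rate = 1.0     # tumble rate with no chemical signal (1/s)
chi = 0.5           # strength of the chemotactic bias
grad = 0.01         # fixed chemoattractant gradient dS/dx (illustrative)

def simulate(n_bacteria=2000, n_steps=4000):
    x = np.zeros(n_bacteria)                        # positions
    direction = rng.choice([-1.0, 1.0], n_bacteria) # run directions
    for _ in range(n_steps):
        # Tumble rate is lowered when moving up the gradient (biased walk).
        rate = base_rate * (1.0 - chi * np.tanh(direction * grad * speed))
        tumble = rng.random(n_bacteria) < rate * dt
        direction = np.where(tumble, rng.choice([-1.0, 1.0], n_bacteria), direction)
        x += direction * speed * dt
    return x

positions = simulate()
print("mean drift along the gradient:", positions.mean())
```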

  12. Monte Carlo Analysis of Quantum Transport and Fluctuations in Semiconductors.

    DTIC Science & Technology

    1986-02-18

    The present report contains technical matter related to the research performed on two different subjects. The first part concerns quantum ... methods to quantum transport within the Liouville formulation. The second part concerns fluctuations of carrier velocities and energies both in ... interactions) on the transport properties. Keywords: Monte Carlo; Charge Transport; Quantum Transport; Fluctuations; Semiconductor Physics; Master Equation.

  13. Monte Carlo simulation by computer for life-cycle costing

    NASA Technical Reports Server (NTRS)

    Gralow, F. H.; Larson, W. J.

    1969-01-01

    Prediction of behavior and support requirements during the entire life cycle of a system enables accurate cost estimates when Monte Carlo simulation by computer is used. The approach reduces the ultimate cost to the procuring agency because it takes into consideration the costs of initial procurement, operation, and maintenance.

  14. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    EPA Science Inventory

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  15. The Metropolis Monte Carlo Method in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.

    2003-11-01

    A brief overview is given of some of the advances in statistical physics that have been made using the Metropolis Monte Carlo method. By complementing theory and experiment, these have increased our understanding of phase transitions and other phenomena in condensed matter systems. A brief description of a new method, commonly known as "Wang-Landau sampling," will also be presented.
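    For readers new to the method surveyed above, a minimal Metropolis Monte Carlo sketch for the two-dimensional Ising model; the lattice size, temperature, and sweep count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis_ising(L=16, T=2.27, sweeps=400):
    """Single-spin-flip Metropolis sampling of the 2D Ising model (J = 1)."""
    spins = rng.choice([-1, 1], size=(L, L))
    beta = 1.0 / T
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            # Energy change for flipping spin (i, j) with periodic boundaries.
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nn
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
    return spins

spins = metropolis_ising()
print("magnetization per spin:", spins.mean())
```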

  16. Quantum Monte Carlo simulation of topological phase transitions

    NASA Astrophysics Data System (ADS)

    Yamamoto, Arata; Kimura, Taro

    2016-12-01

    We study the electron-electron interaction effects on topological phase transitions by ab initio quantum Monte Carlo simulation. We analyze two-dimensional class A topological insulators and three-dimensional Weyl semimetals with the long-range Coulomb interaction. The direct computation of the Chern number shows that the electron-electron interaction modifies or extinguishes topological phase transitions.

  17. The Use of Monte Carlo Techniques to Teach Probability.

    ERIC Educational Resources Information Center

    Newell, G. J.; MacFarlane, J. D.

    1985-01-01

    Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…

  18. Error estimations and their biases in Monte Carlo eigenvalue calculations

    SciTech Connect

    Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki

    1997-01-01

    In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of multiplication factors from cycles, which are called the cycle k_eff values. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where real refers to a true value that is calculated from independently repeated Monte Carlo runs and apparent refers to the expected value of estimates from a single Monte Carlo run. Next, iterative methods based on the foregoing relations are proposed to estimate the standard deviation of the eigenvalue. The methods work well for the cases in which the ratios of the real to apparent values of variances are between 1.4 and 3.1. Even in the case where the foregoing ratio is >5, more than 70% of the standard deviation estimates fall within 40% of the true value.
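    The gap between apparent and real variance can be demonstrated with synthetic, autocorrelated cycle values: the naive standard error from a single run underestimates the true spread of run means when cycles are correlated. The AR(1) process below is only a stand-in for intercycle correlation in a real eigenvalue calculation.

```python
import numpy as np

rng = np.random.default_rng(3)

def correlated_cycles(n_cycles=500, rho=0.8):
    """Generate AR(1)-correlated 'cycle k_eff' values fluctuating around 1.0."""
    x = np.empty(n_cycles)
    x[0] = rng.normal()
    for k in range(1, n_cycles):
        x[k] = rho * x[k - 1] + np.sqrt(1 - rho**2) * rng.normal()
    return 1.0 + 0.001 * x

n_runs = 400
means, apparent_se = [], []
for _ in range(n_runs):
    c = correlated_cycles()
    means.append(c.mean())
    apparent_se.append(c.std(ddof=1) / np.sqrt(len(c)))  # ignores correlation

real_se = np.std(means, ddof=1)          # spread of independent run means
print("apparent SE (single run):", np.mean(apparent_se))
print("real SE (across runs):   ", real_se)
```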

  19. Diffuse photon density wave measurements and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kuzmin, Vladimir L.; Neidrauer, Michael T.; Diaz, David; Zubkov, Leonid A.

    2015-10-01

    Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe-Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source-detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal-to-noise ratio is larger for the considered algorithm than for the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and diffusion approximation were in very good agreement with the experimental data for a wide range of source-detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.

  20. Calculating coherent pair production with Monte Carlo methods

    SciTech Connect

    Bottcher, C.; Strayer, M.R.

    1989-01-01

    We discuss calculations of the coherent electromagnetic pair production in ultra-relativistic hadron collisions. This type of production, in lowest order, is obtained from three diagrams which contain two virtual photons. We discuss simple Monte Carlo methods for evaluating these classes of diagrams without recourse to involved algebraic reduction schemes. 19 refs., 11 figs.

  1. A Monte Carlo simulation of a supersaturated sodium chloride solution

    NASA Astrophysics Data System (ADS)

    Schwendinger, Michael G.; Rode, Bernd M.

    1989-03-01

    A simulation of a supersaturated sodium chloride solution with the Monte Carlo statistical thermodynamic method is reported. The water-water interactions are described by the Matsuoka-Clementi-Yoshimine (MCY) potential, while the ion-water potentials have been derived from ab initio calculations. Structural features of the solution have been evaluated, special interest being focused on possible precursors of nucleation.

  2. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  3. Monte Carlo capabilities of the SCALE code system

    SciTech Connect

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; Marshall, William J.

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  4. CMS Monte Carlo production operations in a distributed computing environment

    SciTech Connect

    Mohapatra, A.; Lazaridis, C.; Hernandez, J.M.; Caballero, J.; Hof, C.; Kalinin, S.; Flossdorf, A.; Abbrescia, M.; De Filippis, N.; Donvito, G.; Maggi, G.; /Bari U. /INFN, Bari /INFN, Pisa /Vrije U., Brussels /Brussels U. /Imperial Coll., London /CERN /Princeton U. /Fermilab

    2008-01-01

    Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.

  5. Bayesian internal dosimetry calculations using Markov Chain Monte Carlo.

    PubMed

    Miller, G; Martz, H F; Little, T T; Guilmette, R

    2002-01-01

    A new numerical method for solving the inverse problem of internal dosimetry is described. The new method uses Markov Chain Monte Carlo and the Metropolis algorithm. Multiple intake amounts, biokinetic types, and times of intake are determined from bioassay data by integrating over the Bayesian posterior distribution. The method appears definitive, but its application requires a large amount of computing time.
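    A bare-bones sketch of a Metropolis sampler for such a Bayesian fit, here reduced to a single intake amount with an invented exponential retention function, synthetic bioassay data, and a lognormal prior; none of these modeling choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical model: bioassay measurement m(t) = A * exp(-lam * t) + noise.
lam, sigma = 0.05, 0.2
t_obs = np.array([1.0, 5.0, 10.0, 30.0])
true_A = 3.0
y_obs = true_A * np.exp(-lam * t_obs) + rng.normal(0, sigma, t_obs.size)

def log_posterior(A):
    if A <= 0:
        return -np.inf
    resid = y_obs - A * np.exp(-lam * t_obs)
    log_like = -0.5 * np.sum((resid / sigma) ** 2)
    log_prior = -0.5 * np.log(A) ** 2 - np.log(A)   # lognormal(0, 1) prior
    return log_like + log_prior

samples, A = [], 1.0
for step in range(20000):
    proposal = A + rng.normal(0, 0.3)               # symmetric random-walk proposal
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(A):
        A = proposal
    samples.append(A)

post = np.array(samples[5000:])                     # discard burn-in
print("posterior mean intake:", post.mean(), "+/-", post.std())
```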

  6. Play It Again: Teaching Statistics with Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Sigal, Matthew J.; Chalmers, R. Philip

    2016-01-01

    Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…

  7. Automated Monte Carlo Simulation of Proton Therapy Treatment Plans.

    PubMed

    Verburg, Joost Mathijs; Grassberger, Clemens; Dowdell, Stephen; Schuemann, Jan; Seco, Joao; Paganetti, Harald

    2016-12-01

    Simulations of clinical proton radiotherapy treatment plans using general purpose Monte Carlo codes have been proven to be a valuable tool for basic research and clinical studies. They have been used to benchmark dose calculation methods, to study radiobiological effects, and to develop new technologies such as in vivo range verification methods. Advancements in the availability of computational power have made it feasible to perform such simulations on large sets of patient data, resulting in a need for automated and consistent simulations. A framework called MCAUTO was developed for this purpose. Both passive scattering and pencil beam scanning delivery are supported. The code handles the data exchange between the treatment planning system and the Monte Carlo system, which requires not only transfer of plan and imaging information but also translation of institutional procedures, such as output factor definitions. Simulations are performed on a high-performance computing infrastructure. The simulation methods were designed to use the full capabilities of Monte Carlo physics models, while also ensuring consistency in the approximations that are common to both pencil beam and Monte Carlo dose calculations. Although some methods need to be tailored to institutional planning systems and procedures, the described procedures show a general road map that can be easily translated to other systems.

  8. Monte Carlo method for magnetic impurities in metals

    NASA Technical Reports Server (NTRS)

    Hirsch, J. E.; Fye, R. M.

    1986-01-01

    The paper discusses a Monte Carlo algorithm to study properties of dilute magnetic alloys; the method can treat a small number of magnetic impurities interacting with the conduction electrons in a metal. Results for the susceptibility of a single Anderson impurity in the symmetric case show the expected universal behavior at low temperatures. Some results for two Anderson impurities are also discussed.

  9. An Overview of the Monte Carlo Methods, Codes, & Applications Group

    SciTech Connect

    Trahan, Travis John

    2016-08-30

    This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.

  10. Parallel Monte Carlo simulation of multilattice thin film growth

    NASA Astrophysics Data System (ADS)

    Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen

    2001-07-01

    This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message passing library. This parallel algorithm is based on domain decomposition with overlapping and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, with resulting computational demands being comparable to those in the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping techniques are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient with a large number of processors. The algorithm was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth on either a distributed memory parallel computer or a shared memory machine with message passing libraries. The significant communication time in parallel MC simulation of thin film growth is effectively reduced by adopting domain decomposition with overlapping between sub-domains and asynchronous communication among processors. The communication overhead does not increase noticeably, and the speedup shows an ascending tendency as the number of processors increases. A near linear increase in computing speed was achieved as the number of processors increased, and there is no theoretical limit on the number of processors to be used. The techniques developed in this work are also suitable for implementing the Monte Carlo code on other parallel systems.

  11. Monte Carlo methods for multidimensional integration for European option pricing

    NASA Astrophysics Data System (ADS)

    Todorov, V.; Dimov, I. T.

    2016-10-01

    In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, that of Sobol, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is of interest to know which of them performs best for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
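    A compact comparison of crude Monte Carlo with a Sobol quasi-Monte Carlo estimate for a European call under the Black-Scholes model, assuming SciPy's scipy.stats.qmc module is available; the option parameters are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 2**14

def payoff_from_normals(z):
    """Discounted call payoff from standard normal draws under Black-Scholes."""
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

# Crude Monte Carlo: independent pseudo-random normals.
rng = np.random.default_rng(5)
crude = payoff_from_normals(rng.standard_normal(n)).mean()

# Quasi-Monte Carlo: Sobol points mapped to normals via the inverse CDF.
sobol = qmc.Sobol(d=1, scramble=True, seed=5)
u = sobol.random(n).ravel()
quasi = payoff_from_normals(norm.ppf(u)).mean()

# Black-Scholes closed form for reference.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
exact = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print("crude MC:", crude, " Sobol QMC:", quasi, " exact:", exact)
```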

  12. Monte Carlo study of the atmospheric spread function

    NASA Technical Reports Server (NTRS)

    Pearce, W. A.

    1986-01-01

    Monte Carlo radiative transfer simulations are used to study the atmospheric spread function appropriate to satellite-based sensing of the earth's surface. The parameters which are explored include the nadir angle of view, the size distribution of the atmospheric aerosol, and the aerosol vertical profile.

  13. Diffuse photon density wave measurements and Monte Carlo simulations.

    PubMed

    Kuzmin, Vladimir L; Neidrauer, Michael T; Diaz, David; Zubkov, Leonid A

    2015-10-01

    Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe–Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source–detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal-to-noise ratio is larger for the considered algorithm than for the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and diffusion approximation were in very good agreement with the experimental data for a wide range of source–detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.

  14. Monte Carlo Approach for Reliability Estimations in Generalizability Studies.

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…

  15. Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Silver, N. Clayton; Hittner, James B.; May, Kim

    2004-01-01

    The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…

  16. Exploring Mass Perception with Markov Chain Monte Carlo

    ERIC Educational Resources Information Center

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…

  17. Monte Carlo Estimation of the Electric Field in Stellarators

    NASA Astrophysics Data System (ADS)

    Bauer, F.; Betancourt, O.; Garabedian, P.; Ng, K. C.

    1986-10-01

    The BETA computer codes have been developed to study ideal magnetohydrodynamic equilibrium and stability of stellarators and to calculate neoclassical transport for electrons as well as ions by the Monte Carlo method. In this paper a numerical procedure is presented to select resonant terms in the electric potential so that the distribution functions and confinement times of the ions and electrons become indistinguishable.

  18. Impact of random numbers on parallel Monte Carlo application

    SciTech Connect

    Pandey, Ras B.

    2002-10-22

    A number of graduate students are involved at various levels of research in this project. We investigate the basic issues in materials using Monte Carlo simulations, with specific interest in heterogeneous materials. Attempts have been made to seek collaborations with the DOE laboratories. Specific details are given.

  19. Improved geometry representations for Monte Carlo radiation transport.

    SciTech Connect

    Martin, Matthew Ryan

    2004-08-01

    ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

  20. SABRINA: an interactive solid geometry modeling program for Monte Carlo

    SciTech Connect

    West, J.T.

    1985-01-01

    SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body geometry or surface geometry models and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis.

  1. A Monte Carlo photocurrent/photoemission computer program

    NASA Technical Reports Server (NTRS)

    Chadsey, W. L.; Ragona, C.

    1972-01-01

    A Monte Carlo computer program was developed for the computation of photocurrents and photoemission in gamma (X-ray)-irradiated materials. The program was used for computation of radiation-induced surface currents on space vehicles and the computation of radiation-induced space charge environments within space vehicles.

  2. On a full Monte Carlo approach to quantum mechanics

    NASA Astrophysics Data System (ADS)

    Sellier, J. M.; Dimov, I.

    2016-12-01

    The Monte Carlo approach to numerical problems has been shown to be remarkably efficient in performing very large computational tasks since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods are well known to keep performance and accuracy with the increase of dimensionality of a given problem, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these very peculiar and desirable computational features, in this work we depict a full Monte Carlo approach to the problem of simulating single- and many-body quantum systems by means of signed particles. In particular we introduce a stochastic technique, based on the strategy known as importance sampling, for the computation of the Wigner kernel which, so far, has represented the main bottleneck of this method (it is equivalent to the calculation of a multi-dimensional integral, a problem in which complexity is known to grow exponentially with the dimensions of the problem). The benefit of introducing this stochastic technique for the kernel is twofold: firstly it reduces the complexity of a quantum many-body simulation from non-linear to linear, secondly it introduces an embarrassingly parallel approach to this very demanding problem. To conclude, we perform concise but indicative numerical experiments which clearly illustrate how a full Monte Carlo approach to many-body quantum systems is not only possible but also advantageous. This paves the way towards practical time-dependent, first-principle simulations of relatively large quantum systems by means of affordable computational resources.
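    The importance-sampling strategy mentioned above can be illustrated on a generic multidimensional integral: draw samples from a density that mimics the integrand's magnitude and reweight. The toy integrand and sampling density below are invented for the illustration and are not the Wigner kernel.

```python
import numpy as np

rng = np.random.default_rng(6)
d = 10                                   # dimensionality of the toy integral

def integrand(x):
    # A sharply peaked function of d variables (illustrative only).
    return np.exp(-np.sum(x**2, axis=1)) * np.cos(x.sum(axis=1))

# Importance sampling: draw from a Gaussian matched to the peak,
# p(x) = prod_i N(0, 1/2), and reweight by integrand / density.
n = 200000
x = rng.normal(0.0, np.sqrt(0.5), size=(n, d))
density = (1.0 / np.pi) ** (d / 2) * np.exp(-np.sum(x**2, axis=1))
weights = integrand(x) / density
estimate = weights.mean()
stderr = weights.std(ddof=1) / np.sqrt(n)
print("importance-sampling estimate:", estimate, "+/-", stderr)
```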

  3. Dark Photon Monte Carlo at SeaQuest

    NASA Astrophysics Data System (ADS)

    Hicks, Caleb; SeaQuest/E906 Collaboration

    2016-09-01

    Fermi National Laboratory's E906/SeaQuest is an experiment primarily designed to study the ratio of anti-down to anti-up quarks in the nucleon quark sea as a function of Bjorken x. SeaQuest's measurement is obtained by measuring the muon pairs produced by the Drell-Yan process. The experiment can also search for muon pair vertices past the target and beam dump, which would be a signature of Dark Photon decay. It is therefore necessary to run Monte Carlo simulations to determine how a changed Z vertex affects the detection and distribution of muon pairs using SeaQuest's detectors. SeaQuest has an existing Monte Carlo program that has been used for simulations of the Drell-Yan process as well as J/psi decay and other processes. The Monte Carlo program was modified to use a fixed Z vertex when generating muon pairs. Events were then generated with varying Z vertices and the resulting simulations were then analyzed. This work focuses on the results of the Monte Carlo simulations and the effects on Dark Photon detection. This research was supported by US DOE MENP Grant DE-FG02-03ER41243.

  4. Parallel canonical Monte Carlo simulations through sequential updating of particles

    NASA Astrophysics Data System (ADS)

    O'Keeffe, C. J.; Orkoulas, G.

    2009-04-01

    In canonical Monte Carlo simulations, sequential updating of particles is equivalent to random updating due to particle indistinguishability. In contrast, in grand canonical Monte Carlo simulations, sequential implementation of the particle transfer steps in a dense grid of distinct points in space improves both the serial and the parallel efficiency of the simulation. The main advantage of sequential updating in parallel canonical Monte Carlo simulations is the reduction in interprocessor communication, which is usually a slow process. In this work, we propose a parallelization method for canonical Monte Carlo simulations via domain decomposition techniques and sequential updating of particles. Each domain is further divided into a middle and two outer sections. Information exchange is required after the completion of the updating of the outer regions. During the updating of the middle section, communication does not occur unless a particle moves out of this section. Results on two- and three-dimensional Lennard-Jones fluids indicate a nearly perfect improvement in parallel efficiency for large systems.

  5. Parallel canonical Monte Carlo simulations through sequential updating of particles.

    PubMed

    O'Keeffe, C J; Orkoulas, G

    2009-04-07

    In canonical Monte Carlo simulations, sequential updating of particles is equivalent to random updating due to particle indistinguishability. In contrast, in grand canonical Monte Carlo simulations, sequential implementation of the particle transfer steps in a dense grid of distinct points in space improves both the serial and the parallel efficiency of the simulation. The main advantage of sequential updating in parallel canonical Monte Carlo simulations is the reduction in interprocessor communication, which is usually a slow process. In this work, we propose a parallelization method for canonical Monte Carlo simulations via domain decomposition techniques and sequential updating of particles. Each domain is further divided into a middle and two outer sections. Information exchange is required after the completion of the updating of the outer regions. During the updating of the middle section, communication does not occur unless a particle moves out of this section. Results on two- and three-dimensional Lennard-Jones fluids indicate a nearly perfect improvement in parallel efficiency for large systems.

  6. A Variational Monte Carlo Approach to Atomic Structure

    ERIC Educational Resources Information Center

    Davis, Stephen L.

    2007-01-01

    The practicality and usefulness of variational Monte Carlo calculations for atomic structure are demonstrated. The method is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, l-dependence of the orbital energies, singlet-triplet energy splitting, and ionization energy trends in atomic structure theory.
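    As a concrete example of the technique, a minimal variational Monte Carlo sketch for the hydrogen atom with trial wavefunction psi(r) = exp(-alpha*r), whose local energy in atomic units is E_L = -alpha^2/2 + (alpha - 1)/r; the step size and sample counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def vmc_energy(alpha, n_steps=100000, step=0.5):
    """Variational MC energy for psi(r) = exp(-alpha*r) via Metropolis sampling."""
    r_vec = np.array([1.0, 0.0, 0.0])
    energies = []
    for i in range(n_steps):
        trial = r_vec + rng.uniform(-step, step, 3)
        r_old, r_new = np.linalg.norm(r_vec), np.linalg.norm(trial)
        # Accept with probability |psi(new)/psi(old)|^2.
        if rng.random() < np.exp(-2.0 * alpha * (r_new - r_old)):
            r_vec, r_old = trial, r_new
        if i > n_steps // 10:                       # crude equilibration cut
            energies.append(-0.5 * alpha**2 + (alpha - 1.0) / r_old)
    return np.mean(energies)

for alpha in (0.8, 1.0, 1.2):
    print(f"alpha = {alpha}: E ~ {vmc_energy(alpha):.4f}")   # exact -0.5 at alpha = 1
```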

  7. Determining MTF of digital detector system with Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Jeong, Eun Seon; Lee, Hyung Won; Nam, Sang Hee

    2005-04-01

    We have designed a detector based on a-Se (amorphous selenium) and simulated the detector with the Monte Carlo method. We will apply the cascaded linear system theory to determine the MTF for the whole detector system. For direct comparison with experiment, we have simulated a 139 um pixel pitch and used a simulated X-ray tube spectrum.

  8. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  9. Monte Carlo Capabilities of the SCALE Code System

    NASA Astrophysics Data System (ADS)

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.

    2014-06-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  10. Harnessing graphical structure in Markov chain Monte Carlo learning

    SciTech Connect

    Stolorz, P.E.; Chew P.C.

    1996-12-31

    The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many datamining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focusses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.

  11. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  12. Incorporation of Monte-Carlo Computer Techniques into Science and Mathematics Education.

    ERIC Educational Resources Information Center

    Danesh, Iraj

    1987-01-01

    Described is a Monte-Carlo method for modeling physical systems with a computer. Also discussed are ways to incorporate Monte-Carlo simulation techniques for introductory science and mathematics teaching and also for enriching computer and simulation courses. (RH)

  13. Fast Monte Carlo for radiation therapy: the PEREGRINE Project

    SciTech Connect

    Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

    1997-11-11

    The purpose of the PEREGRINE program is to bring high-speed, high- accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three- dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently-used algorithms reveal significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.

  14. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that the comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization to the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improve the stability of MTS and allow larger step sizes to be used in the simulation of complex systems.

  15. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the

  16. A simple new way to help speed up Monte Carlo convergence rates: Energy-scaled displacement Monte Carlo

    NASA Astrophysics Data System (ADS)

    Goldman, Saul

    1983-10-01

    A method we call energy-scaled displacement Monte Carlo (ESDMC), whose purpose is to improve sampling efficiency and thereby speed up convergence rates in Monte Carlo calculations, is presented. The method involves scaling the maximum displacement a particle may make on a trial move to the particle's configurational energy. The scaling is such that on the average, the most stable particles make the smallest moves and the most energetic particles the largest moves. The method is compared to Metropolis Monte Carlo (MMC) and Force Bias Monte Carlo (FBMC) by applying all three methods to a dense Lennard-Jones fluid at two temperatures, and to hot ST2 water. The functions monitored as the Markov chains developed were, for the Lennard-Jones case: melting, radial distribution functions, internal energies, and heat capacities. For hot ST2 water, we monitored energies and heat capacities. The results suggest that ESDMC samples configuration space more efficiently than either MMC or FBMC in these systems for the biasing parameters used here. The benefit from using ESDMC seemed greatest for the Lennard-Jones systems.

  17. A Preliminary Study of In-House Monte Carlo Simulations: An Integrated Monte Carlo Verification System

    SciTech Connect

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  18. Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation

    SciTech Connect

    Nilmeier, J. P.; Crooks, G. E.; Minh, D. D. L.; Chodera, J. D.

    2011-10-24

    Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.

  19. MONITOR- MONTE CARLO INVESTIGATION OF TRAJECTORY OPERATIONS AND REQUIREMENTS

    NASA Technical Reports Server (NTRS)

    Glass, A. B.

    1994-01-01

    The Monte Carlo Investigation of Trajectory Operations and Requirements (MONITOR) program was developed to perform spacecraft mission maneuver simulations for geosynchronous, single maneuver, and comet encounter type trajectories. MONITOR is a multifaceted program which enables the modeling of various orbital sequences and missions, the generation of Monte Carlo simulation statistics, and the parametric scanning of user requested variables over specified intervals. The MONITOR program has been used primarily to study geosynchronous missions and has the capability to model Space Shuttle deployed satellite trajectories. The ability to perform a Monte Carlo error analysis of user specified orbital parameters using predicted maneuver execution errors can make MONITOR a significant part of any mission planning and analysis system. The MONITOR program can be executed in four operational modes. In the first mode, analytic state covariance matrix propagation is performed using state transition matrices for the coasting and powered burn phases of the trajectory. A two-body central force field is assumed throughout the analysis. Histograms of the final orbital elements and other state dependent variables may be evaluated by a Monte Carlo analysis. In the second mode, geosynchronous missions can be simulated from parking orbit injection through station acquisition. A two-body central force field is assumed throughout the simulation. Nominal mission studies can be conducted; however, the main use of this mode lies in evaluating the behavior of pertinent orbital trajectory parameters by making use of a Monte Carlo analysis. In the third mode, MONITOR performs parametric scans of user-requested variables for a nominal mission. Various orbital sequences may be specified; however, primary use is devoted to geosynchronous missions. A maximum of five variables may be scanned at a time. The fourth mode simulates a mission from orbit injection through comet encounter with optional

  20. Kernel density estimator methods for Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Banerjee, Kaushik

    In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely point detector and surface crossing flux tallies. Finally, KDE is also applied to accelerate the Monte Carlo fission source iteration for criticality problems. In the conventional MC calculation histograms are used to represent global tallies which divide the phase space into multiple bins. Partitioning the phase space into bins can add significant overhead to the MC simulation and the histogram provides only a first order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies in any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher order tally information for an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated and they are shown to be superior to conventional histograms as well as the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies but they suffer from an unbounded variance. As a result, the central limit theorem can not be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach by taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that
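    As a generic illustration of replacing a histogram tally with a kernel density estimate, the sketch below uses SciPy's gaussian_kde on synthetic one-dimensional samples standing in for Monte Carlo tally scores; it is not drawn from the dissertation's code.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(8)

# Synthetic 1D "collision site" samples standing in for Monte Carlo tally scores.
samples = np.concatenate([rng.normal(2.0, 0.5, 4000),
                          rng.exponential(1.0, 6000)])

# Histogram tally: piecewise constant and tied to a fixed bin structure.
hist, edges = np.histogram(samples, bins=np.linspace(0, 6, 31), density=True)

# KDE tally: a smooth estimate that can be evaluated anywhere afterwards.
kde = gaussian_kde(samples)
print("KDE density at x = 2.0:", kde(np.array([2.0]))[0])
print("histogram bin containing x = 2.0:",
      hist[np.searchsorted(edges, 2.0, side="right") - 1])
```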

  1. Monte Carlo simulation of particle acceleration at astrophysical shocks

    NASA Technical Reports Server (NTRS)

    Campbell, Roy K.

    1989-01-01

    A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence, and the particle and shock parameters. High energy cosmic ray spectra derived from Monte Carlo simulations show the observed power law behavior, just as the spectra derived from analytic calculations based on a diffusion equation do. This high energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment. Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool. This was the triumph of the Monte Carlo approach. The question of acceleration efficiency is an important one in the shock acceleration game. Whether shock waves are efficient enough to account for the observed flux of high energy galactic cosmic rays was examined. The efficiency of the acceleration process depends on the thermal particle pick-up and hence the low energy scattering in detail. One of the goals is the self-consistent derivation of the accelerated particle spectra and the MHD turbulence spectra. Presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles and the needed downstream turbulence is convected from the upstream region. The present code is to be modified to include a better

  2. Monte Carlo Calculation of the Linear Resistance of a Three Dimensional Lattice superconductor Model in the London Limit

    SciTech Connect

    Weber, H.; Jensen, H.J.

    1997-03-01

    We have studied the linear resistance of a three-dimensional lattice superconductor model in the London limit by Monte Carlo simulation of the vortex loop dynamics. We find excellent finite-size scaling at the phase transition. We determine the dynamical exponent z=1.51 for the isotropic London lattice model.

  3. A Monte-Carlo study for the critical exponents of the three-dimensional O(6) model

    NASA Astrophysics Data System (ADS)

    Loison, D.

    1999-09-01

    Using Wolff's single-cluster Monte-Carlo update algorithm, the three-dimensional O(6) Heisenberg model on a simple cubic lattice is simulated. With the help of finite-size scaling we compute the critical exponents ν, β, γ and η. Our results agree with the field-theory predictions but less well with the predictions of the series expansions.

  4. Monte Carlo simulations of an Ising bilayer with non-equivalent planes

    NASA Astrophysics Data System (ADS)

    Diaz, I. J. L.; Branco, N. S.

    2017-02-01

    We study the thermodynamic and magnetic properties of an Ising bilayer ferrimagnet. The system is composed of two interacting non-equivalent planes in which the intralayer couplings are ferromagnetic while the interlayer interactions are antiferromagnetic. Moreover, one of the planes is randomly diluted. The study is carried out within a Monte Carlo approach employing the multiple histogram reweighting method and finite-size scaling tools. The occurrence of a compensation phenomenon is verified and the compensation temperature, as well as the critical temperature for the model, are obtained as functions of the Hamiltonian parameters. We present a detailed discussion of the regions of the parameter space where the compensation effect is present or absent. Our results are then compared to a mean-field-like approximation applied to the same model by Balcerzak and Szałowski (2014). Although the Monte Carlo and mean-field results agree qualitatively, our quantitative results are significantly different.

  5. Generalized Moment Method for Gap Estimation and Quantum Monte Carlo Level Spectroscopy.

    PubMed

    Suwa, Hidemaro; Todo, Synge

    2015-08-21

    We formulate a convergent sequence for the energy gap estimation in the worldline quantum Monte Carlo method. The ambiguity left in the conventional gap calculation for quantum systems is eliminated. Our estimate becomes unbiased in the low-temperature limit, and its error bar is reliably estimated. The level spectroscopy from quantum Monte Carlo data is developed as an application of the unbiased gap estimation. From the spectral analysis, we precisely determine the Kosterlitz-Thouless quantum phase-transition point of the spin-Peierls model. It is established that the quantum phonon with a finite frequency is essential to the critical theory governed by the antiadiabatic limit, i.e., the k=1 SU(2) Wess-Zumino-Witten model.

  6. Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, Surendra N.

    1994-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between the transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing the results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel-plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer, and the nongray Monte Carlo solutions obtained for the same problem essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they differ significantly from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations. Radiative interactions are investigated in

  7. Design of composite laminates by a Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Fang, Chin; Springer, George S.

    1993-01-01

    A Monte Carlo procedure was developed for optimizing symmetric fiber reinforced composite laminates such that the weight is minimum and the Tsai-Wu strength failure criterion is satisfied in each ply. The laminate may consist of several materials including an idealized core, and may be subjected to several sets of combined in-plane and bending loads. The procedure yields the number of plies, the fiber orientation, and the material of each ply and the material and thickness of the core. A user friendly computer code was written for performing the numerical calculations. Laminates optimized by the code were compared to laminates resulting from existing optimization methods. These comparisons showed that the present Monte Carlo procedure is a useful and efficient tool for the design of composite laminates.

  8. Relaxation dynamics in small clusters: A modified Monte Carlo approach

    SciTech Connect

    Pal, Barnana

    2008-02-01

    Relaxation dynamics in two-dimensional atomic clusters consisting of monoatomic particles interacting through the Lennard-Jones (L-J) potential has been investigated using Monte Carlo simulation. A modification of the conventional Metropolis algorithm is proposed to introduce realistic thermal motion of the particles moving in the interacting L-J potential field. The proposed algorithm leads to quick equilibration from the nonequilibrium cluster configuration in a certain temperature regime, where the relaxation time τ, measured in Monte Carlo steps (MCS) per particle, varies inversely with the square root of the system temperature (√T) and with the pressure (P): τ ∝ 1/(P√T). From this, a realistic correlation between MCS and time has been predicted.
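
    For context, a minimal sketch of the conventional Metropolis baseline that the proposed algorithm modifies is given below; it samples a small two-dimensional Lennard-Jones cluster in reduced units (the cluster size, temperature, and step size are illustrative assumptions, and the modified thermal-motion moves described above are not reproduced):

    import numpy as np

    def lj_energy(pos, eps=1.0, sigma=1.0):
        """Total Lennard-Jones energy of a small 2D cluster (reduced units)."""
        diff = pos[:, None, :] - pos[None, :, :]
        r = np.sqrt((diff ** 2).sum(axis=-1))
        iu = np.triu_indices(len(pos), k=1)          # each pair counted once
        r = r[iu]
        return float(np.sum(4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)))

    def metropolis_step(pos, beta, rng, step=0.1):
        """One conventional Metropolis trial move of a randomly chosen particle."""
        i = rng.integers(len(pos))
        trial = pos.copy()
        trial[i] += rng.uniform(-step, step, size=2)
        dE = lj_energy(trial) - lj_energy(pos)
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            return trial
        return pos

    rng = np.random.default_rng(1)
    positions = rng.uniform(-1.5, 1.5, size=(7, 2))  # nonequilibrium starting cluster
    for _ in range(2000):                            # crude relaxation toward equilibrium
        positions = metropolis_step(positions, beta=5.0, rng=rng)
    print("final cluster energy:", lj_energy(positions))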

  9. Monte Carlo Methods for Bridging the Timescale Gap

    NASA Astrophysics Data System (ADS)

    Wilding, Nigel; Landau, David P.

    We identify the origin, and elucidate the character of the extended time-scales that plague computer simulation studies of first and second order phase transitions. A brief survey is provided of a number of new and existing techniques that attempt to circumvent these problems. Attention is then focused on two novel methods with which we have particular experience: “Wang-Landau sampling” and Phase Switch Monte Carlo. Detailed case studies are made of the application of the Wang-Landau approach to calculate the density of states of the 2D Ising model and the Edwards-Anderson spin glass. The principles and operation of Phase Switch Monte Carlo are described and its utility in tackling ‘difficult’ first order phase transitions is illustrated via a case study of hard-sphere freezing. We conclude with a brief overview of promising new methods for the improvement of deterministic, spin dynamics simulations.

  10. Monte Carlo Ground State Energy for Trapped Boson Systems

    NASA Astrophysics Data System (ADS)

    Rudd, Ethan; Mehta, N. P.

    2012-06-01

    Diffusion Monte Carlo (DMC) and Green's Function Monte Carlo (GFMC) algorithms were implemented to obtain numerical approximations for the ground state energies of systems of bosons in a harmonic trap potential. Gaussian pairwise particle interactions of the form V0 exp(-|r_i - r_j|^2/r0^2) were implemented in the DMC code. These results were verified for small values of V0 via a first-order perturbation theory approximation, for which the N-particle matrix element evaluates to N^2 V0 (1 + 1/r0^2)^(3/2). By obtaining the scattering length from the two-body potential in the perturbative regime (V0 ≪ 1), ground state energy results were compared to modern renormalized models by P. R. Johnson et al., New J. Phys. 11, 093022 (2009).

  11. Application of Monte Carlo simulations to improve basketball shooting strategy

    NASA Astrophysics Data System (ADS)

    Min, Byeong June

    2016-10-01

    The underlying physics of basketball shooting seems to be a straightforward example of Newtonian mechanics that can easily be traced by using numerical methods. However, a human basketball player does not make use of all the possible basketball trajectories. Instead, a basketball player will build up a database of successful shots and select the trajectory that has the greatest tolerance to the small variations of the real world. We simulate the basketball player's shooting training as a Monte Carlo sequence to build optimal shooting strategies, such as the launch speed and angle of the basketball, and whether to take a direct shot or a bank shot, as a function of the player's court position and height. The phase-space volume Ω that belongs to the successful launch velocities generated by Monte Carlo simulations is then used as the criterion to optimize a shooting strategy that incorporates not only mechanical, but also human, factors.
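
    A minimal sketch of the sampling idea is given below: launch speed and angle are drawn at random, each trajectory is propagated ballistically, and the success fraction serves as a stand-in for the phase-space volume Ω (the direct-shot-only geometry, rim tolerance, and sampling ranges are illustrative assumptions, not the simulation described above):

    import numpy as np

    G = 9.81              # m/s^2
    RIM_HEIGHT = 3.05     # m
    TOLERANCE = 0.20      # m, assumed window around rim height for a successful arc

    def success_fraction(release_height, distance, n=100_000, seed=2):
        """Monte Carlo estimate of the fraction of sampled launch conditions
        (speed, angle) that score a direct shot from a given court position."""
        rng = np.random.default_rng(seed)
        v = rng.uniform(5.0, 12.0, n)                        # launch speed, m/s (assumed range)
        theta = rng.uniform(np.radians(20.0), np.radians(70.0), n)
        vx, vy = v * np.cos(theta), v * np.sin(theta)
        t = distance / vx                                    # time to reach the hoop plane
        y = release_height + vy * t - 0.5 * G * t ** 2
        descending = (vy - G * t) < 0.0                      # ball must be on its way down
        hits = descending & (np.abs(y - RIM_HEIGHT) < TOLERANCE)
        return hits.mean()

    # Free-throw-like example: 2.0 m release height, 4.6 m horizontal distance
    print("fraction of sampled launches that score:", success_fraction(2.0, 4.6))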

  12. Radiotherapy Monte Carlo simulation using cloud computing technology.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware, as a proof of principle.
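
    The cost behaviour described above can be illustrated with a toy billing model (the per-started-hour charging rule and the 48-hour workload are assumptions for illustration, not the paper's measured data):

    import math

    def cloud_run(total_cpu_hours, n_machines, rate_per_machine_hour=1.0):
        """Toy model: work splits evenly, each machine is billed per started hour."""
        wall_time = total_cpu_hours / n_machines            # ~1/n completion time
        billed_hours = math.ceil(wall_time)                 # rounding is the source of waste
        cost = n_machines * billed_hours * rate_per_machine_hour
        return wall_time, cost

    for n in (6, 8, 10, 12, 16):
        t, c = cloud_run(total_cpu_hours=48, n_machines=n)
        print(f"n={n:2d}  wall time={t:5.2f} h  cost={c:5.1f}")
    # Cost is lowest when n divides the 48 simulation hours exactly (6, 8, 12, 16);
    # n=10 leaves each machine idle for part of its final billed hour.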

  13. Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography

    SciTech Connect

    Amendt, P.; Estabrook, K.; Everett, M.; London, R.A.; Maitland, D.; Zimmerman, G.; Colston, B.; da Silva, L.; Sathyam, U.

    2000-02-01

    The laser-tissue interaction code LATIS [London et al., Appl. Optics 36, 9068 (1998)] is used to analyze photon scattering histories representative of optical coherence tomography (OCT) experiments performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein anisotropic scattering is implemented and used to simulate signal discrimination of intravascular structure. An analytic model is developed and used to obtain a scaling law relation for optimization of the OCT signal and to validate Monte Carlo photonics. The appropriateness of the Henyey-Greenstein phase function is studied by direct comparison with more detailed Mie scattering theory using an ensemble of spherical dielectric scatterers. Modest differences are found between the two prescriptions for describing photon angular scattering in tissue. In particular, the Mie scattering phase functions provide less overall reflectance signal but more signal contrast compared to the Henyey-Greenstein formulation.
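
    The Henyey-Greenstein phase function mentioned above has a standard closed-form inverse-CDF sampler; a minimal sketch follows (the anisotropy factor g = 0.9 is an illustrative value for soft tissue, not a parameter taken from the study):

    import numpy as np

    def sample_hg_costheta(g, n, seed=3):
        """Draw scattering-angle cosines from the Henyey-Greenstein phase function
        using the standard inverse-CDF formula; g is the anisotropy factor."""
        rng = np.random.default_rng(seed)
        u = rng.random(n)
        if abs(g) < 1e-6:
            return 2.0 * u - 1.0                       # isotropic limit
        frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
        return (1.0 + g * g - frac * frac) / (2.0 * g)

    mu = sample_hg_costheta(g=0.9, n=1_000_000)        # g ~ 0.9 is a common tissue assumption
    print("mean cos(theta):", mu.mean())               # should be close to g for HG scattering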

  14. Large-cell Monte Carlo renormalization of irreversible growth processes

    NASA Technical Reports Server (NTRS)

    Nakanishi, H.; Family, F.

    1985-01-01

    Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers, and in any case demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.

  15. Monte Carlo Study of Real Time Dynamics on the Lattice

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F.; Vartak, Sohan; Warrington, Neill C.

    2016-08-01

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from a highly oscillatory phase of the path integral. In this Letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and, in principle, applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm.

  16. Minimising biases in full configuration interaction quantum Monte Carlo.

    PubMed

    Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W

    2015-03-14

    We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme commonly used in diffusion Monte Carlo to remove the bias caused by population control [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)] is effective, and we recommend its use as a post-processing step.

  17. Estimation of beryllium ground state energy by Monte Carlo simulation

    SciTech Connect

    Kabir, K. M. Ariful; Halder, Amal

    2015-05-15

    Quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids and a variety of model systems. Using the variational Monte Carlo method we have calculated the ground state energy of the beryllium atom. Our calculations are based on a modified four-parameter trial wave function, which leads to a good result compared with the few-parameter trial wave functions presented before. Based on random numbers, we can generate a large sample of electron locations to estimate the ground state energy of beryllium. Our calculation gives a good estimate of the ground state energy of the beryllium atom compared with the corresponding exact data.

  18. Monte Carlo methods for light propagation in biological tissues.

    PubMed

    Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine

    2015-11-01

    Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis-Hastings algorithm. The resulting estimation methods are then compared to the so-called Wang-Prahl (or Wang) method. Finally, the formal representation allows us to derive a non-linear optimization algorithm, close to the Levenberg-Marquardt method, that is used to estimate the scattering and absorption coefficients of the tissue from measurements.

  19. Monte Carlo Methods in ICF (LIRPP Vol. 13)

    NASA Astrophysics Data System (ADS)

    Zimmerman, George B.

    2016-10-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but its efficiency can be improved by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  20. Monte Carlo modeling of coherent scattering: Influence of interference

    SciTech Connect

    Leliveld, C.J.; Maas, J.G.; Bom, V.R.; Eijk, C.W.E. van

    1996-12-01

    In this study, the authors present Monte Carlo (MC) simulation results for the intensity and angular distribution of scattered radiation from cylindrical absorbers. For coherent scattering the authors have taken into account the effects of interference by using new molecular form factor data for the AAPM plastic materials and water. The form factor data were compiled from X-ray diffraction measurements. The new data have been implemented in the authors' Electron Gamma Shower (EGS4) Monte Carlo system. The hybrid MC simulation results show a significant influence on the intensity and the angular distribution of coherently scattered photons. They conclude that MC calculations are significantly in error when interference effects are ignored in the model for coherent scattering. Especially for simulation studies of scattered radiation in collimated geometries, where small angle scattering will prevail, the coherent scatter contribution is highly overestimated when conventional form factor data are used.

  1. Monte Carlo analysis of satellite debris footprint dispersion

    NASA Technical Reports Server (NTRS)

    Rao, P. P.; Woeste, M. A.

    1979-01-01

    A comprehensive study is performed to investigate satellite debris impact point dispersion using a combination of Monte Carlo statistical analysis and parametric methods. The Monte Carlo technique accounts for nonlinearities in the entry point dispersion, which is represented by a covariance matrix of position and velocity errors. Because downrange distance of impact is a monotonic function of debris ballistic coefficient, a parametric method is useful for determining dispersion boundaries. The scheme is applied in the present analysis to estimate the Skylab footprint dispersions for a controlled reentry. A significant increase in the footprint dispersion is noticed for satellite breakup above a 200,000-ft altitude. A general discussion of the method used for analysis is presented together with some typical results obtained for the Skylab deboost mission, which was designed before NASA abandoned plans for a Skylab controlled reentry.

  2. Condensed Matter Applications of Quantum Monte Carlo at the Petascale

    NASA Astrophysics Data System (ADS)

    Ceperley, David

    2014-03-01

    The Quantum Monte Carlo method has a number of advantages that make it well suited to high-performance computation. The method scales well in particle number and can treat complex systems with weak or strong correlation, including disordered systems and large thermal and zero-point effects of the nuclei. The methods are adaptable to a variety of computer architectures and have multiple parallelization strategies. Most errors are under control, so that increases in computer resources allow a systematic increase in accuracy. We will discuss a number of recent applications of Quantum Monte Carlo, including dense hydrogen and transition metal systems, and suggest future directions. Support from DOE grants DE-FG52-09NA29456, SCIDAC DE-SC0008692, the Network for Ab Initio Many-Body Methods, and an INCITE allocation.

  3. Computer Monte Carlo simulation in quantitative resource estimation

    USGS Publications Warehouse

    Root, D.H.; Menzie, W.D.; Scott, W.A.

    1992-01-01

    The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit.

  4. Mammography X-Ray Spectra Simulated with Monte Carlo

    SciTech Connect

    Vega-Carrillo, H. R.; Gonzalez, J. Ramirez; Manzanares-Acuna, E.; Hernandez-Davila, V. M.; Villasana, R. Hernandez; Mercado, G. A.

    2008-08-11

    Monte Carlo calculations have been carried out to obtain the x-ray spectra of various target-filter combinations for a mammography unit. Mammography is widely used to diagnose breast cancer. In addition to the Mo target with Mo filter combination, the Rh/Rh, Mo/Rh, Mo/Al, Rh/Al, and W/Rh combinations are also utilized. In this work Monte Carlo calculations, using the MCNP 4C code, were carried out to estimate the x-ray spectra produced when a beam of 28 keV electrons collides with Mo, Rh and W targets. The resulting x-ray spectra show characteristic x-rays and continuous bremsstrahlung. Spectra were also calculated including filters.

  5. Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.

    PubMed

    Leigh, Jessica W; Bryant, David

    2015-09-01

    Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology.
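
    A minimal sketch of the contrast between an exhaustive grid and a sampled design is given below (the parameter names and ranges are illustrative assumptions, not those used in the study):

    import numpy as np

    def full_factorial_runs(levels_per_param, n_params):
        """Number of simulations needed for an exhaustive grid sweep."""
        return levels_per_param ** n_params

    def sampled_design(n_runs, bounds, seed=4):
        """Monte Carlo alternative: draw parameter vectors uniformly from their ranges."""
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        return lo + (hi - lo) * rng.random((n_runs, len(bounds)))

    bounds = [(0.001, 0.1),    # substitution rate        (illustrative)
              (4, 64),         # number of taxa           (illustrative)
              (100, 5000),     # sequence length          (illustrative)
              (0.0, 1.0)]      # proportion invariant     (illustrative)
    print("exhaustive, 10 levels each:", full_factorial_runs(10, len(bounds)), "runs")
    print("sampled design:", sampled_design(500, bounds).shape, "parameter vectors")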

  6. Lattice-switch Monte Carlo: the fcc—bcc problem

    NASA Astrophysics Data System (ADS)

    Underwood, T. L.; Ackland, G. J.

    2015-09-01

    Lattice-switch Monte Carlo is an efficient method for calculating the free energy difference between two solid phases, or a solid and a fluid phase. Here, we provide a brief introduction to the method, and list its applications since its inception. We then describe a lattice switch for the fcc and bcc phases based on the Bain orientation relationship. Finally, we present preliminary results regarding our application of the method to the fcc and bcc phases in the Lennard-Jones system. Our initial calculations reveal that the bcc phase is unstable, quickly degenerating into some as yet undetermined metastable solid phase. This renders conventional lattice-switch Monte Carlo intractable for this phase. Possible solutions to this problem are discussed.

  7. Fast Off-Lattice Monte Carlo Simulations with Soft Potentials

    NASA Astrophysics Data System (ADS)

    Zong, Jing; Yang, Delian; Yin, Yuhua; Zhang, Xinghua; Wang, Qiang (David)

    2011-03-01

    Fast off-lattice Monte Carlo simulations with soft repulsive potentials that allow particle overlapping give orders of magnitude faster/better sampling of the configurational space than conventional molecular simulations with hard-core repulsions (such as the hard-sphere or Lennard-Jones repulsion). Here we present our fast off-lattice Monte Carlo simulations ranging from small-molecule soft spheres and liquid crystals to polymeric systems including homopolymers and rod-coil diblock copolymers. The simulation results are compared with various theories based on the same Hamiltonian as in the simulations (thus without any parameter-fitting) to quantitatively reveal the consequences of approximations in these theories. Q. Wang and Y. Yin, J. Chem. Phys., 130, 104903 (2009).

  8. Quantum Monte Carlo calculations with chiral effective field theory interactions.

    PubMed

    Gezerlis, A; Tews, I; Epelbaum, E; Gandolfi, S; Hebeler, K; Nogga, A; Schwenk, A

    2013-07-19

    We present the first quantum Monte Carlo (QMC) calculations with chiral effective field theory (EFT) interactions. To achieve this, we remove from the nuclear forces, up to next-to-next-to-leading order, all sources of nonlocality, which hamper their inclusion in QMC calculations. We perform auxiliary-field diffusion Monte Carlo (AFDMC) calculations for the neutron matter energy up to saturation density based on local leading-order, next-to-leading order, and next-to-next-to-leading order nucleon-nucleon interactions. Our results exhibit a systematic order-by-order convergence in chiral EFT and provide nonperturbative benchmarks with theoretical uncertainties. For the softer interactions, perturbative calculations are in excellent agreement with the AFDMC results. This work paves the way for QMC calculations with systematic chiral EFT interactions for nuclei and nuclear matter, for testing the perturbativeness of different orders, and allows for matching to lattice QCD results by varying the pion mass.

  9. Drag coefficient modeling for GRACE using Direct Simulation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; McLaughlin, Craig A.; Sutton, Eric K.

    2013-12-01

    The drag coefficient is a major source of uncertainty in predicting the orbit of a satellite in low Earth orbit (LEO). Computational methods like Test Particle Monte Carlo (TPMC) and Direct Simulation Monte Carlo (DSMC) are important tools for accurately computing physical drag coefficients. However, these methods are computationally expensive and cannot be employed in real time. Therefore, modeling of the physical drag coefficient is required. This work presents a technique for developing parameterized drag coefficient models using the DSMC method. The technique is validated by developing a model for the Gravity Recovery and Climate Experiment (GRACE) satellite. Results show that drag coefficients computed using the developed model for GRACE agree to within 1% with those computed using DSMC.

  10. Fixed-node diffusion Monte Carlo method for lithium systems

    NASA Astrophysics Data System (ADS)

    Rasch, K. M.; Mitas, L.

    2015-07-01

    We study lithium systems over a range of numbers of atoms, specifically the atomic anion, dimer, metallic cluster, and body-centered-cubic crystal, using the fixed-node diffusion Monte Carlo method. The focus is on analysis of the fixed-node errors of each system, and for that purpose we test several orbital sets in order to provide the most accurate nodal hypersurfaces. The calculations include both core and valence electrons in order to avoid any possible impact by pseudopotentials. To quantify the fixed-node errors, we compare our results to other highly accurate calculations, and wherever available, to experimental observations. The results for these Li systems show that the fixed-node diffusion Monte Carlo method achieves accurate total energies, recovers 96-99% of the correlation energy, and estimates binding energies with errors bounded by 0.1 eV/atom.

  11. Accelerated Monte Carlo simulations with restricted Boltzmann machines

    NASA Astrophysics Data System (ADS)

    Huang, Li; Wang, Lei

    2017-01-01

    Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feed-forward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine to propose efficient Monte Carlo updates to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.

  12. Sign problem and Monte Carlo calculations beyond Lefschetz thimbles

    SciTech Connect

    Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; Ridgway, Gregory W.; Warrington, Neill C.

    2016-05-10

    We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action (“Lefschetz thimble”). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.

  13. Sign problem and Monte Carlo calculations beyond Lefschetz thimbles

    DOE PAGES

    Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; ...

    2016-05-10

    We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action (“Lefschetz thimble”). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.

  14. Cluster Monte Carlo methods for the FePt Hamiltonian

    NASA Astrophysics Data System (ADS)

    Lyberatos, A.; Parker, G. J.

    2016-02-01

    Cluster Monte Carlo methods for the classical spin Hamiltonian of FePt with long range exchange interactions are presented. We use a combination of the Swendsen-Wang (or Wolff) and Metropolis algorithms that satisfies the detailed balance condition and ergodicity. The algorithms are tested by calculating the temperature dependence of the magnetization, susceptibility and heat capacity of L10-FePt nanoparticles in a range including the critical region. The cluster models yield numerical results in good agreement within statistical error with the standard single-spin flipping Monte Carlo method. The variation of the spin autocorrelation time with grain size is used to deduce the dynamic exponent of the algorithms. Our cluster models do not provide a more accurate estimate of the magnetic properties at equilibrium.

  15. Adaptive Mesh and Algorithm Refinement Using Direct Simulation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Garcia, Alejandro L.; Bell, John B.; Crutchfield, William Y.; Alder, Berni J.

    1999-09-01

    Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.

  16. Monte Carlo calculation of patient organ doses from computed tomography.

    PubMed

    Oono, Takeshi; Araki, Fujio; Tsuduki, Shoya; Kawasaki, Keiichi

    2014-01-01

    In this study, we aimed to evaluate quantitatively the patient organ dose from computed tomography (CT) using Monte Carlo calculations. A multidetector CT unit (Aquilion 16, TOSHIBA Medical Systems) was modeled with the GMctdospp (IMPS, Germany) software based on the EGSnrc Monte Carlo code. The X-ray spectrum and the configuration of the bowtie filter for the Monte Carlo modeling were determined from the chamber measurements for the half-value layer (HVL) of aluminum and the dose profile (off-center ratio, OCR) in air. The calculated HVL and OCR were compared with measured values for body irradiation with 120 kVp. The Monte Carlo-calculated patient dose distribution was converted to the absorbed dose measured by a Farmer chamber with a (60)Co calibration factor at the center of a CT water phantom. The patient dose was evaluated from dose-volume histograms for the internal organs in the pelvis. The calculated Al HVL was in agreement within 0.3% with the measured value of 5.2 mm. The calculated dose profile in air matched the measured value within 5% in a range of 15 cm from the central axis. The mean doses for soft tissues were 23.5, 23.8, and 27.9 mGy for the prostate, rectum, and bladder, respectively, under exposure conditions of 120 kVp, 200 mA, a beam pitch of 0.938, and beam collimation of 32 mm. For bones of the femur and pelvis, the mean doses were 56.1 and 63.6 mGy, respectively. The doses for bone increased by up to 2-3 times that of soft tissue, corresponding to the ratio of their mass-energy absorption coefficients.

  17. Monte Carlo simulation of virtual Compton scattering below pion threshold

    NASA Astrophysics Data System (ADS)

    Janssens, P.; Van Hoorebeke, L.; Fonvieille, H.; D'Hose, N.; Bertin, P. Y.; Bensafa, I.; Degrande, N.; Distler, M.; Di Salvo, R.; Doria, L.; Friedrich, J. M.; Friedrich, J.; Hyde-Wright, Ch.; Jaminion, S.; Kerhoas, S.; Laveissière, G.; Lhuillier, D.; Marchand, D.; Merkel, H.; Roche, J.; Tamas, G.; Vanderhaeghen, M.; Van de Vyver, R.; Van de Wiele, J.; Walcher, Th.

    2006-10-01

    This paper describes the Monte Carlo simulation developed specifically for the Virtual Compton Scattering (VCS) experiments below pion threshold that have been performed at MAMI and JLab. This simulation generates events according to the (Bethe-Heitler + Born) cross-section behaviour and takes into account all relevant resolution-deteriorating effects. It determines the "effective" solid angle for the various experimental settings which are used for the precise determination of the photon electroproduction absolute cross-section.

  18. Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model

    SciTech Connect

    D.P. Stotler

    2005-06-09

    The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.

  19. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  20. Instantaneous GNSS attitude determination: A Monte Carlo sampling approach

    NASA Astrophysics Data System (ADS)

    Sun, Xiucong; Han, Chao; Chen, Pei

    2017-04-01

    A novel instantaneous GNSS ambiguity resolution approach which makes use of only single-frequency carrier phase measurements for ultra-short baseline attitude determination is proposed. The Monte Carlo sampling method is employed to obtain the probability density function of ambiguities from a quaternion-based GNSS-attitude model and the LAMBDA method strengthened with a screening mechanism is then utilized to fix the integer values. Experimental results show that 100% success rate could be achieved for ultra-short baselines.

  1. Improved numerical techniques for processing Monte Carlo thermal scattering data

    SciTech Connect

    Schmidt, E; Rose, P

    1980-01-01

    As part of a Thermal Benchmark Validation Program sponsored by the Electric Power Research Institute (EPRI), the National Nuclear Data Center has been calculating thermal reactor lattices using the SAM-F Monte Carlo computer code. Within this program, a significant improvement has been made in the adequacy of the numerical procedures used to process the thermal differential scattering cross sections for hydrogen bound in H2O.

  2. Monte Carlo Methods and Applications for the Nuclear Shell Model

    SciTech Connect

    Dean, D.J.; White, J.A.

    1998-08-10

    The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.

  3. Monte Carlo calculations for r-process nucleosynthesis

    SciTech Connect

    Mumpower, Matthew Ryan

    2015-11-12

    A Monte Carlo framework is developed for exploring the impact of nuclear model uncertainties on the formation of the heavy elements. Mass measurements tightly constrain the macroscopic sector of FRDM2012. For r-process nucleosynthesis, it is necessary to understand the microscopic physics of the nuclear model employed. A combined approach of measurements and a deeper understanding of the microphysics is thus warranted to elucidate the site of the r-process.

  4. Monte Carlo simulation of the ELIMED beamline using Geant4

    NASA Astrophysics Data System (ADS)

    Pipek, J.; Romano, F.; Milluzzo, G.; Cirrone, G. A. P.; Cuttone, G.; Amico, A. G.; Margarone, D.; Larosa, G.; Leanza, R.; Petringa, G.; Schillaci, F.; Scuderi, V.

    2017-03-01

    In this paper, we present a Geant4-based Monte Carlo application for ELIMED beamline [1-6] simulation, including its features and several preliminary results. We have developed the application to aid the design of the beamline, to estimate various beam characteristics, and to assess the amount of secondary radiation. In future, an enhanced version of this application will support the beamline users when preparing their experiments.

  5. Monte Carlo Studies of the Fcc Ising Model.

    NASA Astrophysics Data System (ADS)

    Polgreen, Thomas Lee

    Monte Carlo simulations are performed on the antiferromagnetic fcc Ising model, which is relevant to the binary alloy CuAu. The model exhibits a first-order ordering transition as a function of temperature. The lattice free energy of the model is determined for all temperatures. By matching the free energies of the ordered and disordered phases, the transition temperature is determined to be T_t = 1.736 J, where J is the coupling constant of the model. The free energy as determined by series expansion and the Kikuchi cluster variation method is compared with the Monte Carlo results. These methods work well for the ordered phase, but not for the disordered phase. A determination of the pair correlation in the disordered phase along the {100} direction indicates a correlation length of approximately 2.5a at the phase transition. The correlation length exhibits mean-field-like temperature dependence. The Cowley-Warren short-range order parameters are determined as a function of temperature for the first twelve nearest-neighbor shells of this model. The Monte Carlo results are used to determine the free parameter in a mean-field-like class of theories described by Clapp and Moss. The ability of these theories to predict ratios between pair potentials is tested with these results. In addition, evidence of a region of heterophase fluctuations is presented, in agreement with x-ray diffuse scattering measurements on Cu3Au. The growth of order following a rapid quench from disorder is studied by means of a dynamic Monte Carlo simulation. The results compare favorably with the Landau theory proposed by Chan for temperatures near the first-order phase transition. For lower temperatures, the results are in agreement with the theories of Lifshitz and Allen and Chan. In the intermediate temperature range, our extension of Chan's theory is able to explain our simulation results and recent experimental results.

  6. Monte Carlo approach to nuclei and nuclear matter

    SciTech Connect

    Fantoni, Stefano; Gandolfi, Stefano; Illarionov, Alexey Yu.; Schmidt, Kevin E.; Pederiva, Francesco

    2008-10-13

    We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.

  7. Monte Carlo Simulations: Number of Iterations and Accuracy

    DTIC Science & Technology

    2015-07-01

    Keywords: Monte Carlo, confidence interval, central limit theorem, number of iterations, Wilson score method, Wald method, normal probability plot. Tests for normality can be performed to quantify the confidence level of a normality assumption. The basic idea of a normal probability plot (NPP) is to plot the sample data in
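
    The Wilson score and Wald intervals named in the keyword list both have simple closed forms for a Monte Carlo success probability; a minimal sketch follows (the 3-out-of-100 example is illustrative):

    import math

    def wald_interval(successes, n, z=1.96):
        """Normal-approximation (Wald) interval for a Monte Carlo success probability."""
        p = successes / n
        half = z * math.sqrt(p * (1.0 - p) / n)
        return p - half, p + half

    def wilson_interval(successes, n, z=1.96):
        """Wilson score interval; better behaved for small n or extreme probabilities."""
        p = successes / n
        denom = 1.0 + z * z / n
        centre = (p + z * z / (2.0 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))
        return centre - half, centre + half

    # 3 successes observed in 100 Monte Carlo iterations (illustrative numbers)
    print("Wald   95% CI:", wald_interval(3, 100))
    print("Wilson 95% CI:", wilson_interval(3, 100))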

  8. Monte Carlo verification of IMRT treatment plans on grid.

    PubMed

    Gómez, Andrés; Fernández Sánchez, Carlos; Mouriño Gallego, José Carlos; López Cacheiro, Javier; González Castaño, Francisco J; Rodríguez-Silva, Daniel; Domínguez Carrera, Lorena; González Martínez, David; Pena García, Javier; Gómez Rodríguez, Faustino; González Castaño, Diego; Pombar Cameán, Miguel

    2007-01-01

    The eIMRT project is producing new remote computational tools for helping radiotherapists to plan and deliver treatments. The first available tool will be the IMRT treatment verification using Monte Carlo, which is a computationally expensive problem that can be executed remotely on a GRID. In this paper, the current implementation of this process using GRID and SOA technologies is presented, describing the remote execution environment and the client.

  9. Mcfast, a Parameterized Fast Monte Carlo for Detector Studies

    NASA Astrophysics Data System (ADS)

    Boehnlein, Amber S.

    McFast is a modularized and parameterized fast Monte Carlo program which is designed to generate physics analysis information for different detector configurations and subdetector designs. McFast is based on simple geometrical object definitions and includes hit generation, parameterized track generation, vertexing, a muon system, electromagnetic calorimetry, and trigger framework for physics studies. Auxiliary tools include a geometry editor, visualization, and an i/o system.

  10. Direct Monte Carlo Simulations of Hypersonic Viscous Interactions Including Separation

    NASA Technical Reports Server (NTRS)

    Moss, James N.; Rault, Didier F. G.; Price, Joseph M.

    1993-01-01

    Results of calculations obtained using the direct simulation Monte Carlo method for Mach 25 flow over a control surface are presented. The numerical simulations are for a 35-deg compression ramp at a low-density wind-tunnel test condition. Calculations obtained using both two- and three-dimensional solutions are reviewed, and a qualitative comparison is made with oil-flow pictures that highlight separation and three-dimensional flow structure.

  11. Distributional monte carlo methods for the boltzmann equation

    NASA Astrophysics Data System (ADS)

    Schrock, Christopher R.

    Stochastic particle methods (SPMs) for the Boltzmann equation, such as the Direct Simulation Monte Carlo (DSMC) technique, have gained popularity for the prediction of flows in which the assumptions behind the continuum equations of fluid mechanics break down; however, there are still a number of issues that make SPMs computationally challenging for practical use. In traditional SPMs, simulated particles may possess only a single velocity vector, even though they may represent an extremely large collection of actual particles. This limits the method to converge only in law to the Boltzmann solution. This document details the development of new SPMs that allow the velocity of each simulated particle to be distributed. This approach has been termed Distributional Monte Carlo (DMC). A technique is described which applies kernel density estimation to Nanbu's DSMC algorithm. It is then proven that the method converges not just in law, but also in solution for L∞(R³) solutions of the space homogeneous Boltzmann equation. This provides for direct evaluation of the velocity density function. The derivation of a general Distributional Monte Carlo method is given which treats collision interactions between simulated particles as a relaxation problem. The framework is proven to converge in law to the solution of the space homogeneous Boltzmann equation, as well as in solution for L∞(R³) solutions. An approach based on the BGK simplification is presented which computes collision outcomes deterministically. Each technique is applied to the well-studied Bobylev-Krook-Wu solution as a numerical test case. Accuracy and variance of the solutions are examined as functions of various simulation parameters. Significantly improved accuracy and reduced variance are observed in the normalized moments for the Distributional Monte Carlo technique employing discrete BGK collision modeling.

  12. Testing trivializing maps in the Hybrid Monte Carlo algorithm

    PubMed Central

    Engel, Georg P.; Schaefer, Stefan

    2011-01-01

    We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CPN−1 model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied. PMID:21969733

  13. Valence-bond quantum Monte Carlo algorithms defined on trees.

    PubMed

    Deschner, Andreas; Sørensen, Erik S

    2014-09-01

    We present a class of algorithms for performing valence-bond quantum Monte Carlo of quantum spin models. Valence-bond quantum Monte Carlo is a projective T=0 Monte Carlo method based on sampling of a set of operator strings that can be viewed as forming a treelike structure. The algorithms presented here utilize the notion of a worm that moves up and down this tree and changes the associated operator string. In quite general terms, we derive a set of equations whose solutions correspond to a whole class of algorithms. As specific examples of this class of algorithms, we focus on two cases. The bouncing worm algorithm, for which updates are always accepted by allowing the worm to bounce up and down the tree, and the driven worm algorithm, where a single parameter controls how far up the tree the worm reaches before turning around. The latter algorithm involves only a single bounce where the worm turns from going up the tree to going down. The presence of the control parameter necessitates the introduction of an acceptance probability for the update.

  14. Comparison of deterministic and Monte Carlo methods in shielding design.

    PubMed

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions and can therefore lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison of the capabilities of both Monte Carlo and deterministic methods in day-to-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
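
    A minimal sketch of the comparison is given below; it contrasts the deterministic uncollided (narrow-beam) transmission with an analog Monte Carlo estimate of the same quantity, so no build-up factor is involved (the attenuation coefficient and slab thickness are illustrative assumptions):

    import numpy as np

    def deterministic_transmission(mu, thickness):
        """Uncollided narrow-beam transmission exp(-mu*t); no build-up correction."""
        return np.exp(-mu * thickness)

    def mc_transmission(mu, thickness, n=1_000_000, seed=5):
        """Analog Monte Carlo: photons normally incident on a slab are removed
        at their first interaction; count those whose free path exceeds the slab."""
        rng = np.random.default_rng(seed)
        free_paths = rng.exponential(scale=1.0 / mu, size=n)
        return float(np.mean(free_paths > thickness))

    mu = 0.2       # 1/cm, illustrative attenuation coefficient
    t = 10.0       # cm, slab thickness
    print("deterministic:", deterministic_transmission(mu, t))
    print("Monte Carlo  :", mc_transmission(mu, t))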

  15. A New Approach to Monte Carlo Simulations in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.

    2002-08-01

    Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near second-order transitions and to metastability near first-order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101-1 (2001).
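
    A minimal sketch of the random walk in energy space, applied to a small two-dimensional Ising lattice, is given below (the lattice size, flatness criterion, and stopping value of ln f are illustrative choices, and a production calculation would push ln f far smaller):

    import numpy as np

    def wang_landau_ising(L=4, ln_f_final=1e-3, flatness=0.8, sweep=10_000, seed=6):
        """Wang-Landau random walk in energy space for the 2D Ising model on an
        L x L periodic lattice. Moves are accepted with min(1, g(E_old)/g(E_new)),
        ln g(E) is incremented by ln f after every step, and ln f is halved each
        time the energy histogram of the current stage is sufficiently flat."""
        rng = np.random.default_rng(seed)
        N = L * L
        spins = rng.choice([-1, 1], size=(L, L))
        ln_g = np.zeros(N + 1)                 # energies -2N, -2N+4, ..., +2N
        hist = np.zeros(N + 1)

        def energy(s):
            return int(-np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

        def idx(E):
            return (E + 2 * N) // 4

        E = energy(spins)
        ln_f = 1.0
        while ln_f > ln_f_final:
            for _ in range(sweep):
                i, j = rng.integers(L), rng.integers(L)
                nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2 * spins[i, j] * nn
                if rng.random() < np.exp(ln_g[idx(E)] - ln_g[idx(E + dE)]):
                    spins[i, j] *= -1
                    E += dE
                ln_g[idx(E)] += ln_f
                hist[idx(E)] += 1
            visited = hist > 0                 # flatness checked over visited bins only
            if hist[visited].min() > flatness * hist[visited].mean():
                hist[:] = 0
                ln_f *= 0.5
        return ln_g - ln_g[0] + np.log(2.0)    # normalise: two ground states

    ln_g = wang_landau_ising()
    print("estimated ln g(E_min):", ln_g[0])   # should equal ln 2 after normalisation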

  16. Autocorrelation and Dominance Ratio in Monte Carlo Criticality Calculations

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Kornreich, Drew E.

    2003-11-15

    The cycle-to-cycle correlation (autocorrelation) in Monte Carlo criticality calculations is analyzed concerning the dominance ratio of fission kernels. The mathematical analysis focuses on how the eigenfunctions of a fission kernel decay if operated on by the cycle-to-cycle error propagation operator of the Monte Carlo stationary source distribution. The analytical results obtained can be summarized as follows: When the dominance ratio of a fission kernel is close to unity, autocorrelation of the k-effective tallies is weak and may be negligible, while the autocorrelation of the source distribution is strong and decays slowly. The practical implication is that when one analyzes a critical reactor with a large dominance ratio by Monte Carlo methods, the confidence interval estimation of the fission rate and other quantities at individual locations must account for the strong autocorrelation. Numerical results are presented for sample problems with a dominance ratio of 0.85-0.99, where Shannon and relative entropies are utilized to exclude the influence of initial nonstationarity.
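
    A minimal sketch of how cycle-to-cycle correlation inflates the naive standard error of a tally mean is given below (the AR(1) surrogate series stands in for real k-effective cycle estimates and is not Monte Carlo output):

    import numpy as np

    def autocorrelation(x, max_lag):
        """Sample autocorrelation of a sequence of cycle tallies (e.g., k-effective)."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        var = np.dot(x, x) / len(x)
        return np.array([np.dot(x[:-k], x[k:]) / (len(x) * var)
                         for k in range(1, max_lag + 1)])

    def corrected_std_error(x, max_lag=50):
        """Standard error of the mean, inflated for cycle-to-cycle correlation."""
        n = len(x)
        rho = autocorrelation(x, max_lag)
        lags = np.arange(1, max_lag + 1)
        inflation = 1.0 + 2.0 * np.sum((1.0 - lags / n) * rho)
        return np.std(x, ddof=1) / np.sqrt(n) * np.sqrt(max(inflation, 0.0))

    # AR(1) surrogate for correlated cycle estimates of k-effective (not real MC output)
    rng = np.random.default_rng(7)
    keff = np.empty(5000)
    keff[0] = 1.0
    for c in range(1, len(keff)):
        keff[c] = 1.0 + 0.8 * (keff[c - 1] - 1.0) + 1.0e-3 * rng.standard_normal()
    print("naive std error    :", np.std(keff, ddof=1) / np.sqrt(len(keff)))
    print("corrected std error:", corrected_std_error(keff))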

  17. Chemical accuracy from quantum Monte Carlo for the benzene dimer

    SciTech Connect

    Azadi, Sam; Cohen, R. E.

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of −2.3(4) and −2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is −2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  18. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data, combined with the diminished time available to engineers, motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters and, most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
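
    A minimal sketch of the k-nearest-neighbor classification step is given below, using plain NumPy over surrogate dispersion data (the failure rule and data are made up, and the kernel density estimation and sequential feature selection stages of the actual tool are not reproduced):

    import numpy as np

    def knn_predict(X_train, y_train, X_query, k=7):
        """Majority vote of the k nearest training cases (rows are dispersion vectors)."""
        d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
        nearest = np.argsort(d, axis=1)[:, :k]
        return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

    # Surrogate Monte Carlo data: two dispersed parameters, failure when both are large
    rng = np.random.default_rng(8)
    X = rng.normal(size=(2000, 2))
    y = ((X[:, 0] > 1.0) & (X[:, 1] > 0.5)).astype(int)    # stand-in failure criterion
    pred = knn_predict(X[:1500], y[:1500], X[1500:])
    print("held-out accuracy:", (pred == y[1500:]).mean())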

  19. Monte Carlo simulations of electron transport in strongly attaching gases

    NASA Astrophysics Data System (ADS)

    Petrovic, Zoran; Miric, Jasmina; Simonovic, Ilija; Bosnjakovic, Danko; Dujko, Sasa

    2016-09-01

    Extensive loss of electrons in strongly attaching gases imposes significant difficulties in Monte Carlo simulations at low electric field strengths. In order to compensate for such losses, some kind of rescaling procedures must be used. In this work, we discuss two rescaling procedures for Monte Carlo simulations of electron transport in strongly attaching gases: (1) discrete rescaling, and (2) continuous rescaling. The discrete rescaling procedure is based on duplication of electrons randomly chosen from the remaining swarm at certain discrete time steps. The continuous rescaling procedure employs a dynamically defined fictitious ionization process with the constant collision frequency chosen to be equal to the attachment collision frequency. These procedures should not in any way modify the distribution function. Monte Carlo calculations of transport coefficients for electrons in SF6 and CF3I are performed in a wide range of electric field strengths. However, special emphasis is placed upon the analysis of transport phenomena in the limit of lower electric fields where the transport properties are strongly affected by electron attachment. Two important phenomena arise: (1) the reduction of the mean energy with increasing E/N for electrons in SF6, and (2) the occurrence of negative differential conductivity in the bulk drift velocity of electrons in both SF6 and CF3I.
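
    The discrete rescaling idea can be illustrated with a short sketch (assumptions: attachment is represented by a placeholder random loss, and only electron energies are tracked); duplicating randomly chosen survivors restores the population size without biasing the sampled distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def discrete_rescale(energies, target_n):
    """Duplicate electrons randomly chosen from the surviving swarm until the
    population is restored to target_n (sketch of the 'discrete rescaling' idea)."""
    deficit = target_n - len(energies)
    if deficit <= 0:
        return energies
    clones = rng.choice(energies, size=deficit, replace=True)
    return np.concatenate([energies, clones])

# Toy usage: start with 10^4 electrons, lose ~20% to attachment each step, then rescale.
energies = rng.exponential(1.0, size=10_000)      # placeholder energy distribution (eV)
for step in range(5):
    survived = rng.random(energies.size) > 0.2     # stand-in for attachment losses
    energies = discrete_rescale(energies[survived], target_n=10_000)
    print(step, energies.size, energies.mean())
```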

  20. VARIANCE ESTIMATION IN DOMAIN DECOMPOSED MONTE CARLO EIGENVALUE CALCULATIONS

    SciTech Connect

    Mervin, Brenden T; Maldonado, G. Ivan; Mosher, Scott W; Evans, Thomas M; Wagner, John C

    2012-01-01

    The number of tallies performed in a given Monte Carlo calculation is limited in most modern Monte Carlo codes by the amount of memory that can be allocated on a single processor. By using domain decomposition, the calculation is now limited by the total amount of memory available on all processors, allowing for significantly more tallies to be performed. However, decomposing the problem geometry introduces significant issues with the way tally statistics are conventionally calculated. In order to deal with the issue of calculating tally variances in domain decomposed environments for the Shift hybrid Monte Carlo code, this paper presents an alternative approach for reactor scenarios in which an assumption is made that once a particle leaves a domain, it does not reenter the domain. Particles that reenter the domain are instead treated as separate independent histories. This assumption introduces a bias that inevitably leads to under-prediction of the calculated variances for tallies within a few mean free paths of the domain boundaries. However, through the use of different decomposition strategies, primarily overlapping domains, the negative effects of such an assumption can be significantly reduced to within reasonable levels.

  1. Improved diffusion coefficients generated from Monte Carlo codes

    SciTech Connect

    Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.

    2013-07-01

    Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the method of homogenization: whether to weight fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices with L2 norm error of 3.6%. This error is reduced significantly to 0.27% when weighting fine-group diffusion coefficients by the flux and applying a correction to the diffusion approximation. Noticeable tilting in reconstructed fluxes and pin powers was reduced when applying these corrections. (authors)
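
    A small sketch of the two homogenization choices discussed above, assuming the simple relation D = 1/(3*Sigma_tr) between the transport cross section and the diffusion coefficient; the group structure and all numbers are placeholders, not MC21 output.

```python
import numpy as np

def collapse(flux, sigma_tr, group_edges):
    """Collapse fine-group data to few groups with flux weighting, comparing the
    two choices discussed in the abstract (sketch; numbers are placeholders)."""
    d_fine = 1.0 / (3.0 * sigma_tr)                      # fine-group diffusion coefficients
    few_d_from_tr, few_d_direct = [], []
    for lo, hi in group_edges:
        w = flux[lo:hi]
        sig_tr_g = np.sum(w * sigma_tr[lo:hi]) / np.sum(w)    # flux-weighted transport XS
        few_d_from_tr.append(1.0 / (3.0 * sig_tr_g))          # D from collapsed Sigma_tr
        few_d_direct.append(np.sum(w * d_fine[lo:hi]) / np.sum(w))  # flux-weighted D
    return np.array(few_d_from_tr), np.array(few_d_direct)

# Toy 4-group example collapsed to 2 groups.
flux = np.array([0.1, 0.3, 0.4, 0.2])
sigma_tr = np.array([0.30, 0.25, 0.35, 0.50])
print(collapse(flux, sigma_tr, group_edges=[(0, 2), (2, 4)]))
```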

  2. Accelerating Monte Carlo power studies through parametric power estimation.

    PubMed

    Ueckert, Sebastian; Karlsson, Mats O; Hooker, Andrew C

    2016-04-01

    Estimating the power for a non-linear mixed-effects model-based analysis is challenging due to the lack of a closed form analytic expression. Often, computationally intensive Monte Carlo studies need to be employed to evaluate the power of a planned experiment. This is especially time consuming if full power versus sample size curves are to be obtained. A novel parametric power estimation (PPE) algorithm utilizing the theoretical distribution of the alternative hypothesis is presented in this work. The PPE algorithm estimates the unknown non-centrality parameter in the theoretical distribution from a limited number of Monte Carlo simulations and estimations. The estimated parameter scales linearly with study size, allowing quick generation of the full power versus study size curve. A comparison of the PPE with the classical, purely Monte Carlo-based power estimation (MCPE) algorithm for five diverse pharmacometric models showed an excellent agreement between both algorithms, with a low bias of less than 1.2 % and higher precision for the PPE. The power extrapolated from a specific study size was in a very good agreement with power curves obtained with the MCPE algorithm. PPE represents a promising approach to accelerate the power calculation for non-linear mixed effect models.
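
    A hedged sketch of the PPE idea, assuming the likelihood-ratio test statistic under the alternative follows a noncentral chi-square whose noncentrality scales linearly with study size; the simulated statistics below are placeholders for real model fits, and the moment-based noncentrality estimate is a simplification of the algorithm.

```python
import numpy as np
from scipy import stats

def ppe_power_curve(lrt_stats, n_ref, n_grid, df=1, alpha=0.05):
    """Parametric power estimation sketch: estimate the noncentrality parameter of the
    likelihood-ratio test statistic from a handful of Monte Carlo replicates at study
    size n_ref, scale it linearly with study size, and read power off the noncentral
    chi-square distribution."""
    # Mean of a noncentral chi-square is df + lambda, so a simple moment estimate is:
    lam_ref = max(np.mean(lrt_stats) - df, 0.0)
    crit = stats.chi2.ppf(1.0 - alpha, df)
    power = [stats.ncx2.sf(crit, df, lam_ref * n / n_ref) for n in n_grid]
    return np.array(power)

# Toy usage with simulated LRT statistics standing in for real model fits.
rng = np.random.default_rng(1)
lrt_stats = stats.ncx2.rvs(df=1, nc=6.0, size=30, random_state=rng)
print(ppe_power_curve(lrt_stats, n_ref=20, n_grid=[10, 20, 40, 80]))
```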

  3. Monte Carlo simulations of particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Baring, Matthew G.; Ellison, Donald C.; Jones, Frank C.

    1994-01-01

    The Fermi shock acceleration mechanism may be responsible for the production of high-energy cosmic rays in a wide variety of environments. Modeling of this phenomenon has largely focused on plane-parallel shocks, and one of the most promising techniques for its study is the Monte Carlo simulation of particle transport in shocked fluid flows. One of the principal problems in shock acceleration theory is the mechanism and efficiency of injection of particles from the thermal gas into the accelerated population. The Monte Carlo technique is ideally suited to addressing the injection problem directly, and previous applications of it to the quasi-parallel Earth bow shock led to very successful modeling of proton and heavy ion spectra, as well as other observed quantities. Recently this technique has been extended to oblique shock geometries, in which the upstream magnetic field makes a significant angle Theta_B1 to the shock normal. Spectral results from test particle Monte Carlo simulations of cosmic-ray acceleration at oblique, nonrelativistic shocks are presented. The results show that low Mach number shocks have injection efficiencies that are relatively insensitive to (though not independent of) the shock obliquity, but that there is a dramatic drop in efficiency for shocks of Mach number 30 or more as the obliquity increases above 15 deg. Cosmic-ray distributions just upstream of the shock reveal prominent bumps at energies below the thermal peak; these disappear far upstream but might be observable features close to astrophysical shocks.

  4. Progress on coupling UEDGE and Monte-Carlo simulation codes

    SciTech Connect

    Rensink, M.E.; Rognlien, T.D.

    1996-08-28

    Our objective is to develop an accurate self-consistent model for plasma and neutrals in the edge of tokamak devices such as DIII-D and ITER. The two-dimensional fluid model in the UEDGE code has been used successfully for simulating a wide range of experimental plasma conditions. However, when the neutral mean free path exceeds the gradient scale length of the background plasma, the validity of the diffusive and inertial fluid models in UEDGE is questionable. In the long mean free path regime, neutrals can be accurately and efficiently described by a Monte Carlo neutrals model. Coupling of the fluid plasma model in UEDGE with a Monte Carlo neutrals model should improve the accuracy of our edge plasma simulations. The results described here used the EIRENE Monte Carlo neutrals code, but since information is passed to and from the UEDGE plasma code via formatted text files, any similar neutrals code such as DEGAS2 or NIMBUS could, in principle, be used.

  5. Chemical accuracy from quantum Monte Carlo for the benzene dimer.

    PubMed

    Azadi, Sam; Cohen, R E

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate from coupled-cluster theory with perturbative triples at the complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  6. Estimating return period of landslide triggering by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Peres, D. J.; Cancelliere, A.

    2016-10-01

    Assessment of landslide hazard is a crucial step for landslide mitigation planning. Estimation of the return period of slope instability represents a quantitative method to map landslide triggering hazard on a catchment. The most common approach to estimate return periods consists in coupling a triggering threshold equation, derived from a hydrological and slope stability process-based model, with a rainfall intensity-duration-frequency (IDF) curve. Such a traditional approach generally neglects the effect of rainfall intensity variability within events, as well as the variability of initial conditions, which depend on antecedent rainfall. We propose a Monte Carlo approach for estimating the return period of shallow landslide triggering which makes it possible to account for both sources of variability. Synthetic hourly rainfall-landslide data generated by Monte Carlo simulations are analysed to compute return periods as the mean interarrival time of a factor of safety less than one. Applications are first conducted to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. A set of additional simulations is then performed in order to evaluate the traditional IDF-based method by comparison with the Monte Carlo one. Results show that the return period is affected significantly by the variability of both rainfall intensity within events and initial conditions, and that the traditional IDF-based approach may lead to an overestimation of the return period of landslide triggering, or, in other words, a non-conservative assessment of landslide hazard.
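
    A toy sketch of the return-period definition used above (mean interarrival time of a factor of safety below one), with a placeholder rainfall-event generator and triggering model; all parameter values are assumptions for illustration, not the Peloritani application.

```python
import numpy as np

rng = np.random.default_rng(7)

def synthetic_events(n_years, events_per_year=40):
    """Placeholder rainfall-event generator: intensity (mm/h) and duration (h)."""
    n = n_years * events_per_year
    intensity = rng.lognormal(mean=1.0, sigma=0.8, size=n)
    duration = rng.exponential(6.0, size=n)
    return intensity, duration

def factor_of_safety(intensity, duration, threshold=250.0):
    """Toy triggering model: FS drops below 1 when event rainfall depth exceeds a
    threshold. A real application would use a hydrological + slope stability model."""
    return threshold / np.maximum(intensity * duration, 1e-9)

n_years = 10_000
intensity, duration = synthetic_events(n_years)
fs = factor_of_safety(intensity, duration)
n_triggers = np.count_nonzero(fs < 1.0)
print("return period of triggering ~", n_years / max(n_triggers, 1), "years")
```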

  7. MONTE CARLO RADIATION-HYDRODYNAMICS WITH IMPLICIT METHODS

    SciTech Connect

    Roth, Nathaniel; Kasen, Daniel

    2015-03-15

    We explore the application of Monte Carlo transport methods to solving coupled radiation-hydrodynamics (RHD) problems. We use a time-dependent, frequency-dependent, three-dimensional radiation transport code that is special relativistic and includes some detailed microphysical interactions such as resonant line scattering. We couple the transport code to two different one-dimensional (non-relativistic) hydrodynamics solvers: a spherical Lagrangian scheme and a Eulerian Godunov solver. The gas–radiation energy coupling is treated implicitly, allowing us to take hydrodynamical time-steps that are much longer than the radiative cooling time. We validate the code and assess its performance using a suite of radiation hydrodynamical test problems, including ones in the radiation energy dominated regime. We also develop techniques that reduce the noise of the Monte Carlo estimated radiation force by using the spatial divergence of the radiation pressure tensor. The results suggest that Monte Carlo techniques hold promise for simulating the multi-dimensional RHD of astrophysical systems.

  8. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  9. Monte Carlo modelling of positron transport in real world applications

    NASA Astrophysics Data System (ADS)

    Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj

    2014-05-01

    Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.

  10. Use of Monte Carlo simulations in the assessment of calibration strategies-Part I: an introduction to Monte Carlo mathematics.

    PubMed

    Burrows, John

    2013-04-01

    An introduction to the use of the mathematical technique of Monte Carlo simulations to evaluate least squares regression calibration is described. Monte Carlo techniques involve the repeated sampling of data from a population that may be derived from real (experimental) data, but is more conveniently generated by a computer using a model of the analytical system and a randomization process to produce a large database. Datasets are selected from this population and fed into the calibration algorithms under test, thus providing a facile way of producing a sufficiently large number of assessments of the algorithm to enable a statistically valid appraisal of the calibration process to be made. This communication provides a description of the technique that forms the basis of the results presented in Parts II and III of this series, which follow in this issue, and also highlights the issues arising from the use of small data populations in bioanalysis.
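
    A minimal sketch of the general technique described: simulate many synthetic calibration datasets from an assumed response model plus random error, fit the regression under test, and tabulate the bias and precision of back-calculated concentrations. The response model, noise level, and calibration standards are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

conc = np.array([1.0, 2.0, 5.0, 10.0, 25.0, 50.0])   # calibration standards (placeholder)
true_slope, true_intercept, cv = 0.8, 0.05, 0.06      # assumed response model and noise

def one_assessment(test_conc=20.0):
    """Simulate one calibration run: noisy responses, unweighted least-squares fit,
    then back-calculate an 'unknown' measured with the same error model."""
    response = (true_intercept + true_slope * conc) * (1 + cv * rng.standard_normal(conc.size))
    slope, intercept = np.polyfit(conc, response, 1)
    test_response = (true_intercept + true_slope * test_conc) * (1 + cv * rng.standard_normal())
    return (test_response - intercept) / slope

estimates = np.array([one_assessment() for _ in range(10_000)])
print("bias %:", 100 * (estimates.mean() - 20.0) / 20.0,
      " CV %:", 100 * estimates.std(ddof=1) / estimates.mean())
```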

  11. Independent pixel and Monte Carlo estimates of stratocumulus albedo

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN

    1994-01-01

    Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller
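
    A toy illustration of the plane-parallel albedo bias, using a simple two-stream-style reflectance formula and a lognormal optical-depth field in place of the bounded-cascade model; it is meant only to show why the albedo of the mean optical depth exceeds the mean of the per-column albedos, not to reproduce the paper's numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

def albedo(tau, g=0.85):
    """Simple two-stream-style reflectance for a conservatively scattering layer."""
    t = (1.0 - g) * tau
    return t / (t + 2.0)

# Lognormal optical-depth field standing in for the fractal liquid water distribution.
tau = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=200_000)

plane_parallel = albedo(tau.mean())         # albedo of the mean optical depth
independent_pixel = albedo(tau).mean()      # mean of the per-column albedos
print("plane-parallel bias:", plane_parallel - independent_pixel)
```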

  12. Monte Carlo study of a Cyberknife stereotactic radiosurgery system

    SciTech Connect

    Araki, Fujio

    2006-08-15

    This study investigated small-field dosimetry for a Cyberknife stereotactic radiosurgery system using Monte Carlo simulations. The EGSnrc/BEAMnrc Monte Carlo code was used to simulate the Cyberknife treatment head, and the DOSXYZnrc code was implemented to calculate central axis depth-dose curves, off-axis dose profiles, and relative output factors for various circular collimator sizes of 5 to 60 mm. Water-to-air stopping power ratios necessary for clinical reference dosimetry of the Cyberknife system were also evaluated by Monte Carlo simulations. Additionally, a beam quality conversion factor, k{sub Q}, for the Cyberknife system was evaluated for cylindrical ion chambers with different wall material. The accuracy of the simulated beam was validated by agreement within 2% between the Monte Carlo calculated and measured central axis depth-dose curves and off-axis dose profiles. The calculated output factors were compared with those measured by a diode detector and an ion chamber in water. The diode output factors agreed within 1% with the calculated values down to a 10 mm collimator. The output factors with the ion chamber decreased rapidly for collimators below 20 mm. These results were confirmed by the comparison to those from Monte Carlo methods with voxel sizes and materials corresponding to both detectors. It was demonstrated that the discrepancy in the 5 and 7.5 mm collimators for the diode detector is due to the water nonequivalence of the silicon material, and the dose fall-off for the ion chamber is due to its large active volume against collimators below 20 mm. The calculated stopping power ratios of the 60 mm collimator from the Cyberknife system (without a flattening filter) agreed within 0.2% with those of a 10x10 cm{sup 2} field from a conventional linear accelerator with a heavy flattening filter and the incident electron energy, 6 MeV. The difference in the stopping power ratios between 5 and 60 mm collimators was within 0.5% at a 10 cm depth in

  13. Experiences with different parallel programming paradigms for Monte Carlo particle transport leads to a portable toolkit for parallel Monte Carlo

    SciTech Connect

    Martin, W.R.; Majumdar, A.; Rathkopf, J.A.; Litvin, M.

    1993-04-01

    Monte Carlo particle transport is easy to implement on massively parallel computers relative to other methods of transport simulation. This paper describes experiences of implementing a realistic demonstration Monte Carlo code on a variety of parallel architectures. Our "pool of tasks" technique, which allows reproducibility from run to run regardless of the number of processors, is discussed. We present detailed timing studies of simulations performed on the 128 processor BBN-ACI TC2000 and preliminary timing results for the 32 processor Kendall Square Research KSR-1. Given sufficient workload to distribute across many computational nodes, the BBN achieves nearly linear speedup for a large number of nodes. The KSR, with which we have had less experience, performs poorly with more than ten processors. A simple model incorporating known causes of overhead accurately predicts observed behavior. A general-purpose communication and control package to facilitate the implementation of existing Monte Carlo packages is described together with timings on the BBN. This package adds insignificantly to the computational costs of parallel simulations.
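
    A hedged sketch of how a pool-of-tasks scheme can stay reproducible regardless of processor count: each task owns a random stream keyed to its task index, so the batches of histories are identical no matter which worker executes them. This is not the authors' package; the "tally" is a placeholder.

```python
import numpy as np
from multiprocessing import Pool

N_TASKS, HISTORIES_PER_TASK = 64, 10_000

def run_task(task_id):
    """One batch of histories with its own seeded stream; the mean of an exponential
    sample is a placeholder for a real transport tally."""
    rng = np.random.default_rng([12345, task_id])   # seed keyed to the task, not the worker
    return rng.exponential(1.0, HISTORIES_PER_TASK).mean()

if __name__ == "__main__":
    for n_workers in (1, 4, 8):
        with Pool(n_workers) as pool:
            tallies = pool.map(run_task, range(N_TASKS))
        print(n_workers, "workers ->", np.mean(tallies))   # identical for any worker count
```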

  14. Experiences with different parallel programming paradigms for Monte Carlo particle transport leads to a portable toolkit for parallel Monte Carlo

    SciTech Connect

    Martin, W.R.; Majumdar, A.; Rathkopf, J.A.; Litvin, M.

    1993-04-01

    Monte Carlo particle transport is easy to implement on massively parallel computers relative to other methods of transport simulation. This paper describes experiences of implementing a realistic demonstration Monte Carlo code on a variety of parallel architectures. Our "pool of tasks" technique, which allows reproducibility from run to run regardless of the number of processors, is discussed. We present detailed timing studies of simulations performed on the 128 processor BBN-ACI TC2000 and preliminary timing results for the 32 processor Kendall Square Research KSR-1. Given sufficient workload to distribute across many computational nodes, the BBN achieves nearly linear speedup for a large number of nodes. The KSR, with which we have had less experience, performs poorly with more than ten processors. A simple model incorporating known causes of overhead accurately predicts observed behavior. A general-purpose communication and control package to facilitate the implementation of existing Monte Carlo packages is described together with timings on the BBN. This package adds insignificantly to the computational costs of parallel simulations.

  15. Coarse-grained computation for particle coagulation and sintering processes by linking Quadrature Method of Moments with Monte-Carlo

    SciTech Connect

    Zou Yu; Kavousanakis, Michail E.; Kevrekidis, Ioannis G.; Fox, Rodney O.

    2010-07-20

    The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
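
    A toy version of the equation-free idea, assuming a constant coagulation kernel so an analytic check exists: short Gillespie-style Monte Carlo bursts estimate the time derivative of the number density, which is then advanced with larger projective steps. The paper's constant-number scheme and QMOM closure are not reproduced, and only rough agreement with the analytic decay should be expected from this noisy toy.

```python
import numpy as np

rng = np.random.default_rng(5)
K, V = 1.0, 1.0          # constant coagulation kernel and volume (toy values)

def burst(n, t_burst):
    """Short Gillespie-style burst of coagulation events; returns the particle count."""
    t = 0.0
    while n > 1:
        rate = K * n * (n - 1) / (2.0 * V)
        dt = rng.exponential(1.0 / rate)
        if t + dt > t_burst:
            break
        t, n = t + dt, n - 1          # each event merges two particles into one
    return n

# Coarse projective integration of the number density M0 = n / V: a short burst
# estimates dM0/dt, then M0 is extrapolated over a larger projective step.
n, t = 500, 0.0
dt_burst, dt_proj = 0.0005, 0.001
for step in range(1, 61):
    n_after = burst(n, dt_burst)
    slope = (n_after - n) / (V * dt_burst)
    n = max(int(round(n_after + slope * V * dt_proj)), 2)
    t += dt_burst + dt_proj
    if step % 15 == 0:
        exact = 500.0 / (1.0 + 0.5 * K * 500.0 * t)   # analytic M0(t) for the constant kernel
        print(f"t={t:.4f}  M0 projective={n / V:6.1f}  analytic={exact:6.1f}")
```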

  16. Optical tomographic imaging of activation of the infant auditory cortex using perturbation Monte Carlo with anatomical a priori information

    NASA Astrophysics Data System (ADS)

    Heiskala, Juha; Kotilahti, Kalle; Lipiäinen, Lauri; Hiltunen, Petri; Grant, P. Ellen; Nissilä, Ilkka

    2007-07-01

    We have developed a perturbation Monte Carlo method for calculating forward and inverse solutions to the optical tomography imaging problem in the presence of anatomical a priori information. The method uses frequency domain data. In the present work, we consider the problem of imaging hemodynamic changes due to brain activation in the infant brain. We test finite element method and Monte Carlo based implementations using a homogeneous model with the exterior of the domain warped to match digitized points on the skin. With the perturbation Monte Carlo model, we also test a heterogeneous model based on anatomical a priori information derived from a previously recorded infant T1 magnetic resonance (MR) image. Our simulations show that the anatomical information improves the accuracy of reconstructions quite significantly even if the anatomical MR images are based on another infant. This suggests that significant benefits can be obtained by the use of generic infant brain atlas information in near-infrared spectroscopy and optical tomography studies.

  17. PROBLEM DEPENDENT DOPPLER BROADENING OF CONTINUOUS ENERGY CROSS SECTIONS IN THE KENO MONTE CARLO COMPUTER CODE

    SciTech Connect

    Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C

    2014-01-01

    For many Monte Carlo codes cross sections are generally only created at a set of predetermined temperatures. This causes an increase in error as one moves further and further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in the Scale Monte Carlo module KENO to create problem-dependent, Doppler-broadened cross sections. Currently only broadening the 1D cross sections and probability tables is addressed. The approach uses a finite difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening the S(alpha, beta) tables. With the current approach the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross section loading is negligible. Results can be compared with those obtained by using multigroup libraries, as KENO currently does interpolation on the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
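
    The probability-table interpolation rule described above can be sketched in a few lines (illustrative numbers only; real values come from the processed library): interpolate the logarithm of the cross section linearly in the square root of temperature between the two bounding library temperatures.

```python
import numpy as np

def sqrtT_loglin_interp(T, T_lo, T_hi, sigma_lo, sigma_hi):
    """Interpolate a cross section linearly in ln(sigma) versus sqrt(T)
    between two library temperatures (sketch of the rule described above)."""
    f = (np.sqrt(T) - np.sqrt(T_lo)) / (np.sqrt(T_hi) - np.sqrt(T_lo))
    return np.exp((1.0 - f) * np.log(sigma_lo) + f * np.log(sigma_hi))

# Illustrative numbers only (barns); real values come from the processed library.
print(sqrtT_loglin_interp(T=450.0, T_lo=293.6, T_hi=600.0, sigma_lo=12.0, sigma_hi=10.5))
```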

  18. Exploring fluctuations and phase equilibria in fluid mixtures via Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Denton, Alan R.; Schmidt, Michael P.

    2013-03-01

    Monte Carlo simulation provides a powerful tool for understanding and exploring thermodynamic phase equilibria in many-particle interacting systems. Among the most physically intuitive simulation methods is Gibbs ensemble Monte Carlo (GEMC), which allows direct computation of phase coexistence curves of model fluids by assigning each phase to its own simulation cell. When one or both of the phases can be modelled virtually via an analytic free energy function (Mehta and Kofke 1993 Mol. Phys. 79 39), the GEMC method takes on new pedagogical significance as an efficient means of analysing fluctuations and illuminating the statistical foundation of phase behaviour in finite systems. Here we extend this virtual GEMC method to binary fluid mixtures and demonstrate its implementation and instructional value with two applications: (1) a lattice model of simple mixtures and polymer blends and (2) a free-volume model of a complex mixture of colloids and polymers. We present algorithms for performing Monte Carlo trial moves in the virtual Gibbs ensemble, validate the method by computing fluid demixing phase diagrams, and analyse the dependence of fluctuations on system size. Our open-source simulation programs, coded in the platform-independent Java language, are suitable for use in classroom, tutorial, or computational laboratory settings.
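
    For reference, a sketch of the standard Gibbs-ensemble particle-transfer acceptance rule that the virtual GEMC variant builds on (textbook form; the Mehta-Kofke virtual-phase bookkeeping of the paper is not reproduced here).

```python
import math, random

def accept_transfer(n_from, n_to, v_from, v_to, dU, beta):
    """Standard GEMC acceptance test for moving one particle from box 'from' to box 'to'.
    dU is the total potential-energy change (removal + insertion); textbook form."""
    if n_from == 0:
        return False
    arg = (n_from * v_to) / ((n_to + 1) * v_from) * math.exp(-beta * dU)
    return random.random() < min(1.0, arg)

# Example: a favourable transfer (dU < 0) into the less crowded box is usually accepted.
print(accept_transfer(n_from=220, n_to=30, v_from=1.0, v_to=1.0, dU=-0.5, beta=1.0))
```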

  19. Monte Carlo simulation of superdiffusion and subdiffusion in macroscopically heterogeneous media

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Labolle, Eric M.; Pohlmann, Karl

    2009-10-01

    Monte Carlo simulations are developed to approximate one-dimensional superdiffusion and subdiffusion in macroscopically heterogeneous media with discontinuous or continuous transport parameters. For superdiffusion characterized by a space fractional (α-order) derivative model, one empirical reflection scheme is built to track particle trajectory across an interface with discontinuous dispersion coefficient D, where the reflection probability depends on both α and the ratio of D. Different from the superdiffusive case, anomalous diffusion described by a time fractional derivative model can be decomposed into a motion component and a hitting time process, where the discontinuity affects only the motion process, implying an efficient Monte Carlo simulation of decoupled continuous time random walks. The discontinuity of effective porosity n is also discussed, and results show the influence of the ratio of n on solute particle dynamics. In addition, for anomalous superdiffusion and subdiffusion in heterogeneous media with spatially continuous D and n, Langevin analysis reveals that the corresponding particle dynamics contain three independent stable Lévy noises scaled by D, the gradient of D, and the gradient of ln(n). A new implicit Eulerian finite difference method is also developed to solve the spatiotemporal fractional derivative models and then extensively cross verify the Lagrangian solutions. Further testing against one field example of mixed superdiffusion and subdiffusion reveals the applicability and flexibility of the novel Monte Carlo approach in simulating realistic plumes in macroscopically heterogeneous media with locally variable transport parameters.
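
    A toy of the decoupled picture mentioned above: a continuous time random walk with heavy-tailed (Pareto) waiting times and Gaussian jumps yields a mean-squared displacement that grows roughly like t to the power alpha rather than linearly. The interface reflection rules and Lévy-noise Langevin formulation of the paper are not included.

```python
import numpy as np

rng = np.random.default_rng(11)

def ctrw_positions(n_walkers, t_obs, alpha=0.7, tau0=1e-3, jump_std=1.0):
    """Decoupled CTRW: Pareto(alpha) waiting times and Gaussian jumps.
    Returns walker positions at observation time t_obs."""
    x = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        wait = tau0 * (rng.random(active.sum()) ** (-1.0 / alpha))   # Pareto waiting times
        t_new = t[active] + wait
        still = t_new <= t_obs
        idx = np.flatnonzero(active)
        x[idx[still]] += jump_std * rng.standard_normal(still.sum())  # jump only if still inside t_obs
        t[idx] = t_new
        active[idx[~still]] = False
    return x

for t_obs in (0.1, 1.0, 10.0):
    msd = np.mean(ctrw_positions(5000, t_obs) ** 2)
    print(f"t={t_obs:5.1f}  MSD={msd:8.2f}")   # grows roughly like t**alpha, not t
```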

  20. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    SciTech Connect

    Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando; Hernandez-Ortiz, Juan P.; de Pablo, Juan J.

    2015-07-27

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
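
    A one-dimensional scalar-order-parameter toy of the general strategy (finite-difference free energy plus Metropolis-style stochastic minimization); the full Landau-de Gennes tensor formalism and the specific sampling moves of the paper are not reproduced, and the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Landau free energy for a scalar order parameter phi(x) on a 1-D periodic grid:
# f = a*phi^2 + b*phi^4 + kappa*(dphi/dx)^2, discretized with finite differences.
a, b, kappa, dx = -1.0, 1.0, 0.5, 0.1
n, temperature = 200, 0.05

def free_energy(phi):
    grad = (np.roll(phi, -1) - phi) / dx          # periodic finite differences
    return np.sum(a * phi**2 + b * phi**4 + kappa * grad**2) * dx

phi = 0.1 * rng.standard_normal(n)
f_cur = free_energy(phi)
for move in range(50_000):
    i = rng.integers(n)
    trial = phi.copy()
    trial[i] += 0.1 * rng.standard_normal()
    f_trial = free_energy(trial)          # full recompute: O(n) per move, fine for a sketch
    # Metropolis-style acceptance: downhill moves always, uphill with a Boltzmann weight,
    # so the sampler can escape shallow local minima of the functional.
    if f_trial < f_cur or rng.random() < np.exp(-(f_trial - f_cur) / temperature):
        phi, f_cur = trial, f_trial
print("final free energy:", f_cur, " mean |phi|:", np.abs(phi).mean())
```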

  1. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields

    SciTech Connect

    Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P.; Pablo, Juan J. de

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  2. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    PubMed

    Armas-Pérez, Julio C; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P; de Pablo, Juan J

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  3. CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei

    2014-12-01

    We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
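
    A small Python example (the package itself is MATLAB) of the kind of one-body input such a calculation starts from: the hopping matrix of a 1-D Hubbard chain with a twisted boundary condition, diagonalized for the non-interacting energy. Details of the CPMC walkers and importance sampling are not shown.

```python
import numpy as np

def hopping_matrix(n_sites, t=1.0, twist=0.0):
    """One-body hopping matrix of a 1-D Hubbard chain with a twisted boundary:
    the wrap-around bond picks up a phase exp(i*twist); twist=0 is periodic."""
    h = np.zeros((n_sites, n_sites), dtype=complex)
    for i in range(n_sites - 1):
        h[i, i + 1] = h[i + 1, i] = -t
    h[n_sites - 1, 0] = -t * np.exp(1j * twist)
    h[0, n_sites - 1] = np.conj(h[n_sites - 1, 0])
    return h

# Non-interacting (U = 0) ground-state energy of 4 up + 4 down electrons on 8 sites.
h = hopping_matrix(8, twist=np.pi / 4)
eps = np.linalg.eigvalsh(h)
print("E0 (U=0):", 2 * eps[:4].sum().real)
```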

  4. Confirmation using Monte Carlo ground-state energies of the instability of free planar films of liquid 4He at T=0 K

    NASA Astrophysics Data System (ADS)

    Szybisz, Leszek

    1998-07-01

    The stability of free slabs of liquid 4He at T=0 K is studied by examining ground-state energies computed with Monte Carlo techniques. A stability condition derived by imposing a positive areal isothermal compressibility is applied. It is shown that Monte Carlo data clearly indicate that all finite films are unstable, supporting the findings of previous investigations based on the analysis of values obtained from self-consistent microscopic calculations.
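
    A compact statement of the type of stability criterion referred to above, with e(n) the ground-state energy per particle at areal (coverage) density n; the notation is assumed here rather than quoted from the paper.

```latex
% Spreading pressure and inverse areal isothermal compressibility of a film
% with energy per particle e(n) at areal density n:
\[
  \pi(n) = n^{2}\,\frac{de}{dn}, \qquad
  \kappa^{-1}(n) = n\,\frac{d\pi}{dn}
                 = n\left( 2n\,\frac{de}{dn} + n^{2}\,\frac{d^{2}e}{dn^{2}} \right),
\]
% so requiring a positive areal isothermal compressibility is equivalent to
% convexity of the energy per unit area:
\[
  \kappa > 0 \;\Longleftrightarrow\; \frac{d^{2}}{dn^{2}}\bigl[ n\,e(n) \bigr] > 0 .
\]
```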

  5. Multiparticle Monte Carlo Code System for Shielding and Criticality Use.

    SciTech Connect

    2015-06-01

    Version 00 COG is a modern, full-featured Monte Carlo radiation transport code that provides accurate answers to complex shielding, criticality, and activation problems.COG was written to be state-of-the-art and free of physics approximations and compromises found in earlier codes. COG is fully 3-D, uses point-wise cross sections and exact angular scattering, and allows a full range of biasing options to speed up solutions for deep penetration problems. Additionally, a criticality option is available for computing Keff for assemblies of fissile materials. ENDL or ENDFB cross section libraries may be used. COG home page: http://cog.llnl.gov. Cross section libraries are included in the package. COG can use either the LLNL ENDL-90 cross section set or the ENDFB/VI set. Analytic surfaces are used to describe geometric boundaries. Parts (volumes) are described by a method of Constructive Solid Geometry. Surface types include surfaces of up to fourth order, and pseudo-surfaces such as boxes, finite cylinders, and figures of revolution. Repeated assemblies need be defined only once. Parts are visualized in cross-section and perspective picture views. A lattice feature simplifies the specification of regular arrays of parts. Parallel processing under MPI is supported for multi-CPU systems. Source and random-walk biasing techniques may be selected to improve solution statistics. These include source angular biasing, importance weighting, particle splitting and Russian roulette, pathlength stretching, point detectors, scattered direction biasing, and forced collisions. Criticality – For a fissioning system, COG will compute Keff by transporting batches of neutrons through the system. Activation – COG can compute gamma-ray doses due to neutron-activated materials, starting with just a neutron source. Coupled Problems – COG can solve coupled problems involving neutrons, photons, and electrons. COG 11.1 is an updated version of COG11.1 BETA 2 (RSICC C00777MNYCP02). New

  6. Phonon transport analysis of semiconductor nanocomposites using monte carlo simulations

    NASA Astrophysics Data System (ADS)

    Malladi, Mayank

    Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of thermoelectric devices, which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer which accounts for both ballistic and diffusive transport phenomena. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. A gray medium (frequency-independent phonons) is often assumed in the numerical solution of the BTE using conventional methods such as finite volume and discrete ordinates methods. This thesis solves the BTE using the Monte Carlo (MC) simulation technique which is more convenient and efficient when non-gray media (frequency dependent phonons) is considered. In the MC simulation, phonons are displaced inside the computational domain under the various boundary conditions and scattering effects. In this work, under the relaxation time approximation, thermal transport in the nanocomposites is

  7. Three-state, steady-state Ising systems: Monte Carlo and Bragg-Williams treatments

    PubMed Central

    Hill, Terrell L.; Chen, Yi-Der

    1981-01-01

    In two earlier papers, the steady-state critical and phase-transition properties of a lattice of three-state enzyme molecules were studied by using the “closed” Bragg-Williams (BW), or mean field, approximation. The “open” BW and Monte Carlo methods are applied to the same problem in this paper by using finite lattices. The open BW treatment provides a way of locating the cut across a van der Waals type of loop encountered in a phase transition in the closed BW system. Thermodynamic-like methods cannot be used for this purpose as they can with two-state, steady-state systems. PMID:16592956

  8. High-precision Monte Carlo study of directed percolation in (d+1) dimensions.

    PubMed

    Wang, Junfeng; Zhou, Zongzheng; Liu, Qingquan; Garoni, Timothy M; Deng, Youjin

    2013-10-01

    We present a Monte Carlo study of the bond- and site-directed (oriented) percolation models in (d+1) dimensions on simple-cubic and body-centered-cubic lattices, with 2 ≤ d ≤ 7. A dimensionless ratio is defined, and an analysis of its finite-size scaling produces improved estimates of percolation thresholds. We also report improved estimates for the standard critical exponents. In addition, we study the probability distributions of the number of wet sites and radius of gyration, for 1 ≤ d ≤ 7.

  9. Monte Carlo renormalization-group investigation of the two-dimensional O(4) sigma model

    NASA Technical Reports Server (NTRS)

    Heller, Urs M.

    1988-01-01

    An improved Monte Carlo renormalization-group method is used to determine the beta function of the two-dimensional O(4) sigma model. While for (inverse) couplings beta greater than about 2.2, agreement is obtained with asymptotic scaling according to asymptotic freedom, deviations from it are obtained at smaller couplings. They are, however, consistent with the behavior of the correlation length, indicating 'scaling' according to the full beta function. These results contradict recent claims that the model has a critical point at finite coupling.

  10. Characterization of dose in stereotactic body radiation therapy of lung lesions via Monte Carlo calculation

    NASA Astrophysics Data System (ADS)

    Rassiah, Premavathy

    Stereotactic Body Radiation Therapy is a new form of treatment where hypofractionated (i.e., large dose fractions), conformal doses are delivered to small extracranial target volumes. This technique has proven to be especially effective for treating lung lesions. The inability of most commercially available algorithms/treatment planning systems to accurately account for electron transport in regions of heterogeneous electron density and tissue interfaces makes prediction of accurate doses especially challenging for such regions. Monte Carlo, which is a model-based calculation algorithm, has proven to be extremely accurate for dose calculation in both homogeneous and inhomogeneous environments. This study attempts to accurately characterize the doses received by static targets located in the lung, as well as critical structures (contra- and ipsi-lateral lung, major airways, esophagus and spinal cord) for the serial tomotherapeutic intensity-modulated delivery method used for stereotactic body radiation therapy at the Cancer Therapy and Research Center. PEREGRINE (v 1.6, NOMOS) Monte Carlo doses were compared to the Finite Sized Pencil Beam/Effective Path Length predicted values from the CORVUS 5.0 planning system. The Monte Carlo based treatment planning system was first validated in both homogeneous and inhomogeneous environments. 77 stereotactic body radiation therapy lung patients previously treated with doses calculated using the Finite Sized Pencil Beam/Effective Path Length algorithm were then retrieved and recalculated with Monte Carlo. All 77 patients' plans were also recalculated without inhomogeneity correction in an attempt to counteract the known overestimation of dose at the periphery of the target by EPL with increased attenuation. The critical structures were delineated in order to standardize the contouring. Both the ipsi-lateral and contra-lateral lungs were contoured. The major airways were contoured from the apex of the lungs (trachea) to 4 cm below

  11. CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc

    SciTech Connect

    Buckley, Lesley A.; Kawrakow, I.; Rogers, D.W.O.

    2004-12-01

    CSnrc, a new user-code for the EGSnrc Monte Carlo system is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code CAVRZnrc and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor P{sub cel} in high-energy photon and electron beams. Current dosimetry protocols base the value of P{sub cel} on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on P{sub cel}, much lower than those previously published. The current values of P{sub cel} compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.
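
    A toy illustration of why correlated sampling pays off for dose ratios of similar geometries: reusing the same random histories makes the two estimates strongly correlated, so the covariance term removes most of the variance of the ratio. This is not EGSnrc/CSnrc code; the "responses" are placeholder functions of common random numbers.

```python
import numpy as np

rng = np.random.default_rng(4)
n_hist = 200_000

# Stand-in "dose per history" responses of two nearly identical geometries,
# expressed as functions of the same underlying random numbers u.
u = rng.random(n_hist)
d1 = np.exp(-3.0 * u)            # geometry 1 (placeholder response)
d2 = np.exp(-3.1 * u)            # geometry 2, a small perturbation of geometry 1

ratio = d1.mean() / d2.mean()

# Correlated case: the ratio uncertainty keeps the (large, positive) covariance term.
cov = np.cov(d1, d2) / n_hist
var_corr = ratio**2 * (cov[0, 0] / d1.mean()**2 + cov[1, 1] / d2.mean()**2
                       - 2 * cov[0, 1] / (d1.mean() * d2.mean()))

# Uncorrelated case: independent histories for geometry 2, so the covariance is lost.
d2_indep = np.exp(-3.1 * rng.random(n_hist))
var_indep = ratio**2 * (d1.var() / n_hist / d1.mean()**2
                        + d2_indep.var() / n_hist / d2_indep.mean()**2)

print("ratio:", ratio, " sigma correlated:", np.sqrt(var_corr),
      " sigma independent:", np.sqrt(var_indep))
```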

  12. Anisotropic seismic inversion using a multigrid Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Mewes, Armin; Kulessa, Bernd; McKinley, John D.; Binley, Andrew M.

    2010-10-01

    We propose a new approach for the inversion of anisotropic P-wave data based on Monte Carlo methods combined with a multigrid approach. Simulated annealing facilitates objective minimization of the functional characterizing the misfit between observed and predicted traveltimes, as controlled by the Thomsen anisotropy parameters (ɛ, δ). Cycling between finer and coarser grids enhances the computational efficiency of the inversion process, thus accelerating the convergence of the solution while acting as a regularization technique of the inverse problem. Multigrid perturbation samples the probability density function without the requirements for the user to adjust tuning parameters. This increases the probability that the preferred global, rather than a poor local, minimum is attained. Undertaking multigrid refinement and Monte Carlo search in parallel produces more robust convergence than does the initially more intuitive approach of completing them sequentially. We demonstrate the usefulness of the new multigrid Monte Carlo (MGMC) scheme by applying it to (a) synthetic, noise-contaminated data reflecting an isotropic subsurface of constant slowness, horizontally layered geologic media and discrete subsurface anomalies; and (b) a crosshole seismic data set acquired by previous authors at the Reskajeage test site in Cornwall, UK. Inverted distributions of slowness (s) and the Thomsen anisotropy parameters (ɛ, δ) compare favourably with those obtained previously using a popular matrix-based method. Reconstruction of the Thomsen ɛ parameter is particularly robust compared to that of slowness and the Thomsen δ parameter, even in the face of complex subsurface anomalies. The Thomsen ɛ and δ parameters have enhanced sensitivities to bulk-fabric and fracture-based anisotropies in the TI medium at Reskajeage. Because reconstruction of slowness (s) is intimately linked to that of ɛ and δ in the MGMC scheme, inverted images of phase velocity reflect the integrated

  13. Properties of reactive oxygen species by quantum Monte Carlo.

    PubMed

    Zen, Andrea; Trout, Bernhardt L; Guidoni, Leonardo

    2014-07-07

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulties to correctly describe the statical and dynamical correlation effects in presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the statical and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N(3) - N(4), where N is the number of electrons. This work is therefore opening the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.

  14. Properties of reactive oxygen species by quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo

    2014-07-01

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulties to correctly describe the statical and dynamical correlation effects in presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the statical and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N3 - N4, where N is the number of electrons. This work is therefore opening the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.

  15. Monte Carlo simulation of light propagation in the adult brain

    NASA Astrophysics Data System (ADS)

    Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter

    2004-06-01

    When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D-magnetic resonance imaging (MRI) exhibiting a voxel resolution of 0.8 x 0.8 x 0.8 mm3 for three different pairs of T1/T2 values each. The MRI data were used to define the material light absorption and dispersion coefficient for each voxel. The resulting spatial matrix was applied in the Monte Carlo Simulation to determine the light propagation in the cerebral cortex and overlaying structures. The accuracy of the Monte Carlo Simulation was furthermore increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15 which is close to the value of 5.9 found in the literature for a distance of 4.5cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extracerebral contamination are included.

  16. Quantum Monte Carlo: Faster, More Reliable, And More Accurate

    NASA Astrophysics Data System (ADS)

    Anderson, Amos Gerald

    2010-06-01

    The Schrodinger Equation has been available for about 83 years, but today, we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we're held back by a lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we've seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing for quantity instead of quality of processors, they have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known application of a graphics card to computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, and yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two particle correlation functions, designed with both flexibility and simplicity considerations, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation, by manipulating configuration weights, thus facilitating efficient and robust calculations. Our

  17. Properties of reactive oxygen species by quantum Monte Carlo

    SciTech Connect

    Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo

    2014-07-07

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions play important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distributions, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice-regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, using compact but fully optimised basis sets and with a computational cost which scales as N{sup 3} − N{sup 4}, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species from first principles.

  18. Monte Carlo Estimate to Improve Photon Energy Spectrum Reconstruction

    NASA Astrophysics Data System (ADS)

    Sawchuk, S.

    Improvements to planning radiation treatment for cancer patients and quality control of medical linear accelerators (linacs) can be achieved with explicit knowledge of the photon energy spectrum. Monte Carlo (MC) simulations of linac treatment heads and experimental attenuation analysis are among the most popular ways of obtaining these spectra. Attenuation methods, which combine measurements under narrow beam geometry and the associated calculation techniques to reconstruct the spectrum from the acquired data, are very practical in a clinical setting and can also serve to validate MC simulations. A novel reconstruction method [1], which has been modified [2], utilizes Simpson's rule (SR) to approximate and discretize (1)
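
    The abstract is truncated before the reconstruction details, so the sketch below only illustrates the quadrature ingredient it names: composite Simpson's rule applied to a generic narrow-beam transmission integral T(t) = ∫ φ(E) exp(-μ(E) t) dE. The toy spectrum φ(E) and attenuation coefficient μ(E) are assumptions, not the method of [1, 2].

```python
# Generic sketch of the quadrature step only (not the published reconstruction method).
import numpy as np

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals on [a, b]."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum())

phi = lambda E: E * np.exp(-E / 2.0)     # toy bremsstrahlung-like spectrum (1/MeV)
mu = lambda E: 0.2 / np.sqrt(E)          # toy attenuation coefficient (1/cm)

def transmission(t_cm):
    """Relative transmission through an attenuator of thickness t_cm."""
    num = composite_simpson(lambda E: phi(E) * np.exp(-mu(E) * t_cm), 0.1, 6.0, 200)
    den = composite_simpson(phi, 0.1, 6.0, 200)
    return num / den

for t in (0.0, 1.0, 5.0, 10.0):
    print(f"attenuator thickness {t:4.1f} cm -> relative transmission {transmission(t):.3f}")
```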

  19. Monte-Carlo histories of refractory interstellar dust

    NASA Technical Reports Server (NTRS)

    Clayton, D. D.; Liffman, K.

    1988-01-01

    Monte Carlo histories of 6 x 10^6 individual dust particles injected uniformly from stars into the interstellar medium during a 6 x 10^9 yr history are calculated. The particles are given a two-phase internal structure of successive thermal condensates, and are distributed in initial radius as 1/a^3 for values of a between 0.01 and 0.1 micron. The evolution of this system illustrates the distinction between several different lifetimes for interstellar dust. Most particles are destroyed, but some grow in size. Several important consequences for interstellar dust are described.
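
    As a side note, the stated initial size distribution can be sampled directly by inverse transform; the short sketch below does only that step (the condensation and destruction physics of the histories themselves is not reproduced).

```python
# Inverse-transform sampling of p(a) ∝ a^{-3} on [0.01, 0.1] micron (illustrative only).
import numpy as np

rng = np.random.default_rng(6)
a_min, a_max = 0.01, 0.1          # micron

def sample_radii(n):
    u = rng.random(n)
    # CDF of a^{-3}: F(a) = (a_min^{-2} - a^{-2}) / (a_min^{-2} - a_max^{-2}); invert it
    inv2 = a_min**-2 - u * (a_min**-2 - a_max**-2)
    return inv2**-0.5

a = sample_radii(1_000_000)
print("fraction of grains below 0.02 micron:", np.mean(a < 0.02))
# Analytic check: F(0.02) = (10000 - 2500) / (10000 - 100) ≈ 0.758
```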

  20. Novel extrapolation method in the Monte Carlo shell model

    SciTech Connect

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2010-12-15

    We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of {sup 56}Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g{sub 9/2}-shell calculation of {sup 64}Ge.

  1. Morphological evolution of growing crystals - A Monte Carlo simulation

    NASA Technical Reports Server (NTRS)

    Xiao, Rong-Fu; Alexander, J. Iwan D.; Rosenberger, Franz

    1988-01-01

    The combined effects of nutrient diffusion and surface kinetics on the crystal morphology were investigated using a Monte Carlo model to simulate the evolving morphology of a crystal growing from a two-component gaseous nutrient phase. The model combines nutrient diffusion, based on a modified diffusion-limited aggregation process, with anisotropic surface-attachment kinetics and surface diffusion. A variety of conditions, ranging from kinetic-controlled to diffusion-controlled growth, were examined. Successive transitions from compact faceted (dominant surface kinetics) to open dendritic morphologies (dominant volume diffusion) were obtained.

  2. Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates

    SciTech Connect

    Perfetti, Christopher M.; Rearden, Bradley T.

    2015-01-01

    This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.

  3. More about Zener drag studies with Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Di Prinzio, Carlos L.; Druetta, Esteban; Nasello, Olga Beatriz

    2013-03-01

    Grain growth (GG) processes in the presence of stationary second-phase particles have been widely studied, but the results found are inconsistent. We present new GG simulations in two- and three-dimensional (2D and 3D) polycrystalline samples with stationary second-phase particles, using the Monte Carlo technique. Simulations using values of particle concentration greater than 15% and particle radii different from 1 or 3 are performed, thus covering a range of particle radii and concentrations not previously studied. It is shown that only the results for 3D samples follow Zener's law.

  4. Current status of the PSG Monte Carlo neutron transport code

    SciTech Connect

    Leppaenen, J.

    2006-07-01

    PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)

  5. Monte Carlo simulations of charge transport in heterogeneous organic semiconductors

    NASA Astrophysics Data System (ADS)

    Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta

    2015-03-01

    The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.

  6. Continuous-Estimator Representation for Monte Carlo Criticality Diagnostics

    SciTech Connect

    Kiedrowski, Brian C.; Brown, Forrest B.

    2012-06-18

    An alternate means of computing diagnostics for Monte Carlo criticality calculations is proposed. Overlapping spherical regions or estimators are placed covering the fissile material with a minimum center-to-center separation of the 'fission distance', which is defined herein, and a radius that is some multiple thereof. Fission neutron production is recorded based upon a weighted average of proximities to centers for all the spherical estimators. These scores are used to compute the Shannon entropy, which is shown to reproduce, to within an additive constant, the value determined from a well-placed, user-defined mesh. The spherical estimators are also used to assess statistical coverage.
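
    The diagnostic quantity in this record is the Shannon entropy of the fission-source distribution. A minimal sketch of that calculation, with placeholder per-estimator scores standing in for tallied fission neutron production, is:

```python
# Shannon entropy of a binned fission-source distribution (the convergence diagnostic).
import numpy as np

def shannon_entropy(scores):
    """H = -sum p_i log2 p_i over estimators/mesh cells with nonzero score."""
    scores = np.asarray(scores, dtype=float)
    p = scores / scores.sum()
    p = p[p > 0.0]
    return -np.sum(p * np.log2(p))

# Fission-source scores per spherical estimator for two cycles (assumed numbers).
cycle_10 = [120.0, 95.0, 140.0, 80.0, 60.0, 30.0]
cycle_200 = [105.0, 102.0, 98.0, 101.0, 97.0, 99.0]

print(f"H(cycle 10)  = {shannon_entropy(cycle_10):.4f} bits")
print(f"H(cycle 200) = {shannon_entropy(cycle_200):.4f} bits")
# As the source converges the entropy settles to a stationary value; a drifting H
# signals that more inactive cycles are needed.
```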

  7. 3D Monte Carlo radiation transfer modelling of photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry

    2015-06-01

    The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented where both daylight activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further strengthens the usage of daylight as a potential light source for PDT where effective treatment depths of about 2 mm can be achieved.

  8. PREFACE: First European Workshop on Monte Carlo Treatment Planning

    NASA Astrophysics Data System (ADS)

    Reynaert, Nick

    2007-07-01

    The "First European Workshop on Monte Carlo treatment planning", was an initiative of the European working group on Monte Carlo treatment planning (EWG-MCTP). It was organised at Ghent University (Belgium) on 22-25October 2006. The meeting was very successful and was attended by 150 participants. The impressive list of invited speakers and the scientific contributions (posters and oral presentations) have led to a very interesting program, that was well appreciated by all attendants. In addition, the presence of seven vendors of commercial MCTP software systems provided serious added value to the workshop. For each vendor, a representative has given a presentation in a dedicated session, explaining the current status of their system. It is clear that, for "traditional" radiotherapy applications (using photon or electron beams), Monte Carlo dose calculations have become the state of the art, and are being introduced into almost all commercial treatment planning systems. Invited lectures illustrated that scientific challenges are currently associated with 4D applications (e.g. respiratory motion) and the introduction of MC dose calculations in inverse planning. But it was striking that the Monte Carlo technique is also becoming very important in more novel treatment modalities such as BNCT, hadron therapy, stereotactic radiosurgery, Tomotherapy, etc. This emphasizes the continuous growing interest in MCTP. The people who attended the dosimetry session will certainly remember the high level discussion on the determination of correction factors for different ion chambers, used in small fields. The following proceedings will certainly confirm the high scientific level of the meeting. I would like to thank the members of the local organizing committee for all the hard work done before, during and after this meeting. The organisation of such an event is not a trivial task and it would not have been possible without the help of all my colleagues. I would also like to thank

  9. Dynamic Load Balancing of Parallel Monte Carlo Transport Calculations

    SciTech Connect

    O'Brien, M; Taylor, J; Procassini, R

    2004-12-22

    The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines, each cycle, whether dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations.
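
    A toy version of the per-cycle decision (not the production algorithm) is to rebalance only when the predicted reduction in cycle time exceeds an assumed particle-migration cost; all costs and particle counts below are illustrative.

```python
# Toy per-cycle load-balance decision under an assumed cost model.
def should_rebalance(particles_per_rank, cost_per_particle, migration_cost_per_particle):
    n_ranks = len(particles_per_rank)
    total = sum(particles_per_rank)
    balanced = total / n_ranks
    # the cycle time is set by the most heavily loaded rank
    current_time = max(particles_per_rank) * cost_per_particle
    balanced_time = balanced * cost_per_particle
    # particles that would have to migrate to even out the load
    moved = sum(max(0.0, n - balanced) for n in particles_per_rank)
    return (current_time - balanced_time) > moved * migration_cost_per_particle

loads = [250_000, 40_000, 35_000, 30_000]     # particles per rank (assumed)
print(should_rebalance(loads, cost_per_particle=2e-6, migration_cost_per_particle=1e-7))
```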

  10. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

  11. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.

  12. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

    MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  13. Implicit Monte Carlo Radiation Transport Simulations of Four Test Problems

    SciTech Connect

    Gentile, N

    2007-08-01

    Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.

  14. Communication: Water on hexagonal boron nitride from diffusion Monte Carlo

    SciTech Connect

    Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos; Alfè, Dario; Lilienfeld, O. Anatole von

    2015-05-14

    Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.

  15. Cluster Monte Carlo simulations of the nematic-isotropic transition

    NASA Astrophysics Data System (ADS)

    Priezjev, N. V.; Pelcovits, Robert A.

    2001-06-01

    We report the results of simulations of the three-dimensional Lebwohl-Lasher model of the nematic-isotropic transition using a single cluster Monte Carlo algorithm. The algorithm, first introduced by Kunz and Zumbach to study two-dimensional nematics, is a modification of the Wolff algorithm for spin systems, and greatly reduces critical slowing down. We calculate the free energy in the neighborhood of the transition for systems up to linear size 70. We find a double well structure with a barrier that grows with increasing system size. We thus obtain an upper estimate of the value of the transition temperature in the thermodynamic limit.

  16. Monte Carlo study of neutrino acceleration in supernova shocks

    NASA Technical Reports Server (NTRS)

    Kazanas, D.; Ellison, D. C.

    1981-01-01

    The first order Fermi acceleration mechanism of cosmic rays in shocks may be at work for neutrinos in supernova shocks when the latter are at densities greater than 10^13 g/cm^3, at which the core material is opaque to neutrinos. A Monte Carlo approach to study this effect is employed, and the emerging neutrino power law spectra are presented. The increased energy acquired by the neutrinos may facilitate their detection in supernova explosions and provide information about the physics of collapse.

  17. Markov chain Monte Carlo method without detailed balance.

    PubMed

    Suwa, Hidemaro; Todo, Synge

    2010-09-17

    We present a specific algorithm that generally satisfies the balance condition without imposing detailed balance in Markov chain Monte Carlo. In our algorithm, the average rejection rate is minimized, and even reduced to zero in many relevant cases. The absence of detailed balance also introduces a net stochastic flow in configuration space, which further boosts the convergence. We demonstrate that the autocorrelation time of the Potts model becomes more than 6 times shorter than that of the conventional Metropolis algorithm. Based on the same concept, a bounce-free worm algorithm for generic quantum spin models is formulated as well.
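
    For orientation, the conventional Metropolis update for the q-state Potts model, which is the baseline this rejection-minimizing method is benchmarked against, looks like the sketch below (lattice size, q and temperature are illustrative); the Suwa-Todo scheme replaces the accept/reject step with a weight-allocation rule that is not reproduced here.

```python
# Reference baseline only: Metropolis sweeps for the 2D q-state Potts model.
import numpy as np

rng = np.random.default_rng(1)
L, q, beta = 16, 4, 1.0
spins = rng.integers(q, size=(L, L))

def metropolis_sweep(spins):
    for _ in range(spins.size):
        i, j = rng.integers(L, size=2)
        new = rng.integers(q)
        neighbors = [spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                     spins[i, (j + 1) % L], spins[i, (j - 1) % L]]
        # Potts energy: -J per pair of equal neighbors (J = 1), so
        # dE = (#neighbors equal to old spin) - (#neighbors equal to proposed spin)
        dE = sum(n == spins[i, j] for n in neighbors) - sum(n == new for n in neighbors)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = new

for sweep in range(100):
    metropolis_sweep(spins)
print("fraction of sites in state 0 after 100 sweeps:", np.mean(spins == 0))
```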

  18. Exploring mass perception with Markov chain Monte Carlo.

    PubMed

    Cohen, Andrew L; Ross, Michael G

    2009-12-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal interparticipant differences and a qualitative distinction between the perception of 1:1 and 1:2 ratios. The results strongly suggest that participants' perceptions of 1:1 collisions are described by simple heuristics. The evidence for 1:2 collisions favors heuristic perception models that are sensitive to the sign but not the magnitude of perceived mass differences.

  19. Multidimensional master equation and its Monte-Carlo simulation.

    PubMed

    Pang, Juan; Bai, Zhan-Wu; Bao, Jing-Dong

    2013-02-28

    We derive an integral form of the multidimensional master equation for a Markovian process, in which the transition function is obtained in terms of a set of discrete Langevin equations. The solution of the master equation, namely the probability density function, is calculated by using the Monte-Carlo composite sampling method. In comparison with the usual Langevin-trajectory simulation, the present approach effectively decreases the coarse-graining error. We apply the master equation to investigate the time-dependent barrier escape rate of a particle from a two-dimensional metastable potential and show the advantage of this approach in the calculation of quantities that depend on the probability density function.

  20. Parallelized quantum Monte Carlo algorithm with nonlocal worm updates.

    PubMed

    Masaki-Kato, Akiko; Suzuki, Takafumi; Harada, Kenji; Todo, Synge; Kawashima, Naoki

    2014-04-11

    Based on the worm algorithm in the path-integral representation, we propose a general quantum Monte Carlo algorithm suitable for parallelizing on a distributed-memory computer by domain decomposition. Of particular importance is its application to large lattice systems of bosons and spins. A large number of worms are introduced and their population is controlled by a fictitious transverse field. For a benchmark, we study the size dependence of the Bose-condensation order parameter of the hard-core Bose-Hubbard model with L×L×βt=10240×10240×16, using 3200 computing cores, which shows good parallelization efficiency.

  1. Communication: Variation after response in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Neuscamman, Eric

    2016-08-01

    We present a new method for modeling electronically excited states that overcomes a key failing of linear response theory by allowing the underlying ground state ansatz to relax in the presence of an excitation. The method is variational, has a cost similar to ground state variational Monte Carlo, and admits both open and periodic boundary conditions. We present preliminary numerical results showing that, when paired with the Jastrow antisymmetric geminal power ansatz, the variation-after-response formalism delivers accuracies for valence and charge transfer single excitations on par with equation of motion coupled cluster, while surpassing coupled cluster's accuracy for excitations with significant doubly excited character.

  2. Active neutron multiplicity analysis and Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Krick, M. S.; Ensslin, N.; Langner, D. G.; Miller, M. C.; Siebelist, R.; Stewart, J. E.; Ceo, R. N.; May, P. K.; Collins, L. L., Jr.

    Active neutron multiplicity measurements of high-enrichment uranium metal and oxide samples have been made at Los Alamos and Y-12. The data from the measurements of standards at Los Alamos were analyzed to obtain values for neutron multiplication and source-sample coupling. These results are compared to equivalent results obtained from Monte Carlo calculations. An approximate relationship between coupling and multiplication is derived and used to correct doubles rates for multiplication and coupling. The utility of singles counting for uranium samples is also examined.

  3. Non-Boltzmann Ensembles and Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Murthy, K. P. N.

    2016-10-01

    Boltzmann sampling based on the Metropolis algorithm has been extensively used for simulating a canonical ensemble and for calculating macroscopic properties of a closed system at desired temperatures. An estimate of a mechanical property, like energy, of an equilibrium system is made by averaging over a large number of microstates generated by Boltzmann Monte Carlo methods. This is possible because we can assign a numerical value for energy to each microstate. However, a thermal property like entropy is not easily accessible to these methods. The reason is simple: we cannot assign a numerical value for entropy to a microstate. Entropy is not a property associated with any single microstate. It is a collective property of all the microstates. Toward calculating entropy and other thermal properties, a non-Boltzmann Monte Carlo technique called umbrella sampling was proposed some forty years ago. Umbrella sampling has since undergone several metamorphoses and we now have multicanonical Monte Carlo, entropic sampling, flat histogram methods, the Wang-Landau algorithm, etc. This class of methods generates non-Boltzmann ensembles, which are unphysical. However, physical quantities can be calculated as follows. First un-weight a microstate of the entropic ensemble; then re-weight it to the desired physical ensemble. Carry out a weighted average over the entropic ensemble to estimate physical quantities. In this talk I shall tell you of the most recent non-Boltzmann Monte Carlo method and show how to calculate free energy for a few systems. We first consider estimation of free energy as a function of energy at different temperatures to characterize the phase transition in a hairpin DNA in the presence of an unzipping force. Next we consider free energy as a function of the order parameter, and to this end we estimate the density of states g(E, M) as a function of both energy E and order parameter M. This is carried out in two stages. We estimate g(E) in the first stage. Employing g
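
    The un-weight/re-weight step described above reduces to a one-line estimator once the density of states is known. A minimal sketch with a toy g(E) and uniformly sampled energies (mimicking a converged flat-histogram run) is:

```python
# Re-weighting flat-histogram (entropic) samples to the canonical ensemble; toy inputs.
import numpy as np

rng = np.random.default_rng(2)
energies = np.arange(0, 21)                  # discrete energy levels (toy), also array indices
log_g = np.log(1.0 + energies**2)            # toy log density of states

# A converged entropic-sampling run visits all energies roughly uniformly:
samples = rng.choice(energies, size=100_000)

def canonical_average(observable, beta):
    """Un-weight by 1/g(E), re-weight by exp(-beta*E), then average."""
    E = samples
    logw = log_g[E] - beta * E
    w = np.exp(logw - logw.max())            # stabilize against overflow
    return np.sum(observable(E) * w) / np.sum(w)

for beta in (0.1, 0.5, 1.0):
    print(f"beta = {beta:3.1f}: <E> ≈ {canonical_average(lambda E: E, beta):.3f}")
```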

  4. Graphics Processing Unit Accelerated Hirsch-Fye Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Moore, Conrad; Abu Asal, Sameer; Rajagoplan, Kaushik; Poliakoff, David; Caprino, Joseph; Tomko, Karen; Thakur, Bhupender; Yang, Shuxiang; Moreno, Juana; Jarrell, Mark

    2012-02-01

    In Dynamical Mean Field Theory and its cluster extensions, such as the Dynamic Cluster Algorithm, the bottleneck of the algorithm is solving the self-consistency equations with an impurity solver. Hirsch-Fye Quantum Monte Carlo is one of the most commonly used impurity and cluster solvers. This work implements optimizations of the algorithm, such as enabling large data re-use, suitable for the Graphics Processing Unit (GPU) architecture. The GPU's sheer number of concurrent parallel computations and large bandwidth to many shared memories take advantage of the inherent parallelism in the Green function update and measurement routines, and can substantially improve the efficiency of the Hirsch-Fye impurity solver.

  5. MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT

    SciTech Connect

    J. S. HENDRICK; G. W. MCKINNEY; L. J. COX

    2000-01-01

    The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNPW which is a general purpose, 3-dimensional, time-dependent, continuous-energy Monte Carlo fully-coupled N-Particle transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.

  6. Application of Monte Carlo methods in tomotherapy and radiation biophysics

    NASA Astrophysics Data System (ADS)

    Hsiao, Ya-Yun

    Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The percentage depth dose and beam profiles computed using the source model agree within 2% with measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published

  7. Monte Carlo simulation of vibrational relaxation in nitrogen

    NASA Technical Reports Server (NTRS)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1990-01-01

    Monte Carlo simulation of nonequilibrium vibrational relaxation of (rotationless) N2 using transition probabilities from an extended SSH theory is presented. For the range of temperatures considered, 4000-8000 K, the vibrational levels were found to be reasonably close to an equilibrium distribution at an average vibrational temperature based on the vibrational energy of the gas. As a result, they do not show any statistically significant evidence of the bottleneck observed in earlier studies of N2. Based on this finding, it appears that, for the temperature range considered, dissociation commences after all vibrational levels equilibrate at the translational temperature.

  8. Analysis of real-time networks with monte carlo methods

    NASA Astrophysics Data System (ADS)

    Mauclair, C.; Durrieu, G.

    2013-12-01

    Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to make the best use of them and to lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet being sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations because the situations that lead to worst-case scenarios are scarce. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.

  9. Monte Carlo simulation of photon scattering in biological tissue models.

    PubMed

    Kumar, D; Chacko, S; Singh, M

    1999-10-01

    Monte Carlo simulation of photon scattering, with and without abnormal tissue placed at various locations in rectangular, semi-circular and semi-elliptical tissue models, has been carried out. The absorption coefficient of the tissue considered abnormal is high and its scattering coefficient low compared to those of the control tissue. The placement of the abnormality at various locations within the models affects the transmission and surface emission of photons at various locations. The scattered photons originating from deeper layers make the maximum contribution at farther distances from the beam entry point. The contribution of various layers to photon scattering provides valuable data on the variability of internal composition.

  10. Error propagation in first-principles kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Döpking, Sandra; Matera, Sebastian

    2017-04-01

    First-principles kinetic Monte Carlo models allow for the modeling of catalytic surfaces with predictive quality. This comes at the price of non-negligible errors induced by the underlying approximate density functional calculation. Using the example of CO oxidation on RuO2(110), we demonstrate a novel, efficient approach to global sensitivity analysis, with which we address the error propagation in these multiscale models. We find that we can still derive the most important atomistic factors for reactivity, even though the errors in the simulation results are sizable. The presented approach might also be applied in hierarchical model construction or computational catalyst screening.

  11. Positronic molecule calculations using Monte Carlo configuration interaction

    NASA Astrophysics Data System (ADS)

    Coe, Jeremy P.; Paterson, Martin J.

    2016-02-01

    We modify the Monte Carlo configuration interaction procedure to model atoms and molecules combined with a positron. We test this method with standard quantum chemistry basis sets on a number of positronic systems and compare results with the literature and full configuration interaction when appropriate. We consider positronium hydride, positronium hydroxide, lithium positride and a positron interacting with lithium, magnesium or lithium hydride. We demonstrate that we can capture much of the full configuration interaction results, but often require less than 10% of the configurations of these multireference wavefunctions. The effect of the number of frozen orbitals is also discussed.

  12. Studying the information content of TMDs using Monte Carlo generators

    SciTech Connect

    Avakian, H.; Matevosyan, H.; Pasquini, B.; Schweitzer, P.

    2015-02-05

    Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this topical review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications.

  13. Sequential Monte Carlo hydraulic state estimation of an irrigation canal

    NASA Astrophysics Data System (ADS)

    Sau, Jacques; Malaterre, Pierre-Olivier; Baume, Jean-Pierre

    2010-04-01

    The estimation in real time of the hydraulic state of irrigation canals is becoming one of the major concerns of network managers. With this end in view, this Note presents a new approach based on the combination of a numerical solution of the open channel Saint-Venant PDE with a sequential Monte Carlo state-space estimation. We shall show that discharges and elevations along the canal are successfully estimated, and also that, concurrently, identification of model parameters, such as the Manning-Strickler friction coefficient, can be performed.
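
    The sequential Monte Carlo machinery referred to here is, at its core, a bootstrap particle filter. The sketch below shows the predict/weight/resample cycle on a scalar toy model standing in for the Saint-Venant hydraulic state; all noise levels and particle counts are assumptions.

```python
# Generic bootstrap particle filter on a scalar toy state (not the hydraulic model).
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_steps = 2000, 50
process_std, obs_std = 0.3, 0.5

# Simulate a "true" state trajectory and noisy observations of it (toy data).
truth = np.cumsum(rng.normal(0.0, process_std, n_steps)) + 2.0
obs = truth + rng.normal(0.0, obs_std, n_steps)

particles = rng.normal(2.0, 1.0, n_particles)
estimates = []
for y in obs:
    # predict: propagate each particle through the (toy) dynamics
    particles = particles + rng.normal(0.0, process_std, n_particles)
    # weight: likelihood of the observation given each particle
    w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # resample: multinomial resampling keeps the ensemble near likely states
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

print("final estimate vs truth:", estimates[-1], truth[-1])
```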

  14. Cluster Monte Carlo simulations of the nematic-isotropic transition.

    PubMed

    Priezjev, N V; Pelcovits, R A

    2001-06-01

    We report the results of simulations of the three-dimensional Lebwohl-Lasher model of the nematic-isotropic transition using a single cluster Monte Carlo algorithm. The algorithm, first introduced by Kunz and Zumbach to study two-dimensional nematics, is a modification of the Wolff algorithm for spin systems, and greatly reduces critical slowing down. We calculate the free energy in the neighborhood of the transition for systems up to linear size 70. We find a double well structure with a barrier that grows with increasing system size. We thus obtain an upper estimate of the value of the transition temperature in the thermodynamic limit.

  15. MCNP{trademark} Monte Carlo: A precis of MCNP

    SciTech Connect

    Adams, K.J.

    1996-06-01

    MCNP{trademark} is a general purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600 page manual which is augmented by numerous Los Alamos technical reports which detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.

  16. FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation

    SciTech Connect

    Hackel, B M; Nielsen Jr., D E; Procassini, R J

    2009-02-25

    The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.

  17. Modeling focusing Gaussian beams in a turbid medium with Monte Carlo simulations.

    PubMed

    Hokr, Brett H; Bixler, Joel N; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J; Yakovlev, Vladislav V; Scully, Marlan O

    2015-04-06

    Monte Carlo techniques are the gold standard for studying light propagation in turbid media. Traditional Monte Carlo techniques are unable to include wave effects, such as diffraction; thus, these methods are unsuitable for exploring focusing geometries where a significant ballistic component remains at the focal plane. Here, a method is presented for accurately simulating photon propagation at the focal plane, in the context of a traditional Monte Carlo simulation. This is accomplished by propagating ballistic photons along trajectories predicted by Gaussian optics until they undergo an initial scattering event, after which, they are propagated through the medium by a traditional Monte Carlo technique. Solving a known problem by building upon an existing Monte Carlo implementation allows this method to be easily implemented in a wide variety of existing Monte Carlo simulations, greatly improving the accuracy of those models for studying dynamics in a focusing geometry.
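
    A minimal sketch of the hybrid launch step described above: each photon is carried ballistically along a Gaussian-beam ray until a first scattering depth sampled from Beer-Lambert attenuation, after which a conventional Monte Carlo random walk (not shown) would take over. Beam waist, focal depth and scattering coefficient are illustrative.

```python
# Ballistic Gaussian-beam launch step only; the subsequent random walk is omitted.
import numpy as np

rng = np.random.default_rng(4)
mu_s = 5.0                                  # scattering coefficient, 1/mm (assumed)
w0, z_f, wavelength = 0.005, 1.0, 800e-6    # waist (mm), focal depth (mm), wavelength (mm)
z_R = np.pi * w0**2 / wavelength            # Rayleigh range of the assumed beam

def launch_photon():
    """Return the position of the photon's first scattering event."""
    # transverse position sampled from the Gaussian intensity profile at the surface
    # (1/e^2 radius w corresponds to a Gaussian with standard deviation w/2)
    w_surface = w0 * np.sqrt(1.0 + (z_f / z_R) ** 2)
    x0, y0 = rng.normal(0.0, w_surface / 2.0, size=2)
    # depth of the first scattering event from Beer-Lambert attenuation
    z = -np.log(1.0 - rng.random()) / mu_s
    # geometric ray aimed at the focal point: the transverse offset shrinks linearly
    # with depth and crosses the axis beyond the focus
    scale = 1.0 - z / z_f
    return np.array([x0 * scale, y0 * scale, z])

first_scatter = np.array([launch_photon() for _ in range(10000)])
print("mean depth of first scattering event (mm):", first_scatter[:, 2].mean())
print("rms transverse spread at first scattering (mm):",
      np.sqrt(np.mean(first_scatter[:, 0]**2 + first_scatter[:, 1]**2)))
```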

  18. The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations

    SciTech Connect

    Sellier, J.M.; Dimov, I.

    2014-09-15

    The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We, then, proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.

  19. A Monte Carlo Method for Multi-Objective Correlated Geometric Optimization

    DTIC Science & Technology

    2014-05-01

    ...requiring computationally intensive algorithms for optimization. This report presents a method developed for solving such systems using a Monte Carlo ... performs a Monte Carlo optimization to provide geospatial intelligence on entity placement using the OpenCL framework. The solutions for optimal

  20. A Monte Carlo Radiation Model for Simulating Rarefied Multiphase Plume Flows

    DTIC Science & Technology

    2005-05-01

    A Monte Carlo ray trace radiation model is presented for the determination of radiative

  1. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    SciTech Connect

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

    2009-09-01

    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.

  2. Quasi-Monte Carlo methods for lattice systems: A first look

    NASA Astrophysics Data System (ADS)

    Jansen, K.; Leovey, H.; Ammon, A.; Griewank, A.; Müller-Preussker, M.

    2014-03-01

    We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling. Catalogue identifier: AERJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence version 3 No. of lines in distributed program, including test data, etc.: 67759 No. of bytes in distributed program, including test data, etc.: 2165365 Distribution format: tar.gz Programming language: C and C++. Computer: PC. Operating system: Tested on GNU/Linux, should be portable to other operating systems with minimal efforts. Has the code been vectorized or parallelized?: No RAM: The memory usage directly scales with the number of samples and dimensions: Bytes used = "number of samples" × "number of dimensions" × 8 Bytes (double precision). Classification: 4.13, 11.5, 23. External routines: FFTW 3 library (http://www.fftw.org) Nature of problem: Certain physical models formulated as a quantum field theory through the Feynman path integral, such as quantum chromodynamics, require a non-perturbative treatment of the path integral. The only known approach that achieves this is the lattice regularization. In this formulation the path integral is discretized to a finite, but very high dimensional integral. So far only Monte
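
    The error-scaling comparison can be illustrated on a smooth toy integral, independent of the lattice code distributed with the paper: plain Monte Carlo with pseudo-random points versus a scrambled Sobol sequence (via scipy.stats.qmc, an assumed dependency). The test integrand below has exact value 1.

```python
# Illustrative N^(-1/2) vs. roughly N^(-1) error scaling on a smooth d-dimensional integral.
import numpy as np
from scipy.stats import qmc

d = 8
# Each factor depends on one coordinate and integrates to 1, so the exact integral is 1.
f = lambda u: np.prod(1.0 + (u - 0.5) / np.arange(2, d + 2), axis=1)

rng = np.random.default_rng(5)
for n in (2**8, 2**12, 2**16):
    mc_est = f(rng.random((n, d))).mean()
    sobol = qmc.Sobol(d=d, scramble=True, seed=5)
    qmc_est = f(sobol.random(n)).mean()
    print(f"N = {n:6d}:  MC error = {abs(mc_est - 1):.2e},  QMC error = {abs(qmc_est - 1):.2e}")
```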

  3. A Monte Carlo Dispersion Analysis of the X-33 Simulation Software

    NASA Technical Reports Server (NTRS)

    Williams, Peggy S.

    2001-01-01

    A Monte Carlo dispersion analysis has been completed on the X-33 software simulation. The simulation is based on a preliminary version of the software and is primarily used in an effort to define and refine how a Monte Carlo dispersion analysis would have been done on the final flight-ready version of the software. This report gives an overview of the processes used in the implementation of the dispersions and describes the methods used to accomplish the Monte Carlo analysis. Selected results from 1000 Monte Carlo runs are presented with suggestions for improvements in future work.
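
    The general recipe, stripped of the flight-specific software, is simply to draw dispersed inputs, run the simulation once per draw, and summarize the spread of the outputs; the toy model and uncertainty levels below are placeholders, not X-33 values.

```python
# Generic dispersion-analysis loop with a placeholder model and assumed uncertainties.
import numpy as np

rng = np.random.default_rng(7)

def toy_trajectory(mass, thrust, drag_coeff):
    """Placeholder model returning a 'downrange distance' for the sampled inputs."""
    accel = thrust / mass - drag_coeff
    return 0.5 * accel * 100.0**2        # constant acceleration for 100 s

n_runs = 1000
results = np.empty(n_runs)
for i in range(n_runs):
    mass = rng.normal(1000.0, 30.0)      # dispersed inputs (assumed distributions)
    thrust = rng.normal(15000.0, 500.0)
    drag = rng.uniform(0.8, 1.2)
    results[i] = toy_trajectory(mass, thrust, drag)

print("mean:", results.mean())
print("5th / 95th percentiles:", np.percentile(results, [5, 95]))
```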

  4. Computation of electron diode characteristics by monte carlo method including effect of collisions.

    NASA Technical Reports Server (NTRS)

    Goldstein, C. M.

    1964-01-01

    Consistent field Monte Carlo method calculation for collision effect on electron-ion diode characteristics and for hard sphere electron-neutral collision effect for monoenergetic thermionic emission

  5. Monte Carlo analysis of energy dependent anisotropy of bremsstrahlung x-ray spectra

    SciTech Connect

    Kakonyi, Robert; Erdelyi, Miklos; Szabo, Gabor

    2009-09-15

    The energy resolved emission angle dependence of x-ray spectra was analyzed with the MCNPX (Monte Carlo N-Particle eXtended) simulator. It was shown that the spectral photon flux had a maximum at a well-defined emission angle due to the anisotropy of the bremsstrahlung process. The higher the relative photon energy, the smaller the emission angle belonging to the maximum. The trends predicted by the Monte Carlo simulations were experimentally verified. The Monte Carlo results were compared to both the Institute of Physics and Engineering in Medicine spectra table and the SPEKCALC V1.0 code.

  6. Monte Carlo study of Siemens PRIMUS photoneutron production.

    PubMed

    Pena, J; Franco, L; Gómez, F; Iglesias, A; Pardo, J; Pombar, M

    2005-12-21

    Neutron production in radiotherapy facilities has been studied from the early days of modern linacs. Detailed studies are now possible using photoneutron capabilities of general-purpose Monte Carlo codes at energies of interest in medical physics. The present work studies the effects of modelling different accelerator head and room geometries on the neutron fluence and spectra predicted via Monte Carlo. The results from the simulation of a 15 MV Siemens PRIMUS linac show an 80% increase in the fluence scored at the isocentre when, besides modelling the components necessary for electron/photon simulations, other massive accelerator head components are included. Neutron fluence dependence on inner treatment room volume is analysed showing that thermal neutrons have a 'gaseous' behaviour and then a 1/V dependence. Neutron fluence maps for three energy ranges, fast (E > 0.1 MeV), epithermal (1 eV < E < 0.1 MeV) and thermal (E < 1 eV), are also presented and the influence of the head components on them is discussed.

  7. Monte Carlo study of Siemens PRIMUS photoneutron production

    NASA Astrophysics Data System (ADS)

    Pena, J.; Franco, L.; Gómez, F.; Iglesias, A.; Pardo, J.; Pombar, M.

    2005-12-01

    Neutron production in radiotherapy facilities has been studied from the early days of modern linacs. Detailed studies are now possible using photoneutron capabilities of general-purpose Monte Carlo codes at energies of interest in medical physics. The present work studies the effects of modelling different accelerator head and room geometries on the neutron fluence and spectra predicted via Monte Carlo. The results from the simulation of a 15 MV Siemens PRIMUS linac show an 80% increase in the fluence scored at the isocentre when, besides modelling the components necessary for electron/photon simulations, other massive accelerator head components are included. Neutron fluence dependence on inner treatment room volume is analysed showing that thermal neutrons have a 'gaseous' behaviour and then a 1/V dependence. Neutron fluence maps for three energy ranges, fast (E > 0.1 MeV), epithermal (1 eV < E < 0.1 MeV) and thermal (E < 1 eV), are also presented and the influence of the head components on them is discussed.

  8. Application of Monte Carlo Methods in Molecular Targeted Radionuclide Therapy

    SciTech Connect

    Hartmann Siantar, C; Descalle, M-A; DeNardo, G L; Nigg, D W

    2002-02-19

    Targeted radionuclide therapy promises to expand the role of radiation beyond the treatment of localized tumors. This novel form of therapy targets metastatic cancers by combining radioactive isotopes with tumor-seeking molecules such as monoclonal antibodies and custom-designed synthetic agents. Ultimately, like conventional radiotherapy, the effectiveness of targeted radionuclide therapy is limited by the maximum dose that can be given to a critical, normal tissue, such as bone marrow, kidneys, and lungs. Because radionuclide therapy relies on biological delivery of radiation, its optimization and characterization are necessarily different than for conventional radiation therapy. We have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA treatment planning system. This system calculates patient-specific radiation dose estimates using a set of computed tomography scans to describe the 3D patient anatomy, combined with 2D (planar image) and 3D (SPECT, or single photon emission computed tomography) to describe the time-dependent radiation source. The accuracy of such a dose calculation is limited primarily by the accuracy of the initial radiation source distribution, overlaid on the patient's anatomy. This presentation provides an overview of MINERVA functionality for molecular targeted radiation therapy, and describes early validation and implementation results of Monte Carlo simulations.

  9. On the time scale associated with Monte Carlo simulations

    SciTech Connect

    Bal, Kristof M. Neyts, Erik C.

    2014-11-28

    Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique to access longer timescales in atomistic simulations allowing, for example, phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid state systems by lowering the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement or complement of molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales to MC simulations in general.

  10. High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin

    2014-06-01

    Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. The advance in computer technology allows the calculation of detailed flux distributions in both space and energy. In most of the cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte-Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous-energy nuclear data has been investigated.

  11. Improved criticality convergence via a modified Monte Carlo iteration method

    SciTech Connect

    Booth, Thomas E; Gubernatis, James E

    2009-01-01

    Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction being |k{sub 3}|/k{sub 1} instead of |k{sub 2}|/k{sub 1}. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must sum somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc manner.
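
    The convergence-rate claim is easiest to see on a small deterministic analogue: ordinary power iteration converges at |k2|/|k1|, while projecting out the second eigenmode each iteration leaves |k3|/|k1|. The sketch below uses a made-up symmetric matrix and the exactly known second eigenvector, sidestepping the positive/negative-weight cancellation that the Monte Carlo version must handle.

```python
# Deterministic illustration of standard vs. modified power iteration convergence rates.
import numpy as np

A = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.5, 0.3],
              [0.1, 0.3, 1.0]])          # made-up symmetric positive-definite matrix
evals, evecs = np.linalg.eigh(A)         # ascending eigenvalues for a symmetric matrix
k1, k2, k3 = evals[2], evals[1], evals[0]
v1, v2 = evecs[:, 2], evecs[:, 1]
print(f"expected rates: k2/k1 = {k2 / k1:.3f}, k3/k1 = {k3 / k1:.3f}")

def observed_rate(subtract_second, n_iter=20):
    x = np.ones(3)
    errors = []
    for _ in range(n_iter):
        x = A @ x
        if subtract_second:
            x = x - (v2 @ x) * v2        # project out the second eigenmode
        x /= np.linalg.norm(x)
        errors.append(np.linalg.norm(x - np.sign(v1 @ x) * v1))
    return errors[-1] / errors[-2]

print(f"standard power iteration rate ≈ {observed_rate(False):.3f}")
print(f"modified power iteration rate ≈ {observed_rate(True):.3f}")
```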

  12. Path integral Monte Carlo on a lattice. II. Bound states

    NASA Astrophysics Data System (ADS)

    O'Callaghan, Mark; Miller, Bruce N.

    2016-07-01

    The equilibrium properties of a single quantum particle (qp) interacting with a classical gas are investigated over a wide range of temperatures that explore the system's behavior in the classical as well as in the quantum regime. Both the qp and the atoms are restricted to sites on a one-dimensional lattice. A path integral formalism developed within the context of the canonical ensemble is utilized, where the qp is represented by a closed, variable-step random walk on the lattice. Monte Carlo methods are employed to determine the system's properties. To test the usefulness of the path integral formalism, the Metropolis algorithm is employed to determine the equilibrium properties of the qp in the context of a square well potential, forcing the qp to occupy bound states. We consider a one-dimensional square well potential in which every lattice site is occupied by an atom carrying an on-site potential, except for a contiguous set of sites of various lengths centered at the middle of the lattice. Comparisons of the potential energy, the energy fluctuations, and the correlation function are made between the results of the Monte Carlo simulations and the numerical calculations.
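
    A schematic of Metropolis sampling of a closed lattice path in a one-dimensional square well is sketched below. The discrete "action" used here (a hop penalty between neighbouring time slices plus the on-site potential) is a simplified stand-in chosen for illustration, not the paper's exact lattice path-integral weights, and all numerical parameters are assumptions.

    # Schematic Metropolis sampling of a closed lattice path for one particle.
    import math
    import random

    L, M = 41, 32                 # lattice sites, time slices (beads)
    beta = 2.0                    # inverse temperature
    tau = beta / M
    well = set(range(15, 26))     # sites inside the square well (V = 0)
    V0 = 5.0                      # on-site potential outside the well
    hop = 1.0                     # crude kinetic penalty per unit hop

    def potential(site):
        return 0.0 if site in well else V0

    def action(path):
        s = 0.0
        for j in range(M):
            nxt = path[(j + 1) % M]            # closed (periodic) path in imaginary time
            s += hop * abs(path[j] - nxt) + tau * potential(path[j])
        return s

    rng = random.Random(1)
    path = [20] * M                            # start in the middle of the well
    s_old = action(path)
    occupancy = [0] * L

    for sweep in range(20000):
        j = rng.randrange(M)
        trial = path[:]
        trial[j] = min(L - 1, max(0, path[j] + rng.choice((-1, 1))))
        s_new = action(trial)
        if s_new <= s_old or rng.random() < math.exp(s_old - s_new):
            path, s_old = trial, s_new         # Metropolis acceptance on the path action
        for site in path:
            occupancy[site] += 1

    print("probability of finding the particle inside the well:",
          sum(occupancy[s] for s in well) / sum(occupancy))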

  13. Treatment planning for a small animal using Monte Carlo simulation

    SciTech Connect

    Chow, James C. L.; Leung, Michael K. K.

    2007-12-15

    The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. The treatment planning was validated by comparing dose distributions for simple photon beam geometries against calculations from the Pinnacle3 treatment planning system and against measurements. A treatment plan for a mouse, based on a CT image set and using a 360-deg photon arc, is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities, using small photon beam field sizes in the diameter range of 0.5-5 cm, with a conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable for carrying out treatment-planning dose calculations for small animal anatomy, with voxel sizes about one order of magnitude smaller than those used for humans.

  14. Treatment planning for a small animal using Monte Carlo simulation.

    PubMed

    Chow, James C L; Leung, Michael K K

    2007-12-01

    The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. The treatment planning was validated by comparing dose distributions for simple photon beam geometries against calculations from the Pinnacle3 treatment planning system and against measurements. A treatment plan for a mouse, based on a CT image set and using a 360-deg photon arc, is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities, using small photon beam field sizes in the diameter range of 0.5-5 cm, with a conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable for carrying out treatment-planning dose calculations for small animal anatomy, with voxel sizes about one order of magnitude smaller than those used for humans.

  15. Multicanonical Monte Carlo for Simulation of Optical Links

    NASA Astrophysics Data System (ADS)

    Bononi, Alberto; Rusch, Leslie A.

    Multicanonical Monte Carlo (MMC) is a simulation-acceleration technique for estimating the statistical distribution of a desired system output variable, given the known distribution of the system input variables. MMC, similarly to the powerful and well-studied method of importance sampling (IS) [1], is a useful method for efficiently simulating events occurring with probabilities smaller than ~10^-6, such as the bit error rate (BER) and the system outage probability. Modern telecommunications systems often employ forward error correcting (FEC) codes that allow pre-decoded channel error rates higher than 10^-3; these systems are well served by traditional Monte Carlo error counting. MMC and IS are, nonetheless, fundamental tools both for understanding the statistics of the decision variable (as well as of any physical parameter of interest) and for validating any analytical or semianalytical BER calculation model. Several examples of such use will be provided in this chapter. As a case in point, outage probabilities are routinely below 10^-6, a sweet spot where MMC and IS provide the most efficient (sometimes the only) solution for estimating outages.
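
    A toy version of the MMC idea is sketched below, assuming a system whose "decision variable" y is the sum of four independent standard normal inputs, so the tail probability can be checked analytically. The bin edges, iteration counts, and the simple histogram-ratio weight update are illustrative choices and omit the error-weighted smoothing used in practice; they are not the specific recipe of the chapter.

    # Minimal multicanonical Monte Carlo sketch for a rare-tail probability.
    import numpy as np

    rng = np.random.default_rng(2)
    dim, n_bins = 4, 60
    edges = np.linspace(-10.0, 10.0, n_bins + 1)

    def output(x):
        return float(np.sum(x))              # toy "decision variable"

    def log_prior(x):
        return -0.5 * float(np.dot(x, x))    # independent standard normal inputs

    log_p = np.zeros(n_bins)                 # running log-estimate of the y pdf (unnormalised)

    for iteration in range(8):
        hist = np.zeros(n_bins)
        x = rng.standard_normal(dim)
        y_bin = int(np.clip(np.digitize(output(x), edges) - 1, 0, n_bins - 1))
        for step in range(20000):
            x_new = x + 0.5 * rng.standard_normal(dim)
            b_new = int(np.clip(np.digitize(output(x_new), edges) - 1, 0, n_bins - 1))
            # Metropolis rule biased by the inverse of the current pdf estimate,
            # which flattens the sampled histogram over the output bins.
            log_acc = (log_prior(x_new) - log_prior(x)) + (log_p[y_bin] - log_p[b_new])
            if rng.random() < np.exp(min(0.0, log_acc)):
                x, y_bin = x_new, b_new
            hist[y_bin] += 1
        log_p += np.log(np.maximum(hist, 1))  # weight update from the visited histogram

    pdf = np.exp(log_p - log_p.max())
    pdf /= pdf.sum() * (edges[1] - edges[0])
    tail = pdf[edges[:-1] >= 8.0].sum() * (edges[1] - edges[0])
    print("estimated P(y > 8):", tail, " (exact value is about 3.2e-5)")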

  16. Utilizing Monte Carlo Simulations to Optimize Institutional Empiric Antipseudomonal Therapy

    PubMed Central

    Tennant, Sarah J.; Burgess, Donna R.; Rybak, Jeffrey M.; Martin, Craig A.; Burgess, David S.

    2015-01-01

    Pseudomonas aeruginosa is a common pathogen implicated in nosocomial infections with increasing resistance to a limited arsenal of antibiotics. Monte Carlo simulation provides antimicrobial stewardship teams with an additional tool to guide empiric therapy. We modeled empiric therapies with antipseudomonal β-lactam antibiotic regimens to determine which were most likely to achieve probability of target attainment (PTA) of ≥90%. Microbiological data for P. aeruginosa was reviewed for 2012. Antibiotics modeled for intermittent and prolonged infusion were aztreonam, cefepime, meropenem, and piperacillin/tazobactam. Using minimum inhibitory concentrations (MICs) from institution-specific isolates, and pharmacokinetic and pharmacodynamic parameters from previously published studies, a 10,000-subject Monte Carlo simulation was performed for each regimen to determine PTA. MICs from 272 isolates were included in this analysis. No intermittent infusion regimens achieved PTA ≥90%. Prolonged infusions of cefepime 2000 mg Q8 h, meropenem 1000 mg Q8 h, and meropenem 2000 mg Q8 h demonstrated PTA of 93%, 92%, and 100%, respectively. Prolonged infusions of piperacillin/tazobactam 4.5 g Q6 h and aztreonam 2 g Q8 h failed to achieved PTA ≥90% but demonstrated PTA of 81% and 73%, respectively. Standard doses of β-lactam antibiotics as intermittent infusion did not achieve 90% PTA against P. aeruginosa isolated at our institution; however, some prolonged infusions were able to achieve these targets. PMID:27025644
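
    The structure of such a PTA calculation can be sketched as below: sample pharmacokinetic parameters per simulated subject, compute the fraction of the dosing interval with free drug above the MIC, and report the fraction of subjects meeting the target. The one-compartment model, the log-normal clearance and volume values, the 60% fT>MIC target, and the 2,000-subject size (the study used 10,000) are all illustrative assumptions, not the institution-specific inputs of the paper.

    # Hedged sketch of a probability-of-target-attainment Monte Carlo simulation.
    import numpy as np

    rng = np.random.default_rng(3)
    n_subjects = 2000
    dose_mg, interval_h = 2000.0, 8.0
    target_ft_above_mic = 0.60                 # typical cephalosporin target (assumed)

    def pta(infusion_h, mic, n_intervals=6, dt=0.1):
        cl = rng.lognormal(mean=np.log(8.0), sigma=0.30, size=n_subjects)    # L/h (assumed)
        v = rng.lognormal(mean=np.log(18.0), sigma=0.25, size=n_subjects)    # L   (assumed)
        ke = cl / v
        rate = dose_mg / infusion_h
        t = np.arange(0.0, n_intervals * interval_h, dt)
        conc = np.zeros((n_subjects, t.size))
        # Superpose repeated zero-order infusions to approximate steady state
        for k in range(n_intervals):
            t_rel = t - k * interval_h
            during = (t_rel > 0) & (t_rel <= infusion_h)
            after = t_rel > infusion_h
            conc[:, during] += rate / cl[:, None] * (1 - np.exp(-ke[:, None] * t_rel[during]))
            c_end = rate / cl[:, None] * (1 - np.exp(-ke[:, None] * infusion_h))
            conc[:, after] += c_end * np.exp(-ke[:, None] * (t_rel[after] - infusion_h))
        last = t >= (n_intervals - 1) * interval_h       # evaluate the final interval only
        ft_above = np.mean(conc[:, last] > mic, axis=1)  # fraction of time above MIC
        return np.mean(ft_above >= target_ft_above_mic)

    for mic in (2, 4, 8, 16):
        print(f"MIC {mic:>2} mg/L: PTA 0.5-h infusion = {pta(0.5, mic):.2f}, "
              f"3-h infusion = {pta(3.0, mic):.2f}")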

  17. Monte Carlo track structure for radiation biology and space applications

    NASA Technical Reports Server (NTRS)

    Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.

    2001-01-01

    Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped shed light on many aspects of microdosimetry and the mechanisms of damage by ionising radiation in the cell. These codes have continuously been modified to include new, improved cross sections and computational techniques. This paper provides a summary of the input data for ionization, excitation, and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations of electrons and ions, in the form of parametric equations, which makes it easy to reproduce the data. Stopping powers and radial dose distributions are presented for ions and compared with experimental data. A model is described for simulating the full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha particles. Distributions are presented for the walled and wall-less counters. The data show the contribution of indirect effects to the lineal energy distribution for the walled counter responses, even at such a low ion energy.
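
    The event-by-event transport loop underlying such codes can be sketched schematically: a particle is stepped from interaction to interaction by sampling free paths from the total cross section and choosing ionisation, excitation, or elastic scattering from their relative contributions. The constant toy cross sections, energy losses, and cutoff below are placeholders, not the parametric fits described in the paper.

    # Schematic event-by-event track loop with placeholder cross sections.
    import math
    import random

    rng = random.Random(7)

    def cross_sections(energy_ev):
        # Placeholder values (arbitrary units); real codes evaluate parametric fits
        ion = 1.0 if energy_ev > 13.0 else 0.0
        exc = 0.6 if energy_ev > 8.0 else 0.0
        ela = 1.5
        return ion, exc, ela

    def track(energy_ev=1000.0, n_density=1.0):
        x, events = 0.0, []
        while energy_ev > 8.0:
            ion, exc, ela = cross_sections(energy_ev)
            total = ion + exc + ela
            x += -math.log(rng.random()) / (n_density * total)   # exponential free path
            r = rng.random() * total                              # pick interaction type
            if r < ion:
                events.append(("ionisation", x)); energy_ev -= 20.0
            elif r < ion + exc:
                events.append(("excitation", x)); energy_ev -= 10.0
            else:
                events.append(("elastic", x))                     # direction change only
        return events

    evts = track()
    print(len(evts), "interactions;", sum(e[0] == "ionisation" for e in evts), "ionisations")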

  18. On the time scale associated with Monte Carlo simulations.

    PubMed

    Bal, Kristof M; Neyts, Erik C

    2014-11-28

    Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique for accessing longer timescales in atomistic simulations, allowing, for example, the study of phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with the inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects, and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid-state systems by lowering the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement for, or complement to, molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales with MC simulations in general.
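
    A minimal sketch of a uniform-acceptance force-bias move for a single particle in a one-dimensional potential is given below. The bias exponent is written here as gamma*F*d/kT with gamma set to 1/2 and the displacement drawn from a truncated exponential by inverse-transform sampling; the exact form used in the published fbMC/tfMC methods may differ, and the tfMC time estimate discussed in the abstract is not reproduced here.

    # Minimal uniform-acceptance force-bias MC move in a 1-D double well.
    import math
    import random

    kT = 0.2
    d_max = 0.1                 # maximum displacement per move
    gamma = 0.5                 # bias strength (assumed)

    def force(x):
        return -4.0 * x ** 3 + 2.0 * x          # F = -dU/dx for U = x^4 - x^2 (double well)

    def fbmc_step(x, rng):
        g = gamma * force(x) / kT
        u = rng.random()
        if abs(g) < 1e-8:
            d = (2.0 * u - 1.0) * d_max          # unbiased limit: uniform displacement
        else:
            # Inverse transform of P(d) ~ exp(g*d) truncated to [-d_max, d_max]
            a, b = math.exp(-g * d_max), math.exp(g * d_max)
            d = math.log(a + u * (b - a)) / g
        return x + d                             # uniform acceptance: every move is taken

    rng = random.Random(4)
    x, visits_left = 1.0, 0
    n_steps = 200_000
    for step in range(n_steps):
        x = fbmc_step(x, rng)
        if x < 0:
            visits_left += 1

    # For this symmetric double well the walker should split its time between the wells
    print("fraction of steps in the left well:", round(visits_left / n_steps, 3))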

  19. Subtle Monte Carlo Updates in Dense Molecular Systems.

    PubMed

    Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper

    2012-02-14

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high-density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest that our method is a valuable tool for studying molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long-timescale conformational transitions.
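
    The benefit of correlated proposals can be illustrated with a generic toy comparison, shown below: when degrees of freedom are strongly coupled (here a 10-dimensional Gaussian "energy" with a correlated covariance standing in for a dense molecular state), proposals that respect the correlations are accepted far more often than moves of comparable size that ignore them. This is a plain Metropolis comparison under invented parameters, not the CRISP chain-closure algorithm itself.

    # Toy comparison of uncorrelated versus correlated Metropolis proposals.
    import numpy as np

    rng = np.random.default_rng(5)
    dim = 10
    # Target: zero-mean Gaussian with strong nearest-neighbour correlations
    cov = np.array([[0.95 ** abs(i - j) for j in range(dim)] for i in range(dim)])
    prec = np.linalg.inv(cov)
    chol = np.linalg.cholesky(cov)

    def energy(x):
        return 0.5 * x @ prec @ x

    def acceptance_rate(correlated, n_steps=20000, step=0.5):
        x, e, accepted = np.zeros(dim), 0.0, 0
        for _ in range(n_steps):
            if correlated:
                x_new = x + step * (chol @ rng.standard_normal(dim))   # respects the couplings
            else:
                x_new = x + step * rng.standard_normal(dim)            # ignores the couplings
            e_new = energy(x_new)
            if rng.random() < np.exp(min(0.0, e - e_new)):             # symmetric proposal: Metropolis
                x, e, accepted = x_new, e_new, accepted + 1
        return accepted / n_steps

    print("acceptance, uncorrelated moves:", round(acceptance_rate(False), 3))
    print("acceptance, correlated moves:  ", round(acceptance_rate(True), 3))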

  20. Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study

    PubMed Central

    Zhang, Ying; Feng, Yuanming; Ming, Xin

    2016-01-01

    A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine the beam energy for each gantry angle/anatomy configuration from a pool of photon beam energies (2 to 10 MV) with a newly developed energy selector. An inverse planning system using a gradient search algorithm was then employed to optimize the photon beam intensities of the various beam energies, based on presimulated Monte Carlo pencil beam dose distributions in the patient anatomy. Finally, 3D dose distributions in six patients with different tumor sites were simulated with the Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to the current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower-energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve more personalized care for individual patients, with dosimetric gains. PMID:26977413
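
    The inverse-planning step described above can be sketched as a projected gradient search over non-negative beamlet weights against a prescribed target dose, as below. The random dose matrices, the quadratic objective, and the simple per-energy selection rule are illustrative stand-ins; the actual system uses Monte Carlo pencil-beam doses in patient anatomy and its own energy selector.

    # Hedged sketch: gradient-search fluence optimisation over pre-simulated doses.
    import numpy as np

    rng = np.random.default_rng(6)
    n_voxels, n_beamlets = 400, 60
    energies = ["2MV", "6MV", "10MV"]

    # One dose-deposition matrix per candidate energy (voxels x beamlets); random stand-ins
    dose_matrices = {e: rng.random((n_voxels, n_beamlets)) * 0.1 for e in energies}
    target = np.zeros(n_voxels)
    target[:100] = 2.0                           # prescribed dose to the "tumour" voxels

    def plan_quality(D, w):
        return float(np.sum((D @ w - target) ** 2))

    def optimise(D, n_iter=500, lr=1e-3):
        w = np.full(D.shape[1], 0.1)
        for _ in range(n_iter):
            grad = 2.0 * D.T @ (D @ w - target)  # gradient of the quadratic objective
            w = np.maximum(w - lr * grad, 0.0)   # project onto non-negative beamlet weights
        return w

    # Crude "energy selection": keep the energy whose optimised plan fits the target best
    results = {e: plan_quality(D, optimise(D)) for e, D in dose_matrices.items()}
    best = min(results, key=results.get)
    print("objective by energy:", {e: round(v, 1) for e, v in results.items()})
    print("selected energy:", best)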