V. N. Yatsenko; O. A. Kochetkov; N. M. Borisov; I. A. Gusev; P. A. Vlasov; V. S. Kalistratova; P. G. Nisimov; F. K. Levochkin; M. V. Borovkov; V. P. Stolyarov; S. Ts. Tsedish; I. N. Tyurin; D. Franck; L. de Carlan
2005-01-01
This paper is devoted to an experimental check of a method for performing individual calibration of a spectrometer for human body radiation, using the Monte Carlo method, on 35–40 kg pigs. The 241Am content was measured on a low-energy γ spectrometer with a detector based on highly pure germanium. Tomographic images of pigs were used to calculate, using the MCNP4c2
Monte Carlo methods Sequential Monte Carlo
Doucet, Arnaud
Sequential Monte Carlo. A. Doucet, MLSS, Carcans, Sept. 2011 (85 slides). Generic problem: consider a sequence of probability distributions.
Sang Hyun Cho; Warren D. Reece; Chan-Hyeong Kim
2004-01-01
Dose calculations around electron-emitting metallic spherical sources were performed up to the X90 distance of each electron energy ranging from 0.5 to 3.0 MeV using the MCNP 4C Monte Carlo code and the dose point kernel (DPK) method with the DPKs rescaled using the linear range ratio and physical density ratio, respectively. The results show that the discrepancy between the MCNP
V. D. Rusov; V. A. Tarasov; D. A. Litvinov; S. V. Iaroshenko
2004-01-01
The electron energy spectra, not connected to β-decay, of 235U and 239Pu films irradiated by thermal neutrons, obtained by a Monte Carlo method, are presented in this work. The modelling was performed with the help of the computer code MCNP4C (Monte Carlo Neutron Photon transport code system), which allows computer experiments on the joint transport of neutrons, photons
Monte Carlo methods Monte Carlo Principle and MCMC
Doucet, Arnaud
Monte Carlo Principle and MCMC. A. Doucet, MLSS, Carcans, Sept. 2011 (91 slides). Overview of the lectures: 1. Monte Carlo principles; 2. Markov chain Monte Carlo.
S. Ulam
1949-01-01
We shall present here the motivation and a general description of a method dealing with a class of problems in mathematical physics. The method is, essentially, a statistical approach to the study of differential equations, or more generally, of integro-differential equations that occur in various branches of the natural sciences.
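Ulam's statistical approach to differential equations can be illustrated with a toy example: estimating the solution of a discrete Laplace boundary-value problem by averaging random-walk exit values. This is a minimal sketch under illustrative assumptions (the grid size, boundary data, and walk count are not from the paper):

```python
import random

def laplace_walk(x, y, n_walks=20_000, size=10, seed=2):
    """Estimate the solution of Laplace's equation on a grid by random walks:
    u(x, y) equals the expected boundary value at the point where a symmetric
    random walk started at (x, y) first hits the boundary. Here the boundary
    data is u = 1 on the top edge and u = 0 on the other three edges."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_walks):
        i, j = x, y
        while 0 < i < size and 0 < j < size:
            di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        if j == size:  # walk exited through the top edge, where u = 1
            hits += 1
    return hits / n_walks
```

At the center of the square the exact answer is 1/4 by symmetry of the four edges, so the estimate can be checked directly.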
Shell model Monte Carlo methods
Koonin, S.E. [California Inst. of Tech., Pasadena, CA (United States). W.K. Kellogg Radiation Lab.; Dean, D.J. [Oak Ridge National Lab., TN (United States)
1996-10-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.
Monte Carlo Methods for Inference and Learning
Hinton, Geoffrey E.
Guest lecture by Ryan Adams for CSC 2535 (http://www.cs.toronto.edu/~rpa). Overview: Monte Carlo basics; rejection and importance sampling; Markov chain Monte Carlo; Metropolis-Hastings and Gibbs sampling; slice sampling; Hamiltonian Monte Carlo.
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing of the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burns and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
NASA Astrophysics Data System (ADS)
Pauzi, A. M.
2013-06-01
The neutron transport code Monte Carlo N-Particle (MCNP), well known as the gold standard in predicting nuclear reactions, was used to model a small nuclear reactor core called the "U-batteryTM", which was developed by the University of Manchester and the Delft Institute of Technology. The paper introduces the concept of modeling the small reactor core, a high temperature reactor (HTR) type with small coated TRISO fuel particles in a graphite matrix, using the MCNPv4C software. The criticality of the core was calculated using the software and analysed by changing key parameters such as coolant type, fuel type and enrichment levels, cladding materials, and control rod type. The criticality results from the simulation were validated against the SCALE 5.1 software by [1] M. Ding and J. L. Kloosterman, 2010. The data produced from these analyses would be used as part of the process of proposing an initial core layout and a provisional list of materials for the newly designed reactor core. In the future, the criticality study would be continued with different core configurations and geometries.
Population Monte Carlo Methods
Robert, Christian P.
Population Monte Carlo Methods. Christian P. Robert, Université Paris Dauphine (OFPR/CREST, May 5, 2003). Begins with a benchmark example: even simple models may lead
Monte Carlo Methods in Statistics Christian Robert
Boyer, Edmond
Monte Carlo Methods in Statistics. Christian Robert, Université Paris Dauphine and CREST, INSEE. September 2, 2009. Monte Carlo methods are now an essential part of the statistician's toolbox. We recall in this note some of the advances made in the design of Monte Carlo techniques towards
MONTE CARLO METHOD AND SENSITIVITY ESTIMATIONS
Dufresne, Jean-Louis
A. de Lataillade, S. Blanco, Y. Clergent, et al. Sensitivity estimations are discussed on a formal basis, and simple radiative transfer examples are used for illustration. Key words: Monte Carlo. Submitted to Elsevier Science, 18 February 2002. Introduction: Monte Carlo methods are commonly used
Monte Carlo methods for security pricing
Phelim Boyle; Mark Broadie; Paul Glasserman
1997-01-01
The Monte Carlo approach has proved to be a valuable and flexible computational tool in modern finance. This paper discusses some of the recent applications of the Monte Carlo method to security pricing problems, with emphasis on improvements in efficiency. We first review some variance reduction methods that have proved useful in finance. Then we describe the use of deterministic
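The variance-reduction theme mentioned above can be illustrated with antithetic variates, one of the simplest such techniques. The sketch below prices a European call under geometric Brownian motion and checks it against the Black-Scholes closed form; the parameter values are illustrative choices, not taken from the paper:

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_pairs=100_000, seed=42):
    """Price a European call by Monte Carlo under geometric Brownian motion,
    using antithetic variates: each normal draw z is paired with -z, giving
    negatively correlated payoffs and hence a lower-variance estimator."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * t
    vol = sigma * math.sqrt(t)
    disc = math.exp(-r * t)
    total = 0.0
    for _ in range(n_pairs):
        z = rng.gauss(0.0, 1.0)
        for zz in (z, -z):  # antithetic pair
            st = s0 * math.exp(drift + vol * zz)
            total += disc * max(st - k, 0.0)
    return total / (2 * n_pairs)

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes closed form, used only as a sanity check."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    n = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * n(d1) - k * math.exp(-r * t) * n(d2)
```

For an at-the-money call (S0 = K = 100, r = 0.05, σ = 0.2, T = 1) the Monte Carlo estimate should land close to the closed-form value.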
A. Bolewski Jr.; M. Ciechanowski; A. Dydejczyk; A. Kreft
2008-01-01
The effect of the detector characteristics on the performance of an isotopic neutron source device for measuring thermal neutron absorption cross section (Σa) has been examined by means of Monte Carlo simulations. Three specific experimental arrangements, alternately with BF3 counters and 3He counters of the same sizes, have been modelled using the MCNP-4C code. Results of Monte Carlo calculations show
Advanced Monte Carlo Methods: General Principles of the Monte Carlo Method
Mascagni, Michael
Prof. Dr. Michael Mascagni. Numerical integration: the canonical Monte Carlo application. Numerical integration is a simple problem to explain and thoroughly
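The canonical application named above, numerical integration, fits in a few lines: average the integrand at uniform random points and scale by the interval length. A minimal sketch (the integrand and sample count are illustrative choices):

```python
import math
import random

def mc_integrate(f, a, b, n=200_000, seed=1):
    """Plain Monte Carlo quadrature on [a, b]: average f at uniform samples,
    scaled by the interval length. The statistical error shrinks like
    1/sqrt(n), independent of the dimension of the problem."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# The integral of 4/(1+x^2) over [0, 1] equals pi, a convenient check.
estimate = mc_integrate(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0)
```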
MONTE CARLO METHODS IN GEOPHYSICAL INVERSE PROBLEMS
Sambridge, Malcolm
Malcolm Sambridge, Research School of Earth Sciences (received 2000; revised 15 December 2001). Traces the development and application of Monte Carlo methods for inverse problems encountered in exploration seismology.
Monte Carlo methods for fissured porous media: gridless approaches
Paris-Sud XI, Université de
Antoine Lejay (Projet OMEGA, INRIA / Institut Élie Cartan, Nancy). Abstract: In this article, we present two Monte Carlo methods. Published in Monte Carlo Methods and Applications: Proc. of the IV IMACS Seminar on Monte Carlo Methods.
The Monte Carlo Method and Software Reliability Theory
Pratt, Vaughan
The Monte Carlo Method and Software Reliability Theory. Brian Korver (briank@cs.pdx.edu). TR 94-1, February 18, 1994. Abstract: The Monte Carlo method of reliability prediction is useful when the system … for valid, nontrivial input data and an external oracle.
An Introduction to Monte Carlo Methods
ERIC Educational Resources Information Center
Raeside, D. E.
1974-01-01
Reviews the principles of Monte Carlo calculation and random number generation in an attempt to introduce the direct and the rejection method of sampling techniques as well as the variance-reduction procedures. Indicates that the increasing availability of computers makes it possible for a wider audience to learn about these powerful methods. (CC)
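The rejection method reviewed above can be sketched in a few lines: propose from a simple distribution, and accept each proposal with probability proportional to the target density. The Beta(2,2) target and the bound M = 1.5 below are illustrative choices, not from the article:

```python
import random

def rejection_sample(pdf, m, rng):
    """Rejection method on [0, 1]: propose x ~ Uniform(0, 1) and accept it
    with probability pdf(x) / m, where m is an upper bound on the density.
    Accepted values are exact draws from the target distribution."""
    while True:
        x = rng.random()
        if rng.random() * m <= pdf(x):
            return x

rng = random.Random(7)
beta22 = lambda x: 6.0 * x * (1.0 - x)  # Beta(2,2) density, maximum 1.5 at x = 0.5
samples = [rejection_sample(beta22, 1.5, rng) for _ in range(50_000)]
mean = sum(samples) / len(samples)  # the true mean of Beta(2,2) is 0.5
```

The acceptance rate is 1/m, so a tighter bound m wastes fewer proposals; variance-reduction procedures such as importance sampling avoid discarding draws altogether.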
Sequential Monte Carlo Methods for Dynamic Systems
Jun S. Liu; Rong Chen
1998-01-01
We provide a general framework for using Monte Carlo methods in dynamic systems and discuss its wide applications. Under this framework, several currently available techniques are studied and generalized to accommodate more complex features. All of these methods are partial combinations of three ingredients: importance sampling and resampling, rejection sampling, and Markov chain iterations. We provide guidelines on how they
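Two of the three ingredients named above, importance sampling and resampling, already combine into the simplest sequential Monte Carlo scheme, the bootstrap particle filter. A minimal sketch for a toy linear-Gaussian state-space model (the coefficient 0.9, the unit noise variances, and the particle count are illustrative assumptions, not from the paper):

```python
import math
import random

def bootstrap_filter(ys, n=2000, seed=3):
    """Bootstrap particle filter for x_t = 0.9 x_{t-1} + v_t, y_t = x_t + w_t,
    with v_t, w_t ~ N(0, 1): propose from the state transition (the prior),
    weight by the observation likelihood, and resample at every step."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in ys:
        # importance sampling: propagate particles through the dynamics
        particles = [0.9 * x + rng.gauss(0.0, 1.0) for x in particles]
        # weight each particle by the observation likelihood N(y; x, 1)
        weights = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
        total = sum(weights)
        means.append(sum(w * x for w, x in zip(weights, particles)) / total)
        # multinomial resampling to combat weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n)
    return means
```

The filtered means returned by the function track the latent state; feeding a constant observation sequence drives them toward the steady-state Kalman value for this model.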
Improved Monte Carlo Renormalization Group Method
Gupta, R.; Wilson, K.G.; Umrigar, C.
1985-01-01
An extensive program to analyse critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated. 9 refs.
Quantum speedup of Monte Carlo methods
Ashley Montanaro
2015-04-27
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomised or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
Exploring Probability and the Monte Carlo Method
NSDL National Science Digital Library
2003-01-01
This multimedia mathematics resource examines probability. A video illustrates how math is used to evaluate the danger of avalanches in the mountains of Alberta. An interactive component allows students to compare theoretical and experimental probabilities, as well as explore the Monte Carlo method. A probability print activity is also included.
MONTE-CARLO METHODS IN GLOBAL ILLUMINATION
Frey, Pascal
Script written by Szirmay-Kalos László in WS 1999. Contents: the global illumination problem; the rendering equation; solution strategies for the global illumination problem; inversion.
NASA Astrophysics Data System (ADS)
Cho, Sang Hyun; Reece, Warren D.; Kim, Chan-Hyeong
2004-03-01
Dose calculations around electron-emitting metallic spherical sources were performed up to the X90 distance of each electron energy ranging from 0.5 to 3.0 MeV using the MCNP 4C Monte Carlo code and the dose point kernel (DPK) method with the DPKs rescaled using the linear range ratio and physical density ratio, respectively. The results show that the discrepancy between the MCNP and DPK results increases with the atomic number of the source (i.e., heterogeneity in source-target geometry), regardless of the rescaling method used. The observed discrepancies between the MCNP and DPK results were up to 100% for extreme cases such as a platinum source immersed in water.
Monte Carlo methods for pricing financial options
N. Bolia; S. Juneja
2005-01-01
Pricing financial options is amongst the most important and challenging problems in the modern financial industry. Except in the simplest cases, the prices of options do not have a simple closed form solution and efficient computational methods are needed to determine them. Monte Carlo methods have increasingly become a popular computational tool to price complex financial options, especially when the
A Monte Carlo method for solving unsteady adjoint equations
Wang, Qiqi
Qiqi Wang, David Gleich, Amin … The method builds on this technique and uses a Monte Carlo linear solver. The Monte Carlo solver yields a forward-time algorithm; … equation, the Monte Carlo approach is faster for a large class of problems while preserving sufficient
Hybrid Radiosity/Monte Carlo Methods
Peter Shirley
1994-01-01
… this document said that absorb and reemit was asymptotically equivalent to the photon tracking model. [Figure 4: Zones with small areas have their radiance recalculated more accurately in a postprocess.] Each iteration, each ray carries approximately the same amount of power. The other is that, unlike in [7], the zone with the most power is not
Path Integral Monte Carlo Methods for Fermions
NASA Astrophysics Data System (ADS)
Ethan, Ethan; Dubois, Jonathan; Ceperley, David
2014-03-01
In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.
Variance minimization variational Monte Carlo method
Khan, I; Gao, Bo; Khan, Imran
2007-01-01
We present a variational Monte Carlo (VMC) method that works equally well for the ground and the excited states of a quantum system. The method is based on the minimization of the variance of energy, as opposed to the energy itself in standard methods. As a test, it is applied to the investigation of the universal spectrum at the van der Waals length scale for two identical Bose atoms in a symmetric harmonic trap, with results compared to the basically exact results obtained from a multiscale quantum-defect theory.
Monte Carlo methods to calculate impact probabilities
NASA Astrophysics Data System (ADS)
Rickman, H.; Wi?niowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. 
With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward infinity, while the Hill sphere method results in a severely underestimated probability. We provide a discussion of the reasons for these differences, and we finally present the results of the MOID method in the form of probability maps for the Earth and Mars on their current orbits. These maps show a relatively flat probability distribution, except for the occurrence of two ridges found at small inclinations and for coinciding projectile/target perihelion distances. Conclusions: Our results verify the standard formulae in the general case, away from the singularities. In fact, severe shortcomings are limited to the immediate vicinity of those extreme orbits. On the other hand, the new Monte Carlo methods can be used without excessive consumption of computer time, and the MOID method avoids the problems associated with the other methods. Appendices are available in electronic form at http://www.aanda.org
Sequential Monte Carlo Methods for Statistical Analysis of Tables
Liu, Jun
Yuguo Chen, Persi Diaconis, Susan … Our method produces Monte Carlo samples that are remarkably close to the uniform distribution. Our method compares favorably with other existing Monte Carlo-based algorithms, and sometimes
An introduction to Monte Carlo methods
NASA Astrophysics Data System (ADS)
Walter, J.-C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest-neighbor interactions that is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous-time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, with the so-called worm algorithm. We conclude with an important discussion of dynamical effects such as thermalization and correlation time.
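The Metropolis algorithm for the Ising model described above fits in a short sketch: flip a random spin and accept the flip with probability min(1, e^(-βΔE)), which satisfies detailed balance. Lattice size, temperatures, and sweep counts below are illustrative choices:

```python
import math
import random

def metropolis_ising(l=16, beta=0.2, sweeps=400, seed=5):
    """Metropolis dynamics for the 2-D Ising model on an l x l lattice with
    periodic boundaries. Returns the absolute magnetization per spin of the
    final configuration. The energy change of flipping spin (i, j) is
    dE = 2 * s_ij * sum(neighbors)."""
    rng = random.Random(seed)
    spins = [[1] * l for _ in range(l)]  # start fully ordered
    for _ in range(sweeps * l * l):
        i, j = rng.randrange(l), rng.randrange(l)
        nb = (spins[(i + 1) % l][j] + spins[(i - 1) % l][j]
              + spins[i][(j + 1) % l] + spins[i][(j - 1) % l])
        d_e = 2.0 * spins[i][j] * nb
        # Metropolis acceptance: always accept downhill, else with prob e^{-beta dE}
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            spins[i][j] = -spins[i][j]
    return abs(sum(sum(row) for row in spins)) / (l * l)
```

Above the critical coupling (β_c ≈ 0.44 for the square lattice) the magnetization stays near 1, while at high temperature (small β) it fluctuates near 0, matching the two phases discussed in the abstract.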
New Hybrid Monte Carlo Methods for Efficient Sampling: from Physics to Biology
Reich, Sebastian
These generalized shadow Hybrid Monte Carlo (GSHMC) methods are for efficient sampling in physics, biology, materials science and statistics, and outperform known methods in sampling efficiency by an order of magnitude. Keywords: hybrid, Monte Carlo.
TOWARDS A HYBRID MONTE CARLO METHOD FOR RAREFIED GAS DYNAMICS
Pareschi, Lorenzo
Russel E. Caflisch and Lorenzo Pareschi. Abstract: For the Boltzmann equation, we present a hybrid Monte Carlo method that is robust … a non-equilibrium particle distribution and a Maxwellian. The hybrid distribution is then evolved by Monte Carlo
Multigroup cross section generation via Monte Carlo methods
Everett Lee Redmond II
1997-01-01
Monte Carlo methods of performing radiation transport calculations are heavily used in many different applications. However, despite their prevalence, Monte Carlo codes do not eliminate the need for other methods of analysis like discrete ordinates transport codes or even diffusion theory codes. For example: current Monte Carlo codes are not capable of performing transient analysis or continuous energy adjoint calculations.
MONTE CARLO METHODS IN GEOPHYSICAL INVERSE PROBLEMS
Sambridge, Malcolm
Malcolm Sambridge, Research School of Earth Sciences, 2002. [1] Monte Carlo inversion techniques were first used by Earth scientists more than 30 years ago in exploration seismology. This paper traces the development and application of Monte Carlo methods for inverse
MONTE CARLO ANALYSIS: ESTIMATING GPP WITH THE CANOPY CONDUCTANCE METHOD
DeLucia, Evan H.
1. Overview. A novel method … We performed a Monte Carlo analysis to investigate the power of our statistical approach: i.e., what … 2. Methods and Assumptions. The Monte Carlo analysis was performed as follows: Natural variation. The only study to date
Ming-hua Hsieh
2002-01-01
We review two types of adaptive Monte Carlo methods for rare event simulations. These methods are based on importance sampling. The first approach selects importance sampling distributions by minimizing the variance of importance sampling estimator. The second approach selects importance sampling distributions by minimizing the cross entropy to the optimal importance sampling distribution. We also review the basic concepts of
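The first approach above, importance sampling with a variance-reducing change of measure, can be illustrated on a classic rare-event toy problem: estimating P(Z > 4) for a standard normal via a mean shift. The shift to N(4, 1) is a standard textbook choice for illustration, not the authors' adaptive scheme:

```python
import math
import random

def rare_event_is(threshold=4.0, n=100_000, seed=11):
    """Estimate P(Z > threshold) for Z ~ N(0, 1) by importance sampling:
    draw z from the shifted proposal N(threshold, 1), which places most
    samples in the rare region, and reweight each hit by the likelihood
    ratio phi(z) / phi_shifted(z) = exp(-threshold*z + threshold^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(threshold, 1.0)
        if z > threshold:
            total += math.exp(-threshold * z + 0.5 * threshold ** 2)
    return total / n

est = rare_event_is()  # true value is about 3.17e-5
```

Naive Monte Carlo would see roughly three hits per hundred thousand samples here; the mean-shifted estimator makes nearly every sample informative, which is the variance-minimization idea the abstract describes.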
Density-matrix quantum Monte Carlo method
NASA Astrophysics Data System (ADS)
Blunt, N. S.; Rogers, T. W.; Spencer, J. S.; Foulkes, W. M. C.
2014-06-01
We present a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system at finite temperature. This allows arbitrary reduced density matrix elements and expectation values of complicated nonlocal observables to be evaluated easily. The method resembles full configuration interaction quantum Monte Carlo but works in the space of many-particle operators instead of the space of many-particle wave functions. One simulation provides the density matrix at all temperatures simultaneously, from T = ∞ to T = 0, allowing the temperature dependence of expectation values to be studied. The direct sampling of the density matrix also allows the calculation of some previously inaccessible entanglement measures. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices, the concurrence of one-dimensional spin rings, and the Rényi S2 entanglement entropy of various sublattices of the 6×6 Heisenberg model are calculated. The nature of the sign problem in the method is also investigated.
Quantum Monte Carlo methods for nuclear physics
Carlson, Joseph A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gandolfi, Stefano [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pederiva, Francesco [Univ. of Trento (Italy); Pieper, Steven C. [Argonne National Lab. (ANL), Argonne, IL (United States); Schiavilla, Rocco [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Old Dominion Univ., Norfolk, VA (United States); Schmidt, K. E, [Arizona State Univ., Tempe, AZ (United States); Wiringa, Robert B. [Argonne National Lab. (ANL), Argonne, IL (United States)
2012-01-01
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
J. Carlson; S. Gandolfi; F. Pederiva; Steven C. Pieper; R. Schiavilla; K. E. Schmidt; R. B. Wiringa
2015-04-29
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Dennis R Schaart; Adrie J J Bos; August J M Winkelman; Martijn C Clarijs
2002-01-01
The radial depth–dose distribution of a prototype 188W/188Re β-particle line source of known activity has been measured in a PMMA phantom, using a novel, ultra-thin type of LiF:Mg,Cu,P thermoluminescent detector (TLD). The measured radial dose function of this intravascular brachytherapy source agrees well with MCNP4C Monte Carlo simulations, which indicate that 188Re accounts for ≈99% of the dose between
4 Monte Carlo Methods in Classical Statistical Physics
Janke, Wolfhard
4 Monte Carlo Methods in Classical Statistical Physics. Wolfhard Janke, Institut für Theoretische Physik. Covers update algorithms (Metropolis, heat-bath, Glauber), then methods for the statistical analysis of the resulting data. Lect. Notes Phys. 739, 79–140 (2008), DOI 10
Monte Carlo mean-field method for spin systems
NASA Astrophysics Data System (ADS)
Henriques, Eduardo F.; Henriques, Vera B.; Salinas, S. R.
1995-04-01
We use a Monte Carlo mean-field method proposed by Netz and Berker to analyze the critical behavior of an Ising square lattice. We show that this method demands longer sampling times as compared with the conventional Monte Carlo simulations. Also, similar mean-field results can be obtained from self-consistent analytic calculations for small clusters of spins.
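The baseline against which the mean-field variant above is compared is conventional single-spin-flip Metropolis sampling of the Ising square lattice. A minimal sketch follows; the lattice size, temperature, and sweep count are illustrative choices, not those of the paper.

```python
import numpy as np

def metropolis_ising(L=16, T=1.5, sweeps=1000, seed=0):
    """Single-spin-flip Metropolis sampling of the 2D Ising model
    (J = 1, periodic boundaries). Returns |magnetization| per spin."""
    rng = np.random.default_rng(seed)
    spins = np.ones((L, L), dtype=int)      # ordered start (T below Tc)
    beta = 1.0 / T
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # sum of the four nearest neighbours (periodic boundaries)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb     # energy cost of flipping
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
    return abs(spins.mean())

magnetization = metropolis_ising()  # T = 1.5 is below Tc ~ 2.27
```

Below the critical temperature the lattice stays strongly magnetized, which is the regime where sampling-time comparisons such as the one above are typically made.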
Monte Carlo Methods: A Computational Pattern for Our Pattern Language
California at Berkeley, University of
Monte Carlo Methods: A Computational Pattern for Our Pattern Language. Jike Chong, University of California at Berkeley. The Monte Carlo Methods pattern is a computational software programming pattern in Our Pattern Language (OPL), which codifies tacit knowledge about software design. One can construct a pattern language using a set of related patterns.
Vectorized Monte Carlo methods for reactor lattice analysis
Brown, F.B.
1984-03-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
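The fundamental idea behind Monte Carlo vectorization in the two records above, replacing the history-at-a-time loop with array operations over a whole particle bank, can be illustrated on a toy problem. Here NumPy stands in for vector hardware; the absorbing-slab problem and its parameters are illustrative, not taken from the MCV code.

```python
import numpy as np

# Toy problem: fraction of particles crossing a purely absorbing slab
# of thickness t; the analytic answer is exp(-sigma * t). Both
# estimators sample the same exponential free paths, but the vectorized
# one handles the whole particle bank with array operations.
sigma, t, n = 1.0, 2.0, 100_000

def transmission_scalar(seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n):                        # one history per iteration
        if rng.exponential(1.0 / sigma) > t:
            hits += 1
    return hits / n

def transmission_vector(seed=1):
    rng = np.random.default_rng(seed)
    paths = rng.exponential(1.0 / sigma, size=n)   # whole bank at once
    return float(np.mean(paths > t))

scalar_est = transmission_scalar()
vec_est = transmission_vector()
exact = float(np.exp(-sigma * t))
```

On array hardware (or in NumPy) the vectorized estimator runs far faster at identical statistical quality, which is the effect the speedup figures in the abstract quantify for real lattice problems.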
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS. A. Kersch (Siemens), W. Morokoff, A. Schuster (Siemens). Discusses the application of quasi-Monte Carlo to this problem. 1.1 Radiative Heat Transfer Reactors: among the problems which can be solved by such a simulation is high-accuracy modeling of the radiative heat transfer
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS
RADIATIVE HEAT TRANSFER WITH QUASI-MONTE CARLO METHODS. A. Kersch, W. Morokoff, A. Schuster. High-accuracy modeling of the radiative heat transfer from the heater to the wafer; Figure 1 shows a draft of the reactor. Monte Carlo simulation is often used to solve radiative transfer problems where complex physical phenomena
Monte Carlo Methods for the Linearized Poisson-Boltzmann Equation
Simonov, Nikolai Aleksandrovich
algorithm, another, related, Monte Carlo algorithm is presented. This modified Monte Carlo method allows estimating certain Gaussian path integrals without the need for simulating Brownian trajectories in detail. We then similarly interpret the exponential weight in the Feynman-Kac formula as a survival
Monte Carlo Methods for Portfolio Credit Risk Tim J. Brereton
Kroese, Dirk P.
Monte Carlo Methods for Portfolio Credit Risk Tim J. Brereton Dirk P. Kroese School of Mathematics of this chapter is to survey the Monte Carlo techniques that are used in portfolio credit risk modeling. We discuss various approaches for modeling the dependencies between individual components of a portfolio
Calculating Air Resistance using the Monte Carlo Method
NSDL National Science Digital Library
Students will discover the terminal velocity to mass relationship and use this information to calculate the air resistance constant. They will evaluate the accuracy of their lab using the Monte Carlo method.
Low variance methods for Monte Carlo simulation of phonon transport
Péraud, Jean-Philippe M. (Jean-Philippe Michel)
2011-01-01
Computational studies in kinetic transport are of great use in micro and nanotechnologies. In this work, we focus on Monte Carlo methods for phonon transport, intended for studies in microscale heat transfer. After reviewing ...
Methods for calculating forces within quantum Monte Carlo simulations.
Badinski, A; Haynes, P D; Trail, J R; Needs, R J
2010-02-24
Atomic force calculations within the variational and diffusion quantum Monte Carlo methods are described. The advantages of calculating diffusion quantum Monte Carlo forces with the 'pure' rather than the 'mixed' probability distribution are discussed. An accurate and practical method for calculating forces using the pure distribution is presented and tested for the SiH molecule. The statistics of force estimators are explored and violations of the central limit theorem are found in some cases. PMID:21386380
A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
Li, Z.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ., Beijing, 100084 (China)
2012-07-01
Full core calculations are very useful and important in reactor physics analysis, especially in computing the full core power distributions, optimizing the refueling strategies and analyzing the depletion of fuels. To reduce the computing time and accelerate the convergence, a method named Response Matrix Monte Carlo (RMMC), based on analog Monte Carlo simulation, was used to calculate fixed-source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate the way to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries than repeated structures alone. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variance of the calculations. Finally, future research directions for making the RMMC and RMMC+MC methods more powerful are discussed at the end of this paper. (authors)
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
Mengkuo Wang
2006-01-01
In particle transport computations, the Monte Carlo simulation method is a widely used algorithm. There are several Monte Carlo codes available that perform particle transport simulations. However, the geometry packages and geometric modeling capability of Monte Carlo codes are limited, as they cannot handle complicated geometries made up of complex surfaces. Previous research exists that takes advantage of the
The Monte Carlo method in quantum field theory
Colin Morningstar
2007-02-20
This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
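The core computation the lectures describe, Metropolis updates of a discretized Euclidean path integral, can be sketched for the simplest case: the 1D harmonic oscillator (a quantum-mechanical toy rather than a field theory). The lattice spacing, sweep counts, and proposal width below are illustrative choices.

```python
import numpy as np

def harmonic_x2(n_sites=64, a=0.5, n_sweeps=4000, burn=500,
                step=1.0, seed=2):
    """Metropolis sampling of the Euclidean path integral for the 1D
    harmonic oscillator (m = omega = 1), periodic in imaginary time.
    Returns the Monte Carlo estimate of <x^2>."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_sites)
    samples = []
    for sweep in range(n_sweeps):
        for i in range(n_sites):
            xp = x[i] + step * (2.0 * rng.random() - 1.0)
            ip, im = (i + 1) % n_sites, (i - 1) % n_sites
            # local change of the action
            # S = sum_i [ (x_{i+1} - x_i)^2 / (2a) + a * x_i^2 / 2 ]
            dS = ((xp - x[ip]) ** 2 + (xp - x[im]) ** 2
                  - (x[i] - x[ip]) ** 2 - (x[i] - x[im]) ** 2) / (2.0 * a) \
                 + 0.5 * a * (xp ** 2 - x[i] ** 2)
            if dS <= 0 or rng.random() < np.exp(-dS):
                x[i] = xp
        if sweep >= burn:
            samples.append(np.mean(x ** 2))
    return float(np.mean(samples))

x2 = harmonic_x2()   # continuum value of <x^2> is 0.5
```

The estimate lands near the continuum value 0.5 (shifted slightly by the finite lattice spacing), the same kind of check one performs before moving to the scalar field theory of the lectures.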
Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)
2012-08-15
This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 125I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published 125I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h⁻¹ U⁻¹ (±1.73%) and 0.965 cGy h⁻¹ U⁻¹ (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than MCNP4c and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
The All Particle Monte Carlo Method: 1990 Status Report
J. A. Rathkopf; C. T. Ballinger; D. E. Cullen; S. T. Perkins; E. F. Plechaty
1990-01-01
Development of the All Particle Method, a project to simulate the transport of particles via the Monte Carlo method, has proceeded on two fronts: data collection and algorithm development. In this paper we report on the status of both these aspects. The data collection is nearly complete with the addition of electron and atomic data libraries and a newly revised
Adaptive Monte Carlo methods for rare event simulations
Ming-hua Hsieh
2002-01-01
We review two types of adaptive Monte Carlo methods for rare event simulations. These methods are based on importance sampling. The first approach selects importance sampling distributions by minimizing the variance of importance sampling estimator. The second approach selects importance sampling distributions by minimizing the cross entropy to the optimal importance sampling distribution. We also review the basic concepts of
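The first approach mentioned above, importance sampling for rare events, can be illustrated with a Gaussian tail probability: sampling from an exponentially tilted distribution concentrates the samples where the rare event happens, reducing the variance by orders of magnitude. The level and sample size below are illustrative.

```python
import math
import numpy as np

def tail_prob_is(level=4.0, n=100_000, seed=3):
    """Importance-sampling estimate of P(X > level) for X ~ N(0, 1),
    drawing from the tilted density N(level, 1) and reweighting by the
    likelihood ratio phi(y) / phi(y - level) = exp(-level*y + level^2/2)."""
    rng = np.random.default_rng(seed)
    y = rng.normal(level, 1.0, size=n)
    w = np.exp(-level * y + 0.5 * level ** 2)   # likelihood ratio
    return float(np.mean((y > level) * w))

est = tail_prob_is()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # about 3.2e-5
```

A crude estimator drawing directly from N(0, 1) would see the event only a few times in 100,000 samples; the tilted estimator hits it about half the time, which is why this style of variance reduction is the starting point for both the variance-minimization and cross-entropy selection rules surveyed above.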
Zeinali-Rafsanjani, B; Mosleh-Shirazi, M A; Faghihi, R; Karbasi, S; Mosalaei, A
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies including chest-wall irradiation using kilovoltage beam. PMID:26170553
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Monte Carlo Methods for Uncertainty Quantification. Mike Giles, Mathematical Institute, University of Oxford, October 25, 2013. Lecture 2 outline: Latin hypercube sampling; randomised quasi-Monte Carlo.
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Monte Carlo Methods for Uncertainty Quantification. Mike Giles, Mathematical Institute, University of Oxford, October 25, 2013. Lecture outline: quasi-Monte Carlo; Lecture 3.
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Monte Carlo Methods for Uncertainty Quantification. Mike Giles, Mathematical Institute, University of Oxford, October 25, 2013. Lecture outline: sampling; Latin hypercube; randomised quasi-Monte Carlo.
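The Latin hypercube sampling named in these lecture outlines can be sketched directly: each coordinate is stratified into n equal bins with exactly one point per bin, and the bins are randomly paired across dimensions. The integrand, sample size, and repeat counts below are illustrative, not taken from the lectures.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One Latin hypercube sample of n points in [0,1]^d: one point per
    stratum in each coordinate, strata randomly paired across dims."""
    perms = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (perms + rng.random((n, d))) / n

def plain_mc(n, d, rng):
    return rng.random((n, d))

def rmse(sampler, f, exact, n, d, reps, rng):
    """Root-mean-square integration error over repeated designs."""
    errs = [f(sampler(n, d, rng)).mean() - exact for _ in range(reps)]
    return float(np.sqrt(np.mean(np.square(errs))))

# Smooth test integrand over the unit square; exact value is (e - 1)^2.
f = lambda pts: np.prod(np.exp(pts), axis=1)
exact = (np.e - 1.0) ** 2
rng = np.random.default_rng(4)
mc_rmse = rmse(plain_mc, f, exact, 64, 2, 200, rng)
lhs_rmse = rmse(latin_hypercube, f, exact, 64, 2, 200, rng)
```

Stratifying every coordinate removes most of the variance contributed by the additive part of the integrand, so the Latin hypercube error is well below the plain Monte Carlo error at the same sample count, the effect the lectures quantify.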
A Monte Carlo method to compute the exchange coefficient in the double porosity model
Paris-Sud XI, UniversitÃ© de
A Monte Carlo method to compute the exchange coefficient in the double porosity model. Fabien. Keywords: Monte Carlo methods, double porosity model, random walk on squares, fissured media. AMS Classification: 76S05 (65C05 76M35). Published in Monte Carlo Methods Appl., Proc. of Monte Carlo and probabilistic
Neves, Lucio P; Silva, Eric A B; Perini, Ana P; Maidana, Nora L; Caldas, Linda V E
2012-07-01
The extrapolation chamber is a parallel-plate ionization chamber that allows variation of its air-cavity volume. In this work, an experimental study and MCNP-4C Monte Carlo code simulations of an ionization chamber designed and constructed at the Calibration Laboratory at IPEN to be used as a secondary dosimetry standard for low-energy X-rays are reported. The results obtained were within the international recommendations, and the simulations showed that the components of the extrapolation chamber may influence its response up to 11.0%. PMID:22182629
A new method to assess Monte Carlo convergence
Forster, R.A.; Booth, T.E.; Pederson, S.P.
1993-05-01
The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{-∞}^{+∞} f(x) dx = 1. Since f(x) is seldom known explicitly, Monte Carlo particle random walks sample f(x) implicitly. Unless there is a largest possible history score, the empirical f(x) must eventually decrease more steeply than 1/x³ for the second moment (∫_{-∞}^{+∞} x² f(x) dx) to exist.
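The tail-shape diagnostic described in this abstract can be sketched as a log-log slope fit on the empirical survival function of the history scores: the PDF f(x) falls more steeply than 1/x³ (finite second moment) exactly when the survival function falls more steeply than 1/x², i.e. when the fitted slope is below -2. The sample size, tail fraction, and the two test score distributions are illustrative, not the paper's procedure.

```python
import numpy as np

def tail_slope(scores, top_frac=0.01):
    """Log-log slope of the empirical survival function over the
    largest scores. Slope > -2 means the PDF falls slower than 1/x^3,
    so the variance (and CLT confidence intervals) are suspect."""
    n = len(scores)
    k = max(int(n * top_frac), 10)
    x = np.sort(scores)[-k:]                  # k largest scores
    surv = (np.arange(k, 0, -1) - 0.5) / n    # empirical P(X >= x)
    return float(np.polyfit(np.log(x), np.log(surv), 1)[0])

rng = np.random.default_rng(9)
n = 100_000
heavy = 1.0 + rng.pareto(1.5, n)   # survival ~ x^-1.5: infinite variance
light = rng.exponential(1.0, n)    # all moments finite
heavy_slope = tail_slope(heavy)    # shallow slope: flagged
light_slope = tail_slope(light)    # steep slope: converging safely
```

A heavy-tailed score distribution is flagged (slope above -2) while a light-tailed one passes, mirroring the paper's point that the unsampled tail, not the accumulated 1/√N error, is what threatens a confidence interval.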
Monte Carlo methods in applied mathematics and computational aerodynamics
O. M. Belotserkovskii; Yu. I. Khlopkov
2006-01-01
A survey of the Monte Carlo methods developed in the computational aerodynamics of rarefied gases is given, and application of these methods in unconventional fields is described. A short history of these methods is presented, and their advantages and drawbacks are discussed. A relationship of the direct statistical simulation of aerodynamical processes with the solution of kinetic equations is established;
Markov Chain Monte Carlo Methods in Biostatistics Andrew Gelman
Gelman, Andrew
Markov Chain Monte Carlo Methods in Biostatistics. Andrew Gelman, Department of Statistics, Columbia University. May 21, 1996. 1 Introduction. Appropriate models in biostatistics are often quite complicated, reflecting ... in biostatistics. These readers can use this article as an introduction to the ways in which Markov chain Monte
Efficient Monte Carlo Methods for Conditional Logistic Regression
Lee, Stephen
Efficient Monte Carlo Methods for Conditional Logistic Regression Cyrus R. MEHTA, Nitin R. PATEL, and Pralay SENCHAUDHURI Exact inference for the logistic regression model is based on generating the permutation distribution of the sufficient statistics for the regression parameters of interest conditional
Yet another variance reduction method for direct Monte Carlo
Gabrieli, John
Yet another variance reduction method for direct Monte Carlo simulations of low-signal flows. 1.1 Previous Work: Baker & Hadjiconstantinou, variance reduction. 1.2 Objective: can we develop a variance-reduction technique that uses DSMC as its main
Mammography X-Ray Spectra Simulated with Monte Carlo
Vega-Carrillo, H. R.; Gonzalez, J. Ramirez; Manzanares-Acuna, E.; Hernandez-Davila, V. M.; Villasana, R. Hernandez; Mercado, G. A. [Universidad Autonoma de Zacatecas Apdo. Postal 336, 98000 Zacatecas, Zac. Mexico (Mexico)
2008-08-11
Monte Carlo calculations have been carried out to obtain the x-ray spectra of various target-filter combinations for a mammography unit. Mammography is widely used to diagnose breast cancer. In addition to the Mo target with Mo filter combination, Rh/Rh, Mo/Rh, Mo/Al, Rh/Al, and W/Rh are also utilized. In this work, Monte Carlo calculations using the MCNP 4C code were carried out to estimate the x-ray spectra produced when a beam of 28 keV electrons collided with Mo, Rh and W targets. The resulting x-ray spectra show characteristic x-rays and continuous bremsstrahlung. Spectra were also calculated including filters.
Coherent multiple scattering effects and Monte Carlo method
V. L. Kuzmin; I. V. Meglinski
2004-01-01
Based on the comparison of the iteration procedure of solving the Bethe-Salpeter equation and the Monte Carlo method, we developed a method for simulating coherent multiple-scattering effects within the framework of a unified stochastic approach. The time correlation function and the interference component were calculated for the coherent backscattering from a multiply scattering medium.
A multilayer Monte Carlo method with free phase function choice
NASA Astrophysics Data System (ADS)
Watté, R.; Aernouts, B.; Saeys, W.
2012-06-01
This paper presents an adaptation of the widely accepted Monte Carlo method for Multi-layered media (MCML). Its original Henyey-Greenstein phase function is an interesting approach for describing how light scattering inside biological tissues occurs. It has the important advantage of generating deflection angles in an efficient, and therefore computationally fast, manner. However, in order to allow the fast generation of the phase function, the MCML code generates a distribution for the cosine of the deflection angle instead of generating a distribution for the deflection angle itself, causing a bias in the phase function. Moreover, other, more elaborate phase functions are not available in the MCML code. To overcome these limitations of MCML, it was adapted to allow the use of any discretized phase function. An additional tool allows generating a numerical approximation of the phase function for every layer. This could be a discretized version of (1) the Henyey-Greenstein phase function, (2) a modified Henyey-Greenstein phase function or (3) a phase function generated from Mie theory. These discretized phase functions are then stored in a look-up table, which can be used by the adapted Monte Carlo code. The Monte Carlo code with flexible phase function choice (fpf-MC) was compared with and validated against the original MCML code. The novelty of the developed program is a user-friendly algorithm that allows several types of phase functions to be generated and applied in a Monte Carlo method, without compromising computational performance.
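The look-up-table mechanism described above, a discretized phase function sampled by inverting its tabulated CDF, can be sketched as follows. This is not the fpf-MC code itself; the Henyey-Greenstein example, bin count, and anisotropy factor g are illustrative.

```python
import numpy as np

def build_phase_table(pdf, n_bins=4000):
    """Tabulate an arbitrary phase function over mu = cos(theta) and
    return the grid plus its normalized, discretized CDF."""
    mu_edges = np.linspace(-1.0, 1.0, n_bins + 1)
    centers = 0.5 * (mu_edges[:-1] + mu_edges[1:])
    cdf = np.concatenate(([0.0], np.cumsum(pdf(centers))))
    return mu_edges, cdf / cdf[-1]

def sample_mu(mu_edges, cdf, n, rng):
    """Inverse-transform sampling of deflection cosines by table lookup
    with linear interpolation inside the chosen bin."""
    u = rng.random(n)
    idx = np.clip(np.searchsorted(cdf, u) - 1, 0, len(mu_edges) - 2)
    frac = (u - cdf[idx]) / (cdf[idx + 1] - cdf[idx])
    return mu_edges[idx] + frac * (mu_edges[idx + 1] - mu_edges[idx])

g = 0.9  # illustrative anisotropy factor for tissue-like scattering
hg = lambda mu: (1 - g**2) / (2.0 * (1 + g**2 - 2 * g * mu) ** 1.5)
rng = np.random.default_rng(7)
mu_edges, cdf = build_phase_table(hg)
samples = sample_mu(mu_edges, cdf, 200_000, rng)
# the mean cosine of the Henyey-Greenstein phase function equals g
```

Because the table is built once per layer and sampling is a single binary search, any phase function (Henyey-Greenstein, modified HG, or Mie-derived) plugs in at essentially the same per-photon cost.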
A separable shadow Hamiltonian hybrid Monte Carlo method.
Sweet, Christopher R; Hampton, Scott S; Skeel, Robert D; Izaguirre, Jesús A
2009-11-01
Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc). PMID:19894997
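Plain HMC, the baseline that SHMC and S2HMC improve upon, can be sketched for a 1D Gaussian target: leapfrog dynamics generates the global proposal and a Metropolis test on the Hamiltonian error accepts or rejects it. The step size, trajectory length, and iteration count below are illustrative.

```python
import numpy as np

def hmc_standard_normal(n_iter=4000, eps=0.2, n_steps=10, seed=6):
    """Hybrid Monte Carlo for a standard normal target:
    H(x, p) = x^2/2 + p^2/2, leapfrog proposal + Metropolis test."""
    rng = np.random.default_rng(seed)
    x, out, accepted = 0.0, [], 0
    for _ in range(n_iter):
        p = rng.normal()                     # fresh momentum each step
        x_new, p_new = x, p
        p_new -= 0.5 * eps * x_new           # initial half momentum kick
        for step in range(n_steps):
            x_new += eps * p_new             # full position step
            if step < n_steps - 1:
                p_new -= eps * x_new         # full momentum step
        p_new -= 0.5 * eps * x_new           # final half momentum kick
        dH = 0.5 * (x_new**2 + p_new**2) - 0.5 * (x**2 + p**2)
        if rng.random() < np.exp(-dH):       # Metropolis on energy error
            x = x_new
            accepted += 1
        out.append(x)
    return np.array(out), accepted / n_iter

samples, acc_rate = hmc_standard_normal()
```

For this tiny system the leapfrog energy error is small and the acceptance rate is near one; the abstract's point is that this error, and hence the rejection rate, grows with system size, which is exactly what sampling from the (separable) shadow Hamiltonian mitigates.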
A new method to assess Monte Carlo convergence
Forster, R.A.; Booth, T.E.; Pederson, S.P.
1993-01-01
The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{-∞}^{+∞} f(x) dx = 1. Since f(x) is seldom known explicitly, Monte Carlo particle random walks sample f(x) implicitly. Unless there is a largest possible history score, the empirical f(x) must eventually decrease more steeply than 1/x³ for the second moment (∫_{-∞}^{+∞} x² f(x) dx) to exist.
Quasicontinuum Monte Carlo: A method for surface growth simulations
NASA Astrophysics Data System (ADS)
Russo, G.; Sander, L. M.; Smereka, P.
2004-03-01
We introduce an algorithm for treating growth on surfaces which combines important features of continuum methods (such as the level-set method) and kinetic Monte Carlo (KMC) simulations. We treat the motion of adatoms in continuum theory, but attach them to islands one atom at a time. The technique is borrowed from the dielectric breakdown model. Our method allows us to give a realistic account of fluctuations in island shape, which is lacking in deterministic continuum treatments and which is an important physical effect. Our method should be most important for problems close to equilibrium where KMC becomes impractically slow.
Protein Folding with the Adaptive Tempering Monte Carlo Method
NASA Astrophysics Data System (ADS)
Dong, X.; Klimov, D.; Blaisten-Barojas, E.
Characterization of the folding transition in a model protein was achieved with the recent multicanonical tempering method implemented with Monte Carlo, the adaptive tempering Monte Carlo (ATMC) method (X. Dong and E. Blaisten-Barojas, Adaptive tempering Monte Carlo method, J. Comput. Theor. Nanosci. 3, p. 118, 2006). The folding transition temperature was successfully determined, and a spread of states was observed around the interface between native and folded regions. Energy states collected from all tempering events in a series of parallel runs were used in the calculation of the free energy, internal energy, order parameter and radius of gyration as a function of temperature through the weighted histogram method. Not only are the calculated thermodynamic properties in good agreement with results from Langevin dynamics simulations (D. K. Klimov and D. Thirumalai, Native topology determines force-induced unfolding pathways in globular proteins, Proc. Natl. Acad. Sci. USA 97, p. 7254, 2000), but this multicanonical approach is also noticeably more efficient because of the adaptive manner in which the system visits states near a transition at the interface between two phases. Additionally, ATMC is advantageous for protein simulation over regular single-canonical-ensemble methods because it accelerates the hopping between local energy minima on the potential energy surface.
Monte Carlo Methods and Applications for the Nuclear Shell Model
Dean, D.J.; White, J.A.
1998-08-10
The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.
Direct simulation Monte Carlo method for an arbitrary intermolecular potential
NASA Astrophysics Data System (ADS)
Sharipov, Felix; Strapasson, José L.
2012-01-01
A scheme to implement an arbitrary intermolecular potential into the direct simulation Monte Carlo method is proposed. To illustrate the scheme, two benchmark problems are solved employing the Lennard-Jones potential. Since the computational effort of the new scheme is comparable with that of the hard sphere model of molecules, it can completely substitute the widely used models such as variable hard spheres and variable soft spheres.
Calculations of pair production by Monte Carlo methods
Bottcher, C.; Strayer, M.R.
1991-01-01
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
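The multi-dimensional integrals mentioned above are evaluated with the generic Monte Carlo estimator: the sample mean, with a statistical error that shrinks as 1/√N. A sketch on a toy integrand with a known answer (an illustrative stand-in for Feynman-diagram integrands, not the paper's calculation):

```python
import numpy as np

# Monte Carlo evaluation of an 8-dimensional integral with a known
# answer: the integral of x1*x2*...*x8 over the unit 8-cube is (1/2)^8.
d, n = 8, 400_000
rng = np.random.default_rng(11)
vals = np.prod(rng.random((n, d)), axis=1)   # integrand at each sample
est = float(vals.mean())                     # Monte Carlo estimate
err = float(vals.std(ddof=1) / np.sqrt(n))   # one-sigma error, ~1/sqrt(N)
exact = 0.5 ** d
```

The estimate agrees with the analytic value within a few multiples of the reported one-sigma error; quadrupling N would halve `err`, the scaling that makes Monte Carlo competitive for high-dimensional integrands where quadrature grids are hopeless.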
Statistical error of reactor calculations by the Monte Carlo method
Kalugin, M. A.; Oleynik, D. S.; Sukhino-Khomenko, E. A., E-mail: sukhino-khomenko@adis.vver.kiae.ru [Russian Research Centre Kurchatov Institute (Russian Federation)
2011-12-15
Algorithms for calculating the statistical error with allowance for intergenerational correlations are described. The algorithms are constructed on the basis of statistical analysis of the results of computations by the Monte Carlo method. As a result, simple rules for choosing the parameters of the computational techniques, such as the number of simulated generations necessary for attaining the required accuracy and the number of first skipped generations, are elaborated.
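The need to account for intergenerational correlations when estimating statistical error can be illustrated with batch means: a correlated AR(1) sequence stands in for generation-to-generation tallies, and batches much longer than the correlation length recover the true error that the naive i.i.d. formula underestimates. The correlation coefficient and batch count are illustrative, not the paper's algorithms.

```python
import numpy as np

def batch_means_se(x, n_batches=100):
    """Standard error of the mean from batch means; batches longer than
    the correlation length absorb the serial correlation."""
    m = len(x) // n_batches
    means = x[:m * n_batches].reshape(n_batches, m).mean(axis=1)
    return float(means.std(ddof=1) / np.sqrt(n_batches))

rng = np.random.default_rng(12)
n, rho = 100_000, 0.9
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = rho * x[t - 1] + e[t]   # correlated successive "generations"

naive_se = float(x.std(ddof=1) / np.sqrt(n))   # ignores correlation
batch_se = batch_means_se(x)                   # accounts for it
```

For an AR(1) chain the true error exceeds the naive one by a factor of about sqrt((1+rho)/(1-rho)), roughly 4.4 here, which is the kind of underestimate the paper's generation-skipping and sample-size rules are designed to prevent.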
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Lecture slides: Monte Carlo Methods for Uncertainty Quantification, Mike Giles, Mathematical Institute, University of Oxford, October 25, 2013. Lecture 3 presents an SPDE example and discusses weak convergence.
Path-integral Monte Carlo method for Rényi entanglement entropies
NASA Astrophysics Data System (ADS)
Herdman, C. M.; Inglis, Stephen; Roy, P.-N.; Melko, R. G.; Del Maestro, A.
2014-07-01
We introduce a quantum Monte Carlo algorithm to measure the Rényi entanglement entropies in systems of interacting bosons in the continuum. This approach is based on a path-integral ground state method that can be applied to interacting itinerant bosons in any spatial dimension with direct relevance to experimental systems of quantum fluids. We demonstrate how it may be used to compute spatial mode entanglement, particle partitioned entanglement, and the entanglement of particles, providing insights into quantum correlations generated by fluctuations, indistinguishability, and interactions. We present proof-of-principle calculations and benchmark against an exactly soluble model of interacting bosons in one spatial dimension. As this algorithm retains the fundamental polynomial scaling of quantum Monte Carlo when applied to sign-problem-free models, future applications should allow for the study of entanglement entropy in large-scale many-body systems of interacting bosons.
Multivariate Monte Carlo methods with clusters of galaxies
NASA Astrophysics Data System (ADS)
Peterson, J. R.; Jernigan, J. G.; Kahn, S. M.; Paerels, F. B. S.; Kaastra, J. S.; Miller, A.; Carlstrom, J.
We describe a novel Monte Carlo approach to both spectral fitting and spatial/spectral inversion of X-ray astronomy data, and illustrate its application in the analysis of observations of clusters of galaxies. The X-ray events are directly compared with simulations using multivariate generalizations of the Kolmogorov-Smirnov and Cramér-von Mises statistics. We demonstrate this method in studying the soft X-ray spectra of cooling-flow clusters with the Reflection Grating Spectrometers (RGS) on the XMM-Newton observatory. We also show preliminary results on simultaneously inverting X-ray and interferometric microwave Sunyaev-Zeldovich cluster data using a Monte Carlo technique. Various techniques are applied to simulate radiative transfer effects, model spatially-resolved sources, and simulate instrument response. We then apply statistical tests in the multi-dimensional data space.
Low energy photon dosimetry using Monte Carlo and convolution methods
NASA Astrophysics Data System (ADS)
Modrick, Joseph Michael
Low energy photon dosimetry was investigated using Monte Carlo and convolution methods. Photon energy deposition kernels describing the three dimensional distribution of energy deposition about a primary photon interaction site were computed using EGS4 Monte Carlo. These photon energy deposition kernels were utilized as the convolution kernel in convolution/superposition dose calculations. A Monte Carlo benchmark describing the energy deposition about an isotropic photon point source model was developed. The effect of the inclusion of low energy photon interaction physics on the Monte Carlo and convolution calculations was investigated. A generalized convolution/superposition algorithm was developed to explicitly account for the orientation of the energy deposition kernel for an isotropic photon point source in a brachytherapy geometry. Energy deposition kernel calculations using the EGS4 ``scatter sphere'' code SCASPH were extended to low photon energy. Convolution/superposition dose calculations using these kernels for external beam geometries demonstrated agreement with measurements for low energy diagnostic x-ray beam spectra. The effect of the inclusion of Rayleigh scattering using atomic and molecular coherent scattering form factor data on the kernel calculations was shown to result in an angular distribution of energy deposition consistent with the angular distribution of photon scattering described by the form factor data. Convolution/superposition dose calculations using these kernels did not exhibit any effect of the angular distribution of the kernel. Monte Carlo calculations for an isotropic photon point source including the effects of Rayleigh scatter in a homogeneous medium did not demonstrate any effect of the angular distribution of Rayleigh scattering. Calculations in heterogeneous geometries also did not exhibit any effect of the angular distribution of Rayleigh scattering at low photon energy.
Convolution dose calculations using the generalized algorithm demonstrated agreement with the results of the Monte Carlo benchmark. The necessity of applying a correction factor when properly accounting for the orientation of the energy deposition kernel was also demonstrated. The generalized algorithm was also shown to exhibit a discretization artifact from utilizing the discrete energy deposition kernel data.
Improved criticality convergence via a modified Monte Carlo iteration method
Booth, Thomas E [Los Alamos National Laboratory; Gubernatis, James E [Los Alamos National Laboratory
2009-01-01
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k₃|/k₁ instead of |k₂|/k₁. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must sum somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc manner.
Monte Carlo Radiation-Hydrodynamics With Implicit Methods
NASA Astrophysics Data System (ADS)
Roth, Nathaniel; Kasen, Daniel
2015-03-01
We explore the application of Monte Carlo transport methods to solving coupled radiation-hydrodynamics (RHD) problems. We use a time-dependent, frequency-dependent, three-dimensional radiation transport code that is special relativistic and includes some detailed microphysical interactions such as resonant line scattering. We couple the transport code to two different one-dimensional (non-relativistic) hydrodynamics solvers: a spherical Lagrangian scheme and an Eulerian Godunov solver. The gas-radiation energy coupling is treated implicitly, allowing us to take hydrodynamical time-steps that are much longer than the radiative cooling time. We validate the code and assess its performance using a suite of radiation hydrodynamical test problems, including ones in the radiation energy dominated regime. We also develop techniques that reduce the noise of the Monte Carlo estimated radiation force by using the spatial divergence of the radiation pressure tensor. The results suggest that Monte Carlo techniques hold promise for simulating the multi-dimensional RHD of astrophysical systems.
Scoring methods for implicit Monte Carlo radiation transport
Edwards, A.L.
1981-01-01
Analytical and numerical tests were made of a number of possible methods for scoring the energy exchange between radiation and matter in the implicit Monte Carlo (IMC) radiation transport scheme of Fleck and Cummings. The interactions considered were effective absorption, elastic scattering, and Compton scattering. The scoring methods tested were limited to simple combinations of analogue, linear expected value, and exponential expected value scoring. Only two scoring methods were found that produced the same results as a pure analogue method. These are a combination of exponential expected value absorption and deposition and analogue Compton scattering of the particle, with either linear expected value Compton deposition or analogue Compton deposition. In both methods, the collision distance is based on the total scattering cross section.
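The distinction between analogue and expected-value (implicit-capture) scoring can be illustrated on a toy 1-D slab problem. This sketch is unrelated to the Fleck-Cummings scheme itself; the geometry, cross sections, and forward-only "scattering" are simplifying assumptions chosen so that both estimators target the same quantity:

```python
import math
import random

def absorbed_fraction(n, sigma_t=1.0, absorb_prob=0.3, slab=2.0,
                      analogue=True, seed=3):
    """Toy 1-D slab: particles start at x=0 moving right and travel
    exponential free paths with total cross section sigma_t.  Analogue
    scoring kills a particle outright on absorption; expected-value
    (implicit-capture) scoring deposits the expected absorbed weight at
    every collision and lets a reduced-weight particle continue."""
    rng = random.Random(seed)
    deposited = 0.0
    for _ in range(n):
        x, w = 0.0, 1.0
        while True:
            x += -math.log(rng.random()) / sigma_t   # next collision site
            if x > slab:                             # leaked out of the slab
                break
            if analogue:
                if rng.random() < absorb_prob:       # analogue absorption
                    deposited += w
                    break
                # otherwise "scatter": direction kept fixed in this sketch
            else:
                deposited += w * absorb_prob         # expected-value deposit
                w *= 1.0 - absorb_prob               # survival biasing
                if w < 1e-6:                         # weight cutoff
                    break
    return deposited / n

a = absorbed_fraction(200_000, analogue=True)
b = absorbed_fraction(200_000, analogue=False)
```

Both estimators converge to the same absorbed fraction (here 1 - exp(-0.3 * 2)); the expected-value variant simply trades per-history randomness for deterministic weight bookkeeping, usually at lower variance.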
A simple eigenfunction convergence acceleration method for Monte Carlo
Booth, Thomas E [Los Alamos National Laboratory
2010-11-18
Monte Carlo transport codes typically use a power iteration method to obtain the fundamental eigenfunction. The standard convergence rate for the power iteration method is the ratio of the first two eigenvalues, that is, k₂/k₁. Modifications to the power method have accelerated the convergence by explicitly calculating the subdominant eigenfunctions as well as the fundamental. Calculating the subdominant eigenfunctions requires using particles of negative and positive weights and appropriately canceling the negative and positive weight particles. Incorporating both negative weights and a ± weight cancellation requires a significant change to current transport codes. This paper presents an alternative convergence acceleration method that does not require modifying the transport codes to deal with the problems associated with tracking and canceling particles of ± weights. Instead, only positive weights are used in the acceleration method.
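The k₂/k₁ convergence rate of power iteration quoted above is easy to observe on a small matrix. A minimal sketch (a dense-matrix stand-in, not a transport code):

```python
def power_iteration(A, n_iter):
    """Power iteration on a small dense matrix: repeatedly apply A and
    normalize.  The dominant-eigenvalue estimate converges geometrically
    at rate |k2|/|k1|, the ratio of the first two eigenvalues."""
    v = [1.0] + [0.0] * (len(A) - 1)   # deliberately non-eigenvector start
    k = 0.0
    for _ in range(n_iter):
        w = [sum(A[i][j] * v[j] for j in range(len(v)))
             for i in range(len(v))]
        k = max(abs(x) for x in w)     # eigenvalue estimate
        v = [x / k for x in w]         # normalized iterate
    return k, v

# Symmetric 2x2 matrix with eigenvalues 3 and 1: error shrinks by 1/3 per step
A = [[2.0, 1.0],
     [1.0, 2.0]]
k1, _ = power_iteration(A, 50)
```

Replacing the matrix-vector product with a fission-source transport sweep gives the criticality analogue, where k₁ is the multiplication factor.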
Comparison of vectorization methods used in a Monte Carlo code
Nakagawa, M.; Mori, T.; Sasaki, M. (Japan Atomic Energy Research Inst., Tokai Establishment Tokai-mura, Ibaraki-ken 319-11 (JP))
1991-01-01
This paper examines vectorization methods used in Monte Carlo codes for particle transport calculations. Event and zone selection methods developed from conventional all-zone and one-zone algorithms have been implemented in a general-purpose vectorized code, GMVP. Moreover, a vectorization procedure to treat multiple-lattice geometry has been developed using these methods. Use of lattice geometry can reduce the computation cost for a typical pressurized water reactor fuel subassembly calculation, especially when the zone selection method is used. Sample calculations for external and fission source problems are used to compare the performances of both methods with the results of conventional scalar codes. Though the speedup resulting from vectorization depends on the problem solved, a factor of 7 to 10 is obtained for practical problems on the FACOM VP-100 computer compared with the conventional scalar code, MORSE-CG.
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Lecture slides: Monte Carlo Methods for Uncertainty Quantification, Mike Giles, Mathematical Institute, University of Oxford; KU Leuven Summer School on Uncertainty Quantification, May 30-31, 2013. Topics include Latin hypercube sampling and randomised quasi-Monte Carlo.
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Lecture slides on SDEs in finance (Mike Giles, Oxford, May 30-31, 2013): in computational finance, stochastic differential equations are used to model the behaviour of stocks and interest rates.
Monte Carlo Methods for Computation and Optimization (048715) Winter 2013/4
Shimkin, Nahum
Lecture notes for a first, graduate-level course on Monte Carlo Methods for Computation and Optimization (048715), Winter 2013/4, by Nahum Shimkin. References include Simulation and the Monte Carlo Method (Wiley, 2008) and S. Asmussen and P. Glynn, Stochastic Simulation.
Monte Carlo Methods for Pricing and Hedging American Options in High Dimension
Caramellino, Lucia
Summary: we numerically compare some recent Monte Carlo algorithms devoted to the pricing and hedging of American options with respect to other Monte Carlo methods in terms of computing time, and propose to suitably combine these approaches.
Monte Carlo methods designed for parallel computation, Sheldon B. Opps and Jeremy Schofield
Schofield, Jeremy
A feature of these methods is that individual Monte Carlo chains, which are run on separate nodes, are coupled together, for example to improve the statistics of a Monte Carlo simulation.
Monte-Carlo valuation of American options: facts and new algorithms to improve existing methods
Boyer, Edmond
The aim is to discuss efficient algorithms for the pricing of American options by two recently proposed Monte-Carlo-type methods, including the quantization approach. Key words: American options, Monte Carlo methods.
Monte Carlo Methods for Exact & Efficient Solution of the Generalized Optimality Equations
In this paper, we introduce Monte Carlo methods to solve the generalized optimality equations. In particular, the number of Monte Carlo proposals is seen to be essentially independent of the complexity of planning.
Physics-based Predictive Time Propagation Method for Monte Carlo Coupled Depletion Simulations
Johns, Jesse Merlin
2014-12-18
The Monte Carlo method for solving reactor physics problems is one of the most robust numerical techniques for analyzing a wide variety of systems in the realm of reactor engineering. Monte Carlo simulations sample on fundamental physical processes...
Liang, Y. Daniel
Case Study: Monte Carlo Simulation. Monte Carlo simulation uses random numbers and probability to solve problems in fields such as chemistry and finance. This section gives an example of using Monte Carlo simulation for estimating π. To estimate π using the Monte Carlo method, draw a circle with its bounding square: the unit circle centred at the origin inside the square spanning -1 to 1 on both the x and y axes.
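The hit-or-miss estimate described above can be sketched directly: sample points uniformly in the bounding square and count the fraction landing inside the circle. Since the circle-to-square area ratio is π/4, the hit fraction times 4 approximates π:

```python
import random

def estimate_pi(n, seed=42):
    """Estimate pi by sampling points uniformly in the square
    [-1,1] x [-1,1] and counting the fraction that fall inside the
    unit circle: pi ~ 4 * (hits / n)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n

pi_hat = estimate_pi(1_000_000)
```

With a million samples the estimate is typically within a few thousandths of π; the error decreases as 1/sqrt(n).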
Analysis of real-time networks with monte carlo methods
NASA Astrophysics Data System (ADS)
Mauclair, C.; Durrieu, G.
2013-12-01
Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to make the best use of them and to lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, due to the scarcity of the situations that lead to worst-case scenarios. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.
Application of Monte Carlo methods in tomotherapy and radiation biophysics
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for helical tomotherapy is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University).
Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published experimental and theoretical studies for 60Co gamma-rays and low-energy x-rays. The reported studies provide new information about the potential biological consequences of diagnostic x-rays and selected gamma-emitting radioisotopes used in brachytherapy for the treatment of cancer. The proposed methodology is computationally efficient and may also be useful in proton therapy, space applications or internal dosimetry.
Application of Exchange Monte Carlo Method to Ordering Dynamics
NASA Astrophysics Data System (ADS)
Okabe, Yutaka
1997-08-01
The ordering dynamics in the spinodal decomposition is an interesting problem. Especially for the case of the conserved order parameter, it is difficult to determine the late-stage growth law due to the slow dynamics. We apply the exchange Monte Carlo method [1] to the ordering dynamics of the three-state Potts model with the conserved order parameter. Even for the case of a deep quench to low temperatures, we have observed rapid domain growth; we have proved the efficiency of the exchange Monte Carlo method for the ordering process. Although the exchange dynamics is not considered to be related to a real one, we have found that domain growth is controlled by a simple algebraic growth law, R(t) ~ t^1/3. The value is consistent with a direct simulation [2] for the same model. [1] K. Hukushima and K. Nemoto, J. Phys. Soc. Jpn. 65, 1604 (1996). [2] C. Jeppesen and O. G. Mouritsen, Phys. Rev. B 47, 14724 (1993).
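The exchange (replica-exchange, or parallel-tempering) move can be sketched on a much simpler system than the conserved-order-parameter Potts model of the abstract. In the illustration below, a cold Metropolis walker on a 1-D double-well energy is coupled to hotter replicas; the potential, temperatures, and step sizes are all assumptions chosen for the demonstration:

```python
import math
import random

def exchange_mc(betas, n_sweeps, seed=7):
    """Replica-exchange sketch on the 1-D double well E(x) = (x^2 - 1)^2:
    one Metropolis walker per inverse temperature, with swap attempts
    between neighbouring replicas after every sweep."""
    rng = random.Random(seed)
    energy = lambda x: (x * x - 1.0) ** 2
    xs = [1.0] * len(betas)              # all walkers start in the right well
    left_well = 0
    for _ in range(n_sweeps):
        for i, beta in enumerate(betas): # local Metropolis moves
            trial = xs[i] + rng.uniform(-0.5, 0.5)
            dE = energy(trial) - energy(xs[i])
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                xs[i] = trial
        for i in range(len(betas) - 1):  # neighbour configuration swaps
            d = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
            if d >= 0.0 or rng.random() < math.exp(d):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
        if xs[0] < 0.0:                  # is the coldest replica in the left well?
            left_well += 1
    return left_well

# The cold chain (beta = 8) alone would cross the barrier at x = 0 only
# rarely; coupled to hotter chains it visits both wells freely.
left = exchange_mc(betas=[8.0, 3.0, 1.0, 0.5], n_sweeps=20_000)
```

The swap acceptance min(1, exp((β_i - β_j)(E_i - E_j))) preserves detailed balance, which is why the exchange move accelerates equilibration without biasing the sampled distribution.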
Monte Carlo methods for short polypeptides Jeremy Schofield a) and Mark A. Ratner
Schofield, Jeremy
Nonphysical sampling Monte Carlo techniques that enable average structural properties of short in vacuo polypeptide chains to be calculated accurately are discussed, together with updating algorithms developed for Monte Carlo simulation.
Monte Carlo method for determining earthquake recurrence parameters from short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate recurrence parameters. Short records can lead to an overestimate of the hazard should they be used in probability calculations; therefore a Monte Carlo approach is adopted.
ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods
Ibrahim, A. [University of Wisconsin; Mosher, Scott W [ORNL; Evans, Thomas M [ORNL; Peplow, Douglas E. [ORNL; Sawan, M. [University of Wisconsin; Wilson, P. [University of Wisconsin; Wagner, John C [ORNL; Heltemes, Thad [University of Wisconsin, Madison
2011-01-01
The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~ 275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~ 9 compared with the traditional methods of variance reduction. By providing a factor of ~ 21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.
Sadeghi, Mahdi; Hamed Hosseini, S
2010-01-01
Monte Carlo calculations have been performed using the MCNP4C code for an iodine seed design. As the ADVANTAGE I-125, Model IAI-125 source is commercially available for interstitial brachytherapy treatment, dosimetric characteristics (dose rate constant, radial dose function and anisotropy function) of this source were theoretically determined following the updated AAPM Task Group 43 (TG-43U1) recommendations. The dose distribution around the seed was calculated with Monte Carlo simulation in liquid water. The Monte Carlo calculated dose rate constant of this source in water was found to be 0.986 cGy h⁻¹ U⁻¹, with an approximate uncertainty of 0.4%. The obtained result has been compared with a previous study; comparison of the calculated dose rate constant with the value presented by Meigooni et al. shows very good agreement. The anisotropy function and the radial dose function for this source are also graphically compared. PMID:19762248
Applications of the Fixed-Node Quantum Monte Carlo Method
NASA Astrophysics Data System (ADS)
Kulahlioglu, Adem Halil
Quantum Monte Carlo (QMC) is a highly sophisticated quantum many-body method. Diffusion Monte Carlo (DMC), a projector QMC method, is a stochastic solution of the stationary Schrödinger equation. It is, in principle, an exact method. However, in dealing with fermions, because trial wave functions must satisfy the antisymmetry condition of many-body fermionic systems, it inevitably encounters the fermion sign problem. One of the ways to circumvent the sign problem is to impose the so-called fixed-node approximation. The fixed-node DMC (FN-DMC), a highly promising method, is emerging as the method of choice for the correlated treatment of many-body electronic structure problems, since it is much more accurate than Kohn-Sham DFT and has accuracy competitive with CCSD(T) but scales better with system size than CCSD(T). An important drawback of FN-DMC is the fixed-node bias introduced by the approximate nature of the trial wave function nodes. In this dissertation, we examine the fixed-node bias and its restrictive impact on the accuracy of FN-DMC. The electron-density dependence of the fixed-node bias is also discussed for a relatively small atomic system. We also applied FN-DMC to a relatively large molecular system with a transition metal, zinc porphyrin, to calculate the excitation energy in an adiabatic limit (vertical excitation). We found that FN-DMC results agree well with experimental values as well as with results obtained by other correlated ab initio methods such as CCSD. In addition, we used FN-DMC to study a transition metal dimer, Mo2, which is a challenging system for theoretical studies since there is a large amount of many-body correlation. We constructed the antisymmetric part (Slater part) of the trial wave function by means of the Selected-CI method. Moreover, we carried out CCSD(T) calculations in order to compare FN-DMC energies with those of another correlated method.
FN-DMC and CCSD(T) calculations in Mo2, which is dominant with d-d bondings, enabled us to make comparisons between these two competitive methods and investigate the limitations impairing FN-DMC accuracy.
NASA Astrophysics Data System (ADS)
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescence (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001).
Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate lead content of a human leg up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.
A modified Monte Carlo 'local importance function transform' method
Keady, K. P.; Larsen, E. W. [University of Michigan, Department of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Blvd., Ann Arbor, MI 48109 (United States)
2013-07-01
The Local Importance Function Transform (LIFT) method uses an approximation of the contribution transport problem to bias a forward Monte-Carlo (MC) source-detector simulation [1-3]. Local (cell-based) biasing parameters are calculated from an inexpensive deterministic adjoint solution and used to modify the physics of the forward transport simulation. In this research, we have developed a new expression for the LIFT biasing parameter, which depends on a cell-average adjoint current to scalar flux (J*/φ*) ratio. This biasing parameter differs significantly from the original expression, which uses adjoint cell-edge scalar fluxes to construct a finite difference estimate of the flux derivative; the resulting biasing parameters exhibit spikes in magnitude at material discontinuities, causing the original LIFT method to lose efficiency in problems with high spatial heterogeneity. The new J*/φ* expression, while more expensive to obtain, generates biasing parameters that vary smoothly across the spatial domain. The result is an improvement in simulation efficiency. A representative test problem has been developed and analyzed to demonstrate the advantage of the updated biasing parameter expression with regard to solution figure of merit (FOM). For reference, the two variants of the LIFT method are compared to a similar variance reduction method developed by Depinay [4, 5], as well as MC with deterministic adjoint weight windows (WW).
Wave-function Monte Carlo method for simulating conditional master equations
Jacobs, Kurt [Department of Physics, University of Massachusetts at Boston, Boston, Massachusetts 02125 (United States) and Hearne Institute for Theoretical Physics, Louisiana State University, Baton Rouge, Louisiana 70803 (United States)
2010-04-15
Wave-function Monte Carlo methods are an important tool for simulating quantum systems, but the standard method cannot be used to simulate decoherence in continuously measured systems. Here I present a Monte Carlo method for such systems. This was used to perform the simulations of a continuously measured nanoresonator in [Phys. Rev. Lett. 102, 057208 (2009)].
Cool walking: a new Markov chain Monte Carlo sampling method.
Brown, Scott; Head-Gordon, Teresa
2003-01-15
Effective relaxation processes for difficult systems like proteins or spin glasses require special simulation techniques that permit barrier crossing to ensure ergodic sampling. Numerous adaptations of the venerable Metropolis Monte Carlo (MMC) algorithm have been proposed to improve its sampling efficiency, including various hybrid Monte Carlo (HMC) schemes, and methods designed specifically for overcoming quasi-ergodicity problems such as Jump Walking (J-Walking), Smart Walking (S-Walking), Smart Darting, and Parallel Tempering. We present an alternative to these approaches that we call Cool Walking, or C-Walking. In C-Walking two Markov chains are propagated in tandem, one at a high (ergodic) temperature and the other at a low temperature. Nonlocal trial moves for the low temperature walker are generated by first sampling from the high-temperature distribution, then performing a statistical quenching process on the sampled configuration to generate a C-Walking jump move. C-Walking needs only one high-temperature walker, satisfies detailed balance, and offers the important practical advantage that the high and low-temperature walkers can be run in tandem with minimal degradation of sampling due to the presence of correlations. To make the C-Walking approach more suitable to real problems we decrease the required number of cooling steps by attempting to jump at intermediate temperatures during cooling. We further reduce the number of cooling steps by utilizing "windows" of states when jumping, which improves acceptance ratios and lowers the average number of cooling steps. We present C-Walking results with comparisons to J-Walking, S-Walking, Smart Darting, and Parallel Tempering on a one-dimensional rugged potential energy surface in which the exact normalized probability distribution is known. C-Walking shows superior sampling as judged by two ergodic measures. PMID:12483676
Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method
NASA Astrophysics Data System (ADS)
Saini, P. Sri; Prince, Shanthi
2011-10-01
At present, there is a lot of interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication is used. However, underwater communication is moving towards optical communication, which offers higher bandwidth than acoustic communication but a comparatively shorter range. Underwater Optical Wireless Communication (OWC) is mainly affected by the absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes leading to optical variability. Scattering has two effects: attenuation of the signal and inter-symbol interference (ISI); ISI is ignored in the present paper. Therefore, in order to have an efficient underwater OWC link it is necessary to model the channel efficiently. In this paper, the underwater optical channel is modeled using the Monte-Carlo method, which provides the most general and most flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It has been observed that for pure sea water and low-chlorophyll conditions the blue wavelength is absorbed least, whereas in a chlorophyll-rich environment the red wavelength is absorbed less than blue and green.
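The role of the attenuation coefficient c = a + b can be illustrated with a minimal photon Monte Carlo that estimates the un-collided transmission of a collimated beam and compares it with the Beer-Lambert law. The coefficient values below are illustrative assumptions, not data from the paper:

```python
import math
import random

def beam_transmission(n, a, b, depth, seed=11):
    """Monte Carlo estimate of the un-collided (line-of-sight) transmission
    of a collimated beam through water with absorption coefficient a and
    scattering coefficient b.  Each photon draws an exponential interaction
    distance governed by the attenuation coefficient c = a + b; photons
    whose first interaction lies beyond `depth` arrive unscattered."""
    rng = random.Random(seed)
    c = a + b
    survived = sum(1 for _ in range(n)
                   if -math.log(rng.random()) / c > depth)
    return survived / n

# Illustrative clear-water coefficients (per metre), 10 m path
t = beam_transmission(500_000, a=0.05, b=0.03, depth=10.0)
expected = math.exp(-(0.05 + 0.03) * 10.0)  # Beer-Lambert prediction
```

Tracking what happens to the scattered photons (re-emission direction, multiple scattering, arrival-time spread) is what turns this sketch into a full channel model with ISI.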
Hierarchical Monte Carlo methods for fractal random fields
Elliott, F.W. Jr.; Majda, A.J.; Horntrop, D.J. [New York Univ., NY (United States)] [and others
1995-11-01
Two hierarchical Monte Carlo methods for the generation of self-similar fractal random fields are compared and contrasted. The first technique, successive random addition (SRA), is currently popular in the physics community. Despite the intuitive appeal of SRA, rigorous mathematical reasoning reveals that SRA cannot be consistent with any stationary power-law Gaussian random field for any Hurst exponent; furthermore, there is an inherent ratio of largest to smallest putative scaling constant necessarily exceeding a factor of 2 for a wide range of Hurst exponents H, with 0.30 < H < 0.85. Thus, SRA is inconsistent with a stationary power-law fractal random field and would not be useful for problems that do not utilize additional spatial averaging of the velocity field. The second hierarchical method for fractal random fields has recently been introduced by two of the authors and relies on a suitable explicit multiwavelet expansion (MWE) with high-moment cancellation. This method is described briefly, including a demonstration that, unlike SRA, MWE is consistent with a stationary power-law random field over many decades of scaling and has low variance.
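A minimal one-dimensional version of the SRA construction criticized above might look as follows (a hypothetical sketch with illustrative parameters; it does not reproduce the MWE method):

```python
import random

def sra_1d(levels, hurst, sigma=1.0, seed=0):
    """Successive random addition in 1D: repeatedly midpoint-interpolate
    and add Gaussian noise to *all* points (the defining feature of SRA),
    with the noise scale shrinking by 2**(-hurst) per level.
    Returns 2**levels + 1 samples of an approximate fractal profile."""
    rng = random.Random(seed)
    field = [0.0, 0.0]
    for level in range(levels):
        # interpolate midpoints between neighbouring samples
        new = []
        for left, right in zip(field, field[1:]):
            new += [left, 0.5 * (left + right)]
        new.append(field[-1])
        # add noise everywhere, not just at the new midpoints
        s = sigma * 2.0 ** (-hurst * (level + 1))
        field = [x + rng.gauss(0.0, s) for x in new]
    return field

profile = sra_1d(levels=8, hurst=0.7)   # 2**8 + 1 = 257 samples
```

The paper's point is that no choice of the per-level scaling constant makes this construction consistent with a stationary power-law Gaussian field.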
LISA data analysis using Markov chain Monte Carlo methods
Cornish, Neil J.; Crowder, Jeff [Department of Physics, Montana State University, Bozeman, Montana 59717 (United States)
2005-08-15
The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions.
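The annealing-then-supercooling idea can be illustrated on a one-parameter toy problem (a hypothetical sketch; the actual LISA analysis uses an F-statistic likelihood over a parameter space of dimension in the tens of thousands):

```python
import random, math

def mh_anneal(data, steps=20_000, seed=2):
    """Toy Metropolis-Hastings sampler for the mean of Gaussian data with
    simulated annealing: the inverse temperature beta ramps up, so the
    chain first explores broadly and is then 'supercooled' toward the
    maximum-likelihood estimate (here the sample mean)."""
    rng = random.Random(seed)
    def loglike(mu):
        return -0.5 * sum((x - mu) ** 2 for x in data)
    mu = 0.0
    cur = loglike(mu)
    best, best_ll = mu, cur
    for i in range(steps):
        beta = 0.1 + 5.0 * i / steps            # annealing schedule
        prop = mu + rng.gauss(0.0, 0.2)
        pll = loglike(prop)
        if math.log(rng.random() + 1e-300) < beta * (pll - cur):
            mu, cur = prop, pll
            if cur > best_ll:
                best, best_ll = mu, cur
    return best

rng = random.Random(0)
data = [3.0 + rng.gauss(0.0, 1.0) for _ in range(200)]
estimate = mh_anneal(data)   # should land close to the sample mean
```

Early, "hot" iterations flatten the likelihood so the chain can move freely; late, "cold" iterations pin it to the maximum-likelihood point, mirroring the supercooling step described in the abstract.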
ACOUSTIC NODE CALIBRATION USING HELICOPTER SOUNDS AND MONTE CARLO MARKOV CHAIN METHODS
Cevher, Volkan
A Monte Carlo method is used to calibrate a randomly placed sensor node using helicopter sounds. The calibration is based on using the GPS information from the helicopter and the estimated DOA's at the node
Monte Carlo methods for design and analysis of radiation detectors
Shultis, J. Kenneth
Keywords: radiation detectors; inverse problems; detector design. An overview of Monte Carlo as a practical method for designing and analyzing radiation detectors is provided. The emphasis is on detectors
Monte Carlo Methods for Uncertainty Quantification Mathematical Institute, University of Oxford
Giles, Mike
Lecture slides, May 30-31, 2013. In computational finance, stochastic differential equations are used to model the behaviour of stocks and interest rates. Stochastic differential equations are just ordinary
Leon E. Smith; Christopher J. Gesh; Richard T. Pagh; Erin A. Miller; Mark W. Shaver; Eric D. Ashbaker; Michael T. Batdorf; J. Edward Ellis; William R. Kaye; Ronald J. McConn; George H. Meriwether; Jennifer J. Ressler; Andrei B. Valsan; Todd A. Wareing
2008-01-01
Simulation is often used to predict the response of gamma-ray spectrometers in technology viability and comparative studies for homeland and national security scenarios. Candidate radiation transport methods generally fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are the most heavily used in the detection community and are particularly effective for calculating pulse-height spectra
Convolution/superposition using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Naqvi, Shahid A.; Earl, Matthew A.; Shepard, David M.
2003-07-01
The convolution/superposition calculations for radiotherapy dose distributions are traditionally performed by convolving polyenergetic energy deposition kernels with TERMA (total energy released per unit mass) precomputed in each voxel of the irradiated phantom. We propose an alternative method in which the TERMA calculation is replaced by random sampling of photon energy, direction and interaction point. Then, a direction is randomly sampled from the angular distribution of the monoenergetic kernel corresponding to the photon energy. The kernel ray is propagated across the phantom, and energy is deposited in each voxel traversed. An important advantage of the explicit sampling of energy is that spectral changes with depth are automatically accounted for. No spectral or kernel hardening corrections are needed. Furthermore, the continuous sampling of photon direction allows us to model sharp changes in fluence, such as those due to collimator tongue-and-groove. The use of explicit photon direction also facilitates modelling of situations where a given voxel is traversed by photons from many directions. Extra-focal radiation, for instance, can therefore be modelled accurately. Our method also allows efficient calculation of a multi-segment/multi-beam IMRT plan by sampling of beam angles and field segments according to their relative weights. For instance, an IMRT plan consisting of seven 14 × 12 cm2 beams with a total of 300 field segments can be computed in 15 min on a single CPU, with 2% statistical fluctuations at the isocentre of the patient's CT phantom divided into 4 × 4 × 4 mm3 voxels. The calculation contains all aperture-specific effects, such as tongue and groove, leaf curvature and head scatter. This contrasts with deterministic methods in which each segment is given equal importance, and the time taken scales with the number of segments. 
Thus, the Monte Carlo superposition provides a simple, accurate and efficient method for complex radiotherapy dose calculations.
Treatment planning aspects and Monte Carlo methods in proton therapy
NASA Astrophysics Data System (ADS)
Fix, Michael K.; Manser, Peter
2015-05-01
In recent years, interest in proton radiotherapy has been increasing rapidly. Protons provide superior physical properties compared with conventional radiotherapy using photons. These properties result in depth dose curves with a large dose peak at the end of the proton track, and the finite proton range allows sparing of the distally located healthy tissue. These properties offer increased flexibility in proton radiotherapy, but also increase the demand for accurate dose estimation. To carry out accurate dose calculations, an accurate and detailed characterization of the physical proton beam exiting the treatment head is first necessary for both currently available delivery techniques: scattered and scanned proton beams. Since Monte Carlo (MC) methods follow the particle track, simulating the interactions from first principles, this technique is perfectly suited to accurately model the treatment head. Nevertheless, careful validation of these MC models is necessary. While pencil beam algorithms provide the advantage of fast dose computations, they are limited in accuracy. In contrast, MC dose calculation algorithms overcome these limitations and, due to recent improvements in efficiency, are expected to improve the accuracy of the calculated dose distributions and to be introduced into clinical routine in the near future.
Generalized directed loop method for quantum Monte Carlo simulations.
Alet, Fabien; Wessel, Stefan; Troyer, Matthias
2005-03-01
Efficient quantum Monte Carlo update schemes called directed loops have recently been proposed, which improve the efficiency of simulations of quantum lattice models. We propose to generalize the detailed balance equations at the local level during the loop construction by accounting for the matrix elements of the operators associated with open world-line segments. Using linear programming techniques to solve the generalized equations, we look for optimal construction schemes for directed loops. This also allows for an extension of the directed loop scheme to general lattice models, such as high-spin or bosonic models. The resulting algorithms are bounce free in larger regions of parameter space than the original directed loop algorithm. The generalized directed loop method is applied to the magnetization process of spin chains in order to compare its efficiency to that of previous directed loop schemes. In contrast to general expectations, we find that minimizing bounces alone does not always lead to more efficient algorithms in terms of autocorrelations of physical observables, because of the nonuniqueness of the bounce-free solutions. We therefore propose different general strategies to further minimize autocorrelations, which can be used as supplementary requirements in any directed loop scheme. We show by calculating autocorrelation times for different observables that such strategies indeed lead to improved efficiency; however, we find that the optimal strategy depends not only on the model parameters but also on the observable of interest. PMID:15903632
Monte Carlo methods for parallel processing of diffusion equations
Vafadari, Cyrus
2013-01-01
A Monte Carlo algorithm for solving simple linear systems using a random walk is demonstrated and analyzed. The described algorithm solves for each element in the solution vector independently. Furthermore, it is demonstrated ...
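The random-walk idea can be sketched with the classical von Neumann-Ulam scheme for x = Hx + b, in which each solution component is estimated independently (a minimal sketch, assuming the spectral radius of |H| is below 1; the example system and all names are illustrative, not from the thesis):

```python
import random

def mc_solve_component(H, b, i, n_walks=100_000, seed=3):
    """Estimate component i of the solution of x = H x + b by random walks.
    From state s, the walk moves to state j with probability |H[s][j]| and
    terminates with probability 1 - sum_j |H[s][j]|; the running weight
    carries the sign of each traversed entry (von Neumann-Ulam scheme)."""
    rng = random.Random(seed)
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, weight = i, 1.0
        while True:
            total += weight * b[state]          # score at every visited state
            r, acc, nxt = rng.random(), 0.0, None
            for j in range(n):
                acc += abs(H[state][j])
                if r < acc:
                    nxt = j
                    break
            if nxt is None:
                break                            # walk absorbed
            weight *= 1.0 if H[state][nxt] >= 0.0 else -1.0
            state = nxt
    return total / n_walks

# Hypothetical 2x2 system: exact solution is x = (5/3, 5/3).
H = [[0.0, 0.4],
     [0.4, 0.0]]
b = [1.0, 1.0]
x0 = mc_solve_component(H, b, 0)
```

Because each component is an independent expectation, the walks for different components (or different right-hand sides) parallelize trivially, which is the property the abstract highlights.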
Monte Carlo and experimental characterization of the first AMIRS 103Pd brachytherapy source.
Raisali, Gholamreza; Ghonchehnazi, Maryam G; Shokrani, Parvaneh; Sadeghi, Mahdi
2008-12-01
TG-43U1 dosimetric parameters of a new brachytherapy (103)Pd source, including dose-rate constant, radial dose function, 2D anisotropy function, 1D anisotropy function and anisotropy constant, have been determined using MCNP4C code and have been verified by measurements in Perspex phantoms, using TLD-100 dosimeters calibrated in (60)Co radiation field. The comparison of calculated and measured dosimetric parameters showed the validity of Monte Carlo calculations and experimental results. The anisotropy constant was calculated as 0.87 in water and 0.88 in Perspex; and measured as 0.92 in Perspex. Comparing dosimetric parameters of the new source with other source models showed acceptable agreement. PMID:18657981
Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Franke, B. C. [Sandia National Laboratories, Albuquerque, NM 87185 (United States); Prinja, A. K. [Department of Chemical and Nuclear Engineering, University of New Mexico, Albuquerque, NM 87131 (United States)
2013-07-01
The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
Widom, Michael
2011-01-01
Physical Review E 84, 061912 (2011): Kinetic Monte Carlo method applied to nucleic acid hairpin (December 2011). Kinetic Monte Carlo is applied to coarse-grained systems, such as nucleic acid secondary structure states. Secondary structure models of nucleic acids, which record the pairings of complementary
Dai, Yang
Jie Liang and Jinfeng Zhang, Department of Bioengineering, SEO, MC-063. Geometry is explored using sequential Monte Carlo importance sampling and resampling techniques, including loop formation when existence of voids is not required. We also briefly discuss the sequential Monte
A Residual Monte Carlo Method for Spatially Discrete, Angularly Continuous Radiation Transport
Wollaeger, Ryan T. [Los Alamos National Laboratory; Densmore, Jeffery D. [Los Alamos National Laboratory
2012-06-19
Residual Monte Carlo provides exponential convergence of statistical error with respect to the number of particle histories. In the past, residual Monte Carlo has been applied to a variety of angularly discrete radiation-transport problems. Here, we apply residual Monte Carlo to spatially discrete, angularly continuous transport. By maintaining angular continuity, our method avoids the deficiencies of angular discretizations, such as ray effects. For planar geometry and step differencing, we use the corresponding integral transport equation to calculate an angularly independent residual from the scalar flux in each stage of residual Monte Carlo. We then demonstrate that the resulting residual Monte Carlo method does indeed converge exponentially to within machine precision of the exact step differenced solution.
Dynamic Conditional Independence Models And Markov Chain Monte Carlo Methods
Carlo Berzuini; Nicola G. Best; Walter R. Gilks; Cristiana Larizza
1997-01-01
In dynamic statistical modeling situations, observations arise sequentially, causing the model to expand by progressive incorporation of new data items and new unknown parameters. For example, in clinical monitoring, new patient-specific parameters are introduced with each new patient. Markov chain Monte Carlo (MCMC) might be used for posterior inference, but would need to be redone at each expansion stage. Thus such methods are often too
A Review of Some Monte Carlo Simulation Methods for Turbulent Systems
Peter R. Kramer
2001-01-01
We provide a brief overview of some Monte Carlo methods which have been used to simulate systems with a turbulent fluid component. We discuss two main classes of simulation approaches: an "Eulerian"
Multivariate Population Balances via Moment and Monte Carlo Simulation Methods: An Important Sol
... with a population balance equation governing evolution of the "dispersed" (suspended) particle population. Early results will, hopefully, motivate a broader attack on important multivariate population balance problems, including those
NASA Technical Reports Server (NTRS)
Firstenberg, H.
1971-01-01
The statistics of the Monte Carlo method are considered in relation to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.
Hybrid Monte Carlo-Deterministic Methods for Nuclear Reactor-Related Criticality Calculations
Edward W. Larson
2004-02-17
The overall goal of this project is to develop, implement, and test new Hybrid Monte Carlo-deterministic (or simply Hybrid) methods for the more efficient and more accurate calculation of nuclear engineering criticality problems. These new methods will make use of two (philosophically and practically) very different techniques - the Monte Carlo technique, and the deterministic technique - which have been developed completely independently during the past 50 years. The concept of this proposal is to merge these two approaches and develop fundamentally new computational techniques that enhance the strengths of the individual Monte Carlo and deterministic approaches, while minimizing their weaknesses.
Efficient, automated Monte Carlo methods for radiation transport
Kong Rong; Ambrose, Martin [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Spanier, Jerome [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Beckman Laser Institute and Medical Clinic, University of California, 1002 Health Science Road E., Irvine, CA 92612 (United States)], E-mail: jspanier@uci.edu
2008-11-20
Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. If still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.
Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement
Siswantoro, Joko; Idrus, Bahari
2014-01-01
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some of the volume measurement methods based on it have low accuracy. Another approach measures the volume of objects using the Monte Carlo method, which performs volume measurement using random points: it only requires information on whether random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method. PMID:24892069
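The core Monte Carlo step, stripped of the camera system and the heuristic adjustment, reduces to point-in-volume counting (a hypothetical sketch, with a sphere standing in for the food product):

```python
import random

def mc_volume(inside, bounds, n_points=200_000, seed=4):
    """Estimate a volume by throwing uniform random points into a bounding
    box and counting the fraction accepted by the `inside` predicate --
    the same inside/outside information a binary silhouette provides."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = 0
    for _ in range(n_points):
        p = (rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
        hits += bool(inside(p))
    return box * hits / n_points

# A unit sphere stands in for an irregularly shaped product; its exact
# volume is 4*pi/3, about 4.18879.
sphere = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2 <= 1.0
vol = mc_volume(sphere, [(-1.0, 1.0)] * 3)
```

In the paper the `inside` test is answered per camera view from binary silhouette images rather than from an analytic predicate; this sketch only shows why no 3D reconstruction is needed.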
Shimkin, Nahum
"Advanced Topics in Systems, Control and Learning 1" (048715), Winter 2013/4: Monte Carlo Methods. Course topics include Monte Carlo optimization (Rubinstein's cross-entropy method) and simulation of queueing systems. Text: Liu, Monte Carlo Strategies in Scientific Computing, Springer, 2008.
Carlo Jacoboni; Lino Reggiani
1983-01-01
This review presents in a comprehensive and tutorial form the basic principles of the Monte Carlo method, as applied to the solution of transport problems in semiconductors. Sufficient details of a typical Monte Carlo simulation have been given to allow the interested reader to create his own Monte Carlo program, and the method has been briefly compared with alternative theoretical
Comparison of Monte Carlo methods for fluorescence molecular tomography—computational efficiency
Chen, Jin; Intes, Xavier
2011-01-01
Purpose: The Monte Carlo method is an accurate model for time-resolved quantitative fluorescence tomography. However, this method suffers from low computational efficiency due to the large number of photons required for reliable statistics. This paper presents a comparison study on the computational efficiency of three Monte Carlo-based methods for time-domain fluorescence molecular tomography. Methods: The methods investigated to generate time-gated Jacobians were the perturbation Monte Carlo (pMC) method, the adjoint Monte Carlo (aMC) method and the mid-way Monte Carlo (mMC) method. The effects of the different parameters that affect the computation time and statistical reliability were evaluated. The methods were also applied to a set of experimental data for tomographic application. Results: In silico results establish that the investigated parameters affect the computational time for the three methods differently (linearly, quadratically, or not significantly). Moreover, the noise level of the Jacobian varies when these parameters change. The experimental results in preclinical settings demonstrate the feasibility of using both the aMC and pMC methods for time-resolved whole-body studies in small animals within a few hours. Conclusions: Among the three Monte Carlo methods, the mMC method is computationally prohibitive and not well suited for time-domain fluorescence tomography applications. The pMC method is advantageous over the aMC method when early gates are employed and a large number of detectors is present. Conversely, the aMC method is the method of choice when a small number of source-detector pairs are used. PMID:21992393
Comparisons of Different Particle-Chain Methods for Path Integral Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Reese, Terrence; Miller, Bruce
2007-10-01
In previous work we have used Path Integral Monte Carlo methods to simulate a Positronium atom in a Lennard-Jones fluid. Trial positions are created for sub-chains of particles on the polymer chain to allow for proper exploration of the configuration space. Different methods can be used to determine how the different chains are selected. In this report we compare the results from simulations of Positronium in Xenon at 300 and 340K using our leap frog method and another method where the selection of the sub-chains for trial movements is done randomly. The results indicate that a random selection of sub-chains leads to more accurate simulation results at higher densities.
Green's function Monte Carlo method with exact imaginary-time propagation.
Schmidt, K E; Niyaz, Parhat; Vaught, A; Lee, Michael A
2005-01-01
We present a general formulation of the Green's function Monte Carlo method in imaginary-time quantum Monte Carlo which employs exact propagators. This algorithm has no time-step errors and is obtained by minimal modifications of the time-independent Green's function Monte Carlo method. We describe how the method can be applied to the many-body Schrödinger equation, lattice Hamiltonians, and simple field theories. Our modification of the Green's function Monte Carlo algorithm is applied to the ground state of liquid 4He. We calculate the zero-temperature imaginary-time diffusion constant and relate that to the effective mass of a mass-four "impurity" atom in liquid 4He. PMID:15697764
O'Neill, Philip D
2002-01-01
Recent Bayesian methods for the analysis of infectious disease outbreak data using stochastic epidemic models are reviewed. These methods rely on Markov chain Monte Carlo methods. Both temporal and non-temporal data are considered. The methods are illustrated with a number of examples featuring different models and datasets. PMID:12387918
New Zero-Variance Methods for Monte Carlo Criticality and Source-Detector Problems
Larsen, Edward W.; Densmore, Jeffery D.
2001-06-17
A zero-variance (ZV) Monte Carlo transport method is a theoretical construct that, if it could be implemented on a practical computer, would produce the exact result after any number of histories. Unfortunately, ZV methods are impractical; nevertheless, ZV methods are of practical interest because it is possible to approximate them in ways that yield efficient variance-reduction schemes. New ZV methods for Monte Carlo criticality and source-detector problems are described. Although these methods have the same requirements and disadvantages of earlier methods, their implementation is very different; thus, the concept of approximating them to obtain practical variance-reduction schemes opens new possibilities. The relationships between the new ZV schemes, conventional ZV schemes, and recently proposed variational variance-reduction techniques are discussed. The goal is the development of more efficient Monte Carlo variance-reduction methods.
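The sense in which an ideal sampling scheme has zero variance can be seen in a one-dimensional integration example: if samples are drawn from a density proportional to the integrand, every score is identical (a toy sketch, not the criticality or source-detector schemes of the paper):

```python
import random, math

def estimate(sample, score, n=50_000, seed=5):
    """Importance-sampling estimate of an integral, returning the sample
    mean and the per-sample standard deviation of the scores."""
    rng = random.Random(seed)
    vals = [score(sample(rng)) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var)

# Integral of f(x) = 3x^2 over [0, 1]; the exact answer is 1.
f = lambda x: 3.0 * x * x

# (a) Naive sampling: x ~ Uniform(0, 1); score is f(x) itself.
naive = estimate(lambda rng: rng.random(), f)

# (b) Zero-variance sampling: draw x from the density p(x) = f(x) itself
#     via the inverse CDF x = u**(1/3); every score f(x)/p(x) is exactly 1.
zv = estimate(lambda rng: (1.0 - rng.random()) ** (1.0 / 3.0),
              lambda x: f(x) / (3.0 * x * x))
```

The zero-variance density requires knowing the answer in advance, which is why, as the abstract notes, ZV schemes are only useful as templates to approximate with practical variance-reduction methods.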
Advanced computational methods for nodal diffusion, Monte Carlo, and SN problems
Martin, W.R.
1993-01-01
This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.
A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT
Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan [TAPIR, California Institute of Technology, MC 350-17, 1200 E California Blvd., Pasadena, CA 91125 (United States); Burrows, Adam; Dolence, Joshua C. [Department of Astrophysical Sciences, Princeton University, Peyton Hall, Ivy Lane, Princeton, NJ 08544 (United States); Loeffler, Frank; Schnetter, Erik, E-mail: abdik@tapir.caltech.edu [Center for Computation and Technology, Louisiana State University, 216 Johnston Hall, Baton Rouge, LA 70803 (United States)
2012-08-20
Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
NASA Astrophysics Data System (ADS)
Wang, Mengkuo
In particle transport computations, the Monte Carlo simulation method is a widely used algorithm. There are several Monte Carlo codes available that perform particle transport simulations. However the geometry packages and geometric modeling capability of Monte Carlo codes are limited as they can not handle complicated geometries made up of complex surfaces. Previous research exists that take advantage of the modeling capabilities of CAD software. The two major approaches are the Converter approach and the CAD engine based approach. By carefully analyzing the strategies and algorithms of these two approaches, the CAD engine based approach has peen identified as the more promising approach. Though currently the performance of this approach is not satisfactory, there is room for improvement. The development and implementation of an improved CAD based approach is the focus of this thesis. Algorithms to accelerate the CAD engine based approach are studied. The major acceleration algorithm is the Oriented Bounding Box algorithm, which is used in computer graphics. The difference in application between computer graphics and particle transport has been considered and the algorithm has been modified for particle transport. The major work of this thesis has been the development of the MCNPX/CGM code and the testing, benchmarking and implementation of the acceleration algorithms. MCNPX is a Monte Carlo code and CGM is a CAD geometry engine. A facet representation of the geometry provided the least slowdown of the Monte Carlo code. The CAD model generates the facet representation. The Oriented Bounding Box algorithm was the fastest acceleration technique adopted for this work. The slowdown of the MCNPX/CGM to MCNPX was reduced to a factor of 3 when the facet model is used. MCNPX/CGM has been successfully validated against test problems in medical physics and a fusion energy device. 
MCNPX/CGM gives exactly the same results as the standard MCNPX when an MCNPX geometry model is available. For the case of the complicated fusion device, the stellarator, the MCNPX/CGM results closely match a one-dimensional model calculation performed by the ARIES team.
Densmore, Jeffery D., E-mail: jdd@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Thompson, Kelly G., E-mail: kgt@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Urbatsch, Todd J., E-mail: tmonster@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States)
2012-08-15
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations in optically thick media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many smaller Monte Carlo steps, thus improving the efficiency of the simulation. In this paper, we present an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold, as optical thickness is typically a decreasing function of frequency. Above this threshold we employ standard Monte Carlo, which results in a hybrid transport-diffusion scheme. With a set of frequency-dependent test problems, we confirm the accuracy and increased efficiency of our new DDMC method.
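The step-count saving behind DDMC can be sketched with a 1D slab random walk. This is only a caricature: the cell-to-cell hop below uses symmetric random-walk probabilities, not the paper's discretized diffusion coefficients, and the slab width and cell size are arbitrary.

```python
import random

def escape_steps(width, step):
    # Random-walk steps (each of size `step`) until leaving [0, width].
    # `step` ~ 1 mean free path mimics transport; `step` = one coarse
    # cell width mimics a discrete diffusion hop.
    x, n = width / 2.0, 0
    while 0.0 <= x <= width:
        x += step if random.random() < 0.5 else -step
        n += 1
    return n

random.seed(1)
width = 100.0                                               # optical depths
transport = [escape_steps(width, 1.0) for _ in range(20)]   # ~1 mfp per step
ddmc = [escape_steps(width, 10.0) for _ in range(20)]       # one cell per hop
print(sum(transport) / 20, sum(ddmc) / 20)
```

Each discrete hop replaces on the order of (cell width in mean free paths)^2 small transport steps, which is the efficiency gain the abstract describes for optically thick media.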
The S_N/Monte Carlo response matrix hybrid method
Filippone, W.L.; Alcouffe, R.E.
1987-01-01
A hybrid method has been developed to iteratively couple S_N and Monte Carlo regions of the same problem. This technique avoids many of the restrictions and limitations of previous attempts at coupling and results in a general and relatively efficient method. We demonstrate the method with some simple examples.
A New Method for the Calculation of Diffusion Coefficients with Monte Carlo
NASA Astrophysics Data System (ADS)
Dorval, Eric
2014-06-01
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.
A Monte Carlo synthetic-acceleration method for solving the thermal radiation diffusion equation
Evans, Thomas M., E-mail: evanstm@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States); Mosher, Scott W., E-mail: moshersw@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States); Slattery, Stuart R., E-mail: sslattery@wisc.edu [University of Wisconsin–Madison, 1500 Engineering Dr., Madison, WI 53716 (United States); Hamilton, Steven P., E-mail: hamiltonsp@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831 (United States)
2014-02-01
We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three spatial dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple material problem. Our results show that our Monte Carlo method is an effective solver for sparse matrix systems. For solutions converged to the same tolerance, it performs competitively with deterministic methods including preconditioned conjugate gradient and GMRES. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
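The idea of solving a linear system by Monte Carlo can be sketched with a classical Neumann-Ulam-style random-walk estimator (this is not the authors' synthetic-acceleration algorithm; the matrix, walk length, and sample count are illustrative, and the method requires the spectral radius of H to be below 1):

```python
import random

# Estimate component x[0] of the solution of x = H x + b by averaging
# random-walk scores over a truncated Neumann series sum_k H^k b.
H = [[0.1, 0.2, 0.0],
     [0.0, 0.1, 0.3],
     [0.2, 0.0, 0.1]]
b = [1.0, 2.0, 3.0]
n = len(b)

def walk_score(i0, max_len=30):
    # Uniform transition probabilities; the weight carries the H entries
    # with an importance correction (factor n) for the uniform moves.
    i, w, score = i0, 1.0, b[i0]
    for _ in range(max_len):
        j = random.randrange(n)
        w *= H[i][j] * n
        i = j
        score += w * b[i]
    return score

random.seed(2)
samples = 20000
x0 = sum(walk_score(0) for _ in range(samples)) / samples
print(x0)   # exact value for this system is about 1.883
```

Zero entries of H kill a walk's weight immediately, so sparse systems are cheap to sample, which is part of why Monte Carlo linear solvers pair naturally with sparse matrices.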
Monte Carlo Collision method for low temperature plasma simulation
NASA Astrophysics Data System (ADS)
Taccogna, Francesco
2015-01-01
This work presents the basic foundations of the particle-based representation of low-temperature plasmas. In particular, the Monte Carlo Collision (MCC) recipe is described for the case of electron-atom and ion-atom collisions. The model has been applied to the problem of plasma plume expansion from an electric Hall-effect-type thruster. The presence of low-energy secondary electrons from electron-atom ionization in the electron energy distribution function (EEDF) has been identified in the first 3 mm from the exit plane, where, due to the azimuthal heating, ionization continues to play an important role. In addition, low-energy charge-exchange ions from ion-atom electron-transfer collisions are evident in the ion energy distribution functions (IEDF) 1 m from the exit plane.
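The MCC recipe described above is commonly implemented with the null-collision trick: collide against a constant majorant frequency, then classify each event as real or "null" by a ratio test. A minimal sketch, where the collision-frequency model and all numbers are illustrative placeholders rather than data from the paper:

```python
import math
import random

def nu_real(energy_ev):
    # Toy electron-neutral collision frequency rising with energy
    # (arbitrary units; a real code would tabulate sigma(E) * n * v).
    return 1.0e7 * energy_ev / (1.0 + energy_ev)

NU_MAX = 1.0e7          # constant majorant ("null-collision") frequency

def mcc_step(energy_ev, dt, rng):
    """Return 'real', 'null', or None (no collision) for one timestep."""
    p_coll = 1.0 - math.exp(-NU_MAX * dt)        # majorant collision prob.
    if rng.random() >= p_coll:
        return None
    # A candidate collision occurred; decide real vs. null by ratio test.
    if rng.random() < nu_real(energy_ev) / NU_MAX:
        return 'real'
    return 'null'

rng = random.Random(3)
dt = 1.0e-8
events = [mcc_step(5.0, dt, rng) for _ in range(100000)]
frac_real = events.count('real') / len(events)
print(frac_real)
```

The appeal of the null-collision formulation is that collision times can be sampled from the constant majorant regardless of how the real frequency varies with energy, at the cost of discarding the null events.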
Analysis of microstrip line tolerance using the Monte-Carlo method
NASA Astrophysics Data System (ADS)
Tude, E. A. P.; Chiarello, M. G.
1983-07-01
The effect of W, H, and epsilon_r tolerances on the impedance and effective permittivity of a microstrip line was analyzed with the Monte Carlo method. The Z (impedance) distribution and the effect of the Duroid dielectric substrate were investigated, and the respective sensitivities were calculated. Various methods can be used to calculate the impedance and effective permittivity of a microstrip line; effects such as frequency dispersion, anisotropy, and pressure on the conducting substrate are influenced by line width and other manufacturing criteria. The Monte Carlo method works directly with circuit tolerances, and it lends itself particularly well to the analysis of a combination of microwave circuit lines.
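A tolerance study of this kind can be sketched by sampling W, H, and epsilon_r from their tolerance distributions and propagating each draw through a closed-form impedance formula. The sketch below uses the widely quoted Hammerstad equations for w/h >= 1; the nominal dimensions and tolerance values are illustrative, not the paper's:

```python
import math
import random

def microstrip_z0(w, h, er):
    # Hammerstad closed-form equations, valid for w/h >= 1.
    u = w / h
    eps_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 / u)
    z0 = 120 * math.pi / (math.sqrt(eps_eff)
                          * (u + 1.393 + 0.667 * math.log(u + 1.444)))
    return z0

random.seed(4)
w0, h0, er0 = 2.4, 0.787, 2.2              # mm, mm, Duroid-like substrate
tol_w, tol_h, tol_er = 0.05, 0.02, 0.04    # assumed 1-sigma tolerances

samples = [microstrip_z0(random.gauss(w0, tol_w),
                         random.gauss(h0, tol_h),
                         random.gauss(er0, tol_er))
           for _ in range(5000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((z - mean) ** 2 for z in samples) / len(samples))
print(mean, std)   # impedance distribution: mean near 50 ohm, spread in ohms
```

The resulting spread of Z directly quantifies the manufacturing sensitivity the abstract refers to, without any analytic differentiation of the design equations.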
Quantum Correction for the Current-Based One-Particle Monte-Carlo Method
S. C. Brugger; A. Wirthmueller; A. Schenk
In a previous work (1) a current-based one-particle Monte-Carlo (CBOPMC) method has been proposed, in contrast to the common OPMC method by F. Venturi et al. (2) based on densities. With the CBOPMC method one can take arbitrary generation-recombination processes into account self-consistently, which no other MC method can accomplish. This paper reports an extension of the method,
Perfetti, Christopher M [ORNL] [ORNL; Martin, William R [University of Michigan] [University of Michigan; Rearden, Bradley T [ORNL] [ORNL; Williams, Mark L [ORNL] [ORNL
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
Simulation of chemical reaction equilibria by the reaction ensemble Monte Carlo method: a review†
C. Heath Turner; John K. Brennan; Martin Lísal; William R. Smith; J. Karl Johnson; Keith E. Gubbins
2008-01-01
Understanding and predicting the equilibrium behaviour of chemically reacting systems in highly non-ideal environments is critical to many fields of science and technology, including solvation, nanoporous materials, catalyst design, combustion and propulsion science, shock physics and many more. A method with recent success in predicting the equilibrium behaviour of reactions under non-ideal conditions is the reaction ensemble Monte Carlo method
A Monte Carlo Method Used for the Identification of the Muscle Spindle
Rigas, Alexandros
Chapter 21: A Monte Carlo Method Used for the Identification of the Muscle Spindle (Vassiliki K. Kotti). The behavior of the muscle spindle is modeled using a logistic regression model involving the recovery and the summation functions. The most favorable method of estimating the parameters of the muscle
A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha
ERIC Educational Resources Information Center
Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.
2010-01-01
The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…
Chris Brooks
1999-01-01
This paper employs an extensive Monte Carlo study to test the size and power of the BDS and close return methods of testing for departures from independent and identical distribution. It is found that the finite sample properties of the BDS test are far superior and that the close return method cannot be recommended as a model diagnostic. Neither test
Pilot, Rollout and Monte Carlo Tree Search Methods for Job Shop Scheduling
Université de Paris-Sud XI
Thomas Philip Runarsson. Similarly to what is done in game tree search, better choices are identified using lookahead; the Pilot method improves upon some simple, well-known dispatch heuristics for job-shop scheduling.
On sequential Monte Carlo sampling methods for Bayesian filtering
ARNAUD DOUCET; SIMON GODSILL; CHRISTOPHE ANDRIEU
2000-01-01
In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods which have been proposed over the last few decades in
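The simplest member of the sequential-simulation family surveyed above is the bootstrap particle filter: propagate particles through the state transition, weight by the observation likelihood, resample. A minimal sketch on a linear-Gaussian toy model (model, noise levels, and particle count are illustrative):

```python
import math
import random

random.seed(5)
T, N = 50, 500
# Simulate x_t = 0.9 x_{t-1} + v_t,  y_t = x_t + w_t,  v, w ~ N(0, 1).
x, xs, ys = 0.0, [], []
for t in range(T):
    x = 0.9 * x + random.gauss(0, 1)
    xs.append(x)
    ys.append(x + random.gauss(0, 1))

def normal_pdf(z, mu, sigma):
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

particles = [0.0] * N
estimates = []
for y in ys:
    # Propagate through the state transition (the "bootstrap" proposal).
    particles = [0.9 * p + random.gauss(0, 1) for p in particles]
    # Weight by the observation likelihood, then normalize.
    weights = [normal_pdf(y, p, 1.0) for p in particles]
    wsum = sum(weights)
    weights = [w / wsum for w in weights]
    estimates.append(sum(w * p for w, p in zip(weights, particles)))
    # Multinomial resampling back to equal weights.
    particles = random.choices(particles, weights=weights, k=N)

rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, xs)) / T)
print(rmse)
```

On this linear-Gaussian model the filter's error can be sanity-checked against the Kalman filter's steady-state error (about 0.77 here); the same loop applies unchanged to the nonlinear, non-Gaussian models the article targets.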
High-order path-integral Monte Carlo methods for solving quantum dot problems.
Chin, Siu A
2015-03-01
The conventional second-order path-integral Monte Carlo method is plagued with the sign problem in solving many-fermion systems. This is due to the large number of antisymmetric free-fermion propagators that are needed to extract the ground state wave function at large imaginary time. In this work we show that optimized fourth-order path-integral Monte Carlo methods, which use no more than five free-fermion propagators, can yield accurate quantum dot energies for up to 20 polarized electrons with the use of the Hamiltonian energy estimator. PMID:25871047
Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
McKinley, M S; Brooks III, E D; Daffin, F
2004-12-13
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
Kharrati, Hedi; Agrebi, Amel; Karaoui, Mohamed-Karim [Ecole Superieure des Sciences et Techniques de la Sante de Monastir, Avenue Avicenne 5000 Monastir (Tunisia); Faculte des Sciences de Monastir (Tunisia)
2007-04-15
X-ray buildup factors of lead in broad-beam geometry for energies from 15 to 150 keV are determined using the general-purpose Monte Carlo N-Particle radiation transport computer code (MCNP4C). The obtained buildup factor data are fitted to a modified three-parameter Archer et al. model for ease of computing the broad-beam transmission at any tube potential/filter combination in the diagnostic energy range. An example of their use to compute the broad-beam transmission at 70, 100, 120, and 140 kVp is given. The calculated broad-beam transmission is compared to data derived from the literature, showing good agreement. Therefore, the buildup factor data determined here, combined with a mathematical model to generate x-ray spectra, provide a computational solution to broad-beam transmission for lead barriers in the shielding of x-ray facilities.
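The three-parameter Archer et al. model referred to above has the closed form B(x) = [(1 + beta/alpha) e^(alpha*gamma*x) - beta/alpha]^(-1/gamma). A small sketch evaluating it; the alpha, beta, gamma values are hypothetical placeholders, not the fitted lead coefficients from the paper:

```python
import math

def archer_transmission(x_mm, alpha, beta, gamma):
    """Broad-beam transmission B(x) through a barrier of thickness x (mm)
    in the three-parameter Archer et al. model."""
    return ((1 + beta / alpha) * math.exp(alpha * gamma * x_mm)
            - beta / alpha) ** (-1.0 / gamma)

alpha, beta, gamma = 2.0, 15.0, 0.5   # illustrative values for one kVp
thicknesses = [0.0, 0.1, 0.2, 0.5, 1.0]
curve = [archer_transmission(x, alpha, beta, gamma) for x in thicknesses]
print(curve)
```

By construction B(0) = 1 and B decreases monotonically with thickness; in practice one would fit alpha, beta, gamma per tube potential/filter combination, which is exactly what makes the model convenient for computerized shielding calculations.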
Revised methods for few-group cross sections generation in the Serpent Monte Carlo code
Fridman, E. [Reactor Safety Div., Helmholz-Zentrum Dresden-Rossendorf, POB 51 01 19, Dresden, 01314 (Germany); Leppaenen, J. [VTT Technical Research Centre of Finland, POB 1000, FI-02044 VTT (Finland)
2012-07-01
This paper presents new calculation methods, recently implemented in the Serpent Monte Carlo code, and related to the production of homogenized few-group constants for deterministic 3D core analysis. The new methods fall under three topics: 1) Improved treatment of neutron-multiplying scattering reactions, 2) Group constant generation in reflectors and other non-fissile regions and 3) Homogenization in leakage-corrected criticality spectrum. The methodology is demonstrated by a numerical example, comparing a deterministic nodal diffusion calculation using Serpent-generated cross sections to a reference full-core Monte Carlo simulation. It is concluded that the new methodology improves the results of the deterministic calculation, and paves the way for Monte Carlo based group constant generation. (authors)
A rare event sampling method for diffusion Monte Carlo using smart darting
NASA Astrophysics Data System (ADS)
Roberts, K.; Sebsebie, R.; Curotto, E.
2012-02-01
We identify a set of multidimensional potential energy surfaces sufficiently complex to cause both the classical parallel tempering and the guided or unguided diffusion Monte Carlo methods to converge too inefficiently for practical applications. The mathematical model is constructed as a linear combination of decoupled double wells [(DDW)n]. We show that the set (DDW)n provides a serious test for new methods aimed at addressing rare event sampling in stochastic simulations. Unlike the typical numerical tests used in these cases, the thermodynamics and the quantum dynamics for (DDW)n can be solved deterministically. We use the potential energy set (DDW)n to explore and identify methods that can enhance the diffusion Monte Carlo algorithm. We demonstrate that the smart darting method succeeds at reducing quasiergodicity for n ≤ 100 using just 1 × 10^6 moves in classical simulations of (DDW)n. Finally, we prove that smart darting, when incorporated into the regular or the guided diffusion Monte Carlo algorithm, drastically improves its convergence. The new method promises to significantly extend the range of systems computationally tractable by the diffusion Monte Carlo algorithm.
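The smart-darting idea can be sketched in the simplest setting: a 1D double well (one DDW unit, whereas the paper uses high-dimensional combinations), where occasional "dart" proposals translate the walker by the known separation between minima. The symmetric +/- dart keeps the proposal reversible, so plain Metropolis acceptance applies. All parameters here are illustrative:

```python
import math
import random

def V(x):
    # Single double well with minima at x = -1 and x = +1, barrier height 1.
    return (x * x - 1.0) ** 2

random.seed(6)
beta, dart = 8.0, 2.0           # inverse temperature; minimum separation
x, left_count, nsteps = -1.0, 0, 20000
for _ in range(nsteps):
    if random.random() < 0.1:                      # darting move
        xp = x + (dart if random.random() < 0.5 else -dart)
    else:                                          # small local move
        xp = x + random.uniform(-0.2, 0.2)
    dE = V(xp) - V(x)
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        x = xp
    if x < 0:
        left_count += 1
print(left_count / nsteps)
```

At this temperature the local moves alone essentially never cross the barrier, so without the darts the walk is quasiergodic (stuck in one well); with them, both wells are visited in roughly equal proportion, as symmetry demands.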
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho
2001-01-01
Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…
An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho
The accuracy of the Markov chain Monte Carlo procedure, Gibbs sampling, was considered for estimation of item and ability parameters of the one-parameter logistic model. Four data sets were analyzed to evaluate the Gibbs sampling procedure. Data sets were also analyzed using methods of conditional maximum likelihood, marginal maximum likelihood,…
Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water
ERIC Educational Resources Information Center
Gergely, John Robert
2009-01-01
Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"…
Numerical Methods for Quantum Monte Carlo Simulations of the Hubbard Model
Bai, Zhaojun
Chapter I: Numerical Methods for Quantum Monte Carlo Simulations of the Hubbard Model. Zhaojun Bai (bai@cs.ucdavis.edu); Wenbin Chen, School of Mathematical Sciences, Fudan University, Shanghai 200433, China; Scalettar. Supported in part by the China Basic Research Program under grant 2005CB321701.
A mean field theory of sequential Monte Carlo methods P. Del Moral
Del Moral , Pierre
A mean field theory of sequential Monte Carlo methods. P. Del Moral, INRIA Centre Bordeaux-Sud Ouest, France. Slide presentation (32 slides); opens with an outline of the foundations.
A Monte Carlo Test of Load Calculation Methods, Lake Tahoe Basin, California-Nevada
Robert Coats; Fengjing Liu; Charles R. Goldman
2002-01-01
The sampling of streams and estimation of total loads of nitrogen, phosphorus, and suspended sediment play an important role in efforts to control the eutrophication of Lake Tahoe. We used a Monte Carlo procedure to test the precision and bias of four methods of calculating total constituent loads for nitrate-nitrogen, soluble reactive phosphorus, particulate phosphorus, total phosphorus, and suspended sediment
Kumar, Sudhir; Srinivasan, P; Sharma, S D
2010-06-01
A cylindrical graphite ionization chamber of sensitive volume 1002.4 cm(3) was designed and fabricated at Bhabha Atomic Research Centre (BARC) for use as a reference dosimeter to measure the strength of high dose rate (HDR) (192)Ir brachytherapy sources. The air kerma calibration coefficient (N(K)) of this ionization chamber was estimated analytically using Burlin general cavity theory and by the Monte Carlo method. In the analytical method, calibration coefficients were calculated for each spectral line of an HDR (192)Ir source and the weighted mean was taken as N(K). In the Monte Carlo method, the geometry of the measurement setup and the physics-related input data of the HDR (192)Ir source and the surrounding material were simulated using the Monte Carlo N-particle code. The total photon energy fluence was used to arrive at the reference air kerma rate (RAKR) using mass energy absorption coefficients. The energy deposition rates were used to simulate the charge rate in the ionization chamber, and N(K) was determined. The Monte Carlo calculated N(K) agreed within 1.77% with that obtained using the analytical method. The experimentally determined RAKR of HDR (192)Ir sources, using this reference ionization chamber with the analytically estimated N(K), was found to agree with the vendor-quoted RAKR within 1.43%. PMID:20079657
Igor V. Meglinsky; Ilya V. Yaroslavsky
1994-01-01
This paper presents a version of the Monte Carlo method for simulating optical radiation propagation in biotissue and highly scattering media, allowing for 3-D geometry of the medium and macroinhomogeneities located in one of the layers. The simulation is based on the use of the Green's function of the medium response to a single external pulse. The process of radiation propagation
Comparison of Monte-Carlo and Einstein methods in the light-gas interactions
Jacques Moret-Bailly
2010-01-18
To study the propagation of light in nebulae, many astrophysicists use a Monte-Carlo computation which does not take interference into account. Replacing this flawed method with Einstein-coefficient theory gives, in one example, a theoretical spectrum much closer to the observed one.
Partial Linear Gaussian Models for Tracking in Image Sequences Using Sequential Monte Carlo Methods
Elise Arnaud; Étienne Mémin
2007-01-01
The recent development of Sequential Monte Carlo methods (also called particle filters) has enabled the definition of efficient algorithms for tracking applications in image sequences. The efficiency of these approaches depends on the quality of the state-space exploration, which may be inefficient due to a crude choice of the function used to sample in the associated probability space. A careful
Monte Carlo Methods for Equilibrium and Nonequilibrium Problems in Interfacial Electrochemistry
Gregory Brown; Per Arne Rikvold; S. J. Mitchell; M. A. Novotny
1998-05-11
We present a tutorial discussion of Monte Carlo methods for equilibrium and nonequilibrium problems in interfacial electrochemistry. The discussion is illustrated with results from simulations of three specific systems: bromine adsorption on silver (100), underpotential deposition of copper on gold (111), and electrodeposition of urea on platinum (100).
FPGA-driven pseudorandom number generators aimed at accelerating Monte Carlo methods
Tarek Ould Bachir; Jean-Jules Brault
2009-01-01
Hardware acceleration in the high-performance computing (HPC) context is of growing interest, particularly in the field of Monte Carlo methods, where field-programmable gate array (FPGA) technology has proven an effective medium, capable of speeding up the execution of stochastic processes by several orders of magnitude. The spread of reconfigurable hardware for stochastic simulation gathered a significant
Bayesian Phylogenetic Inference Using DNA Sequences: A Markov Chain Monte Carlo Method
Ziheng Yang; Bruce Rannala
An improved Bayesian method is presented for estimating phylogenetic trees using DNA sequence data. The birth-death process with species sampling is used to specify the prior distribution of phylogenies and ancestral speciation times, and the posterior probabilities of phylogenies are used to estimate the maximum posterior probability (MAP) tree. Monte Carlo integration is used to integrate over the ancestral
Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C
2010-12-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
Electrical conductivity of high-pressure liquid hydrogen by quantum Monte Carlo methods.
Lin, Fei; Morales, Miguel A; Delaney, Kris T; Pierleoni, Carlo; Martin, Richard M; Ceperley, D M
2009-12-18
We compute the electrical conductivity for liquid hydrogen at high pressure using Monte Carlo techniques. The method uses coupled electron-ion Monte Carlo simulations to generate configurations of liquid hydrogen. For each configuration, correlated sampling of electrons is performed in order to calculate a set of lowest many-body eigenstates and current-current correlation functions of the system, which are summed over in the many-body Kubo formula to give ac electrical conductivity. The extrapolated dc conductivity at 3000 K for several densities shows a liquid semiconductor to liquid-metal transition at high pressure. Our results are in good agreement with shock-wave data. PMID:20366267
Monte Carlo Method for Multiple Knapsack
Fidanova, Stefka
resource allocation and capital budgeting problems. Ant Colony Optimization (ACO) is a Monte Carlo method. The sampling is realized concurrently by a collection of differently instantiated replicas of the same ant, which probabilistically bias future search, preventing ants from wasting resources in unpromising regions of the search
Sequential Monte Carlo Methods With Applications To Communication Channels
Boddikurapati, Sirish
2010-07-14
set of sample points, which guarantees accuracy of the posterior mean and covariance to second order for any nonlinearity. It uses a deterministic sampling technique known as the unscented transform to pick a minimal set of sample points (called…). Later chapters cover capacity calculation, including a split-step Fourier transform method, simulations, and conclusions.
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
ERIC Educational Resources Information Center
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
Reducing uncertainty in site characterization using Bayes Monte Carlo methods
Sohn, Michael D.; Small, Mitchell J.; Pantazidou, Marina
2004-04-28
A Bayesian uncertainty analysis approach is developed as a tool for assessing and reducing uncertainty in ground-water flow and chemical transport predictions. The method is illustrated for a site contaminated with chlorinated hydrocarbons. Uncertainty in source characterization, in chemical transport parameters, and in the assumed hydrogeologic structure was evaluated using engineering judgment and updated using observed field data. The updating approach using observed hydraulic head data was able to differentiate between reasonable and unreasonable hydraulic conductivity fields but could not differentiate between alternative conceptual models for the geological structure of the subsurface at the site. Updating using observed chemical concentration data reduced the uncertainty in most parameters and reduced uncertainty in alternative conceptual models describing the geological structure at the site, source locations, and the chemicals released at these sources. Thirty-year transport projections for no-action and source containment scenarios demonstrate a typical application of the methods.
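The Bayes Monte Carlo updating described above can be sketched as likelihood weighting: draw parameter sets from the prior, weight each by the likelihood of the observed field data, and form posterior estimates from the weights. The "forward model" below is a trivial scalar stand-in for a flow/transport simulator, and all numbers are illustrative:

```python
import math
import random

random.seed(7)

def model(k):
    # Placeholder forward model: the prediction is the parameter itself.
    # A real application would run a flow/transport simulation here.
    return k

observed, obs_sigma = 3.0, 0.5
prior_draws = [random.gauss(0.0, 2.0) for _ in range(20000)]   # vague prior

def likelihood(pred):
    # Gaussian measurement-error model for the observed datum.
    return math.exp(-0.5 * ((observed - pred) / obs_sigma) ** 2)

weights = [likelihood(model(k)) for k in prior_draws]
wsum = sum(weights)
post_mean = sum(w * k for w, k in zip(weights, prior_draws)) / wsum
print(post_mean)   # pulled from the prior mean 0 toward the datum 3
```

Prior draws that predict the data poorly receive negligible weight, which is how the method "differentiates between reasonable and unreasonable" parameter fields; the same weights can score alternative conceptual models against each other.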
Chung, Kiwhan
1996-01-01
While the use of the Monte Carlo method has been prevalent in nuclear engineering, it has yet to fully blossom in the study of solute transport in porous media. By using an etched-glass micromodel, an attempt is made to apply the Monte Carlo method...
NASA Astrophysics Data System (ADS)
Kim, Minho; Lee, Hyounggun; Kim, Hyosim; Park, Hongmin; Lee, Wonho; Park, Sungho
2014-03-01
This study evaluated the Monte Carlo method for dose calculation in fluoroscopy using a realistic human phantom. The dose was calculated using Monte Carlo N-Particle eXtended (MCNPX) in simulations and was measured using the Korean Typical Man-2 (KTMAN-2) phantom in the experiments. MCNPX is a widely used simulation tool based on the Monte Carlo method and random sampling. KTMAN-2 is a virtual phantom written in the MCNPX language and based on the typical Korean man. This study was divided into two parts: simulations and experiments. In the former, the spectrum generation program SRS-78 was used to obtain the output energy spectrum for fluoroscopy; then, the dose to each target organ was calculated using KTMAN-2 with MCNPX. In the latter part, the output of the fluoroscope was calibrated first, and thermoluminescent dosimeters (TLDs) were inserted in the ART (Alderson Radiation Therapy) phantom at the same places as in the simulation. The phantom was then exposed to radiation, and the simulated and experimental doses were compared. To convert the simulation output to dose units, we set a normalization factor (NF). Comparing the simulated with the experimental results, we found most of the values to be similar, demonstrating the effectiveness of the Monte Carlo method in fluoroscopic dose evaluation. The equipment used in this study included TLDs, a TLD reader, an ART phantom, an ionization chamber, and a fluoroscope.
Monte-Carlo methods make Dempster-Shafer formalism feasible
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.; Bernat, Andrew; Borrett, Walter; Mariscal, Yvonne; Villa, Elsa
1991-01-01
One of the main obstacles to the application of the Dempster-Shafer formalism is its computational complexity. If we combine m different pieces of knowledge, then in the general case we have to perform up to 2^m computational steps, which for large m is infeasible. For several important cases, algorithms with smaller running time have been proposed. We prove, however, that if we want to compute the belief bel(Q) in any given query Q, then exponential time is inevitable. It is still inevitable if we want to compute bel(Q) with a given precision epsilon. This restriction corresponds to the natural idea that since the initial masses are known only approximately, there is no sense in trying to compute bel(Q) precisely. A further idea is that there is always some doubt in the whole knowledge, so there is always a probability p_0 that the expert's knowledge is wrong. In view of that, it is sufficient to have an algorithm that gives a correct answer with probability greater than 1 - p_0. If we use the original Dempster combination rule, this possibility diminishes the running time but still leaves the problem infeasible in the general case. We show that for the alternative combination rules proposed by Smets and Yager, feasible methods exist. We also show how these methods can be parallelized, and which parallelization model fits this problem best.
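The Monte Carlo idea behind such randomized algorithms can be illustrated with a minimal sketch (illustrative code, not the authors' algorithm): draw one focal set from each mass function, intersect them, reject empty intersections (Dempster conflict), and count how often the surviving intersection supports the query. This is a naive Monte Carlo approximation of Dempster's rule itself; the abstract's point is that feasible variants exist for the Smets and Yager rules.

```python
import random

def mc_belief(masses, query, n_samples=100_000, rng=None):
    """Monte Carlo estimate of bel(query) after combining several mass
    functions with Dempster's rule.  Each mass function is a list of
    (focal_set, mass) pairs whose masses sum to 1; sets are frozensets."""
    rng = rng or random.Random(0)
    hits = accepted = 0
    for _ in range(n_samples):
        inter = None
        for m in masses:
            # Draw one focal set from this piece of evidence.
            r, acc, chosen = rng.random(), 0.0, m[-1][0]
            for focal, p in m:
                acc += p
                if r <= acc:
                    chosen = focal
                    break
            inter = chosen if inter is None else inter & chosen
        if not inter:          # empty intersection = conflict, reject
            continue
        accepted += 1
        if inter <= query:     # intersection supports the query
            hits += 1
    return hits / accepted if accepted else 0.0

# Two pieces of evidence over the frame {a, b, c} (illustrative numbers).
m1 = [(frozenset("a"), 0.6), (frozenset("abc"), 0.4)]
m2 = [(frozenset("ab"), 0.7), (frozenset("abc"), 0.3)]
print(mc_belief([m1, m2], frozenset("ab")))
```

For this example the exact combined belief in {a, b} is 0.88, so the estimate converges there as the sample count grows.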
Inverse Direct Lighting with a Monte Carlo Method and Declarative Modelling
Vincent Jolivet; Dimitri Plemenos; Patrick Poulingeas
2002-01-01
In inverse lighting problems, a lot of optimization techniques have been used. A new method in the framework of radiosity is presented here, using a simple Monte Carlo method to find the positions of the lights in a direct lighting. Declarative modelling will also be used to allow the designer to describe his lighting wishes in a more intuitive way. Declarative...
Uncertainty Propagation in Complex Engineering Systems by Advanced Monte Carlo Methods
S. K. Au; D. P. Thunnissen
This paper presents a recently developed advanced Monte Carlo method called 'Subset Simulation' for efficient stochastic analysis of complex engineering systems. The method investigates rare failure scenarios by efficiently generating 'conditional samples' that populate progressively towards the rare failure region. In addition to reliability analysis and performance margin estimation, the conditional samples also provide information for sensitivity and...
NASA Astrophysics Data System (ADS)
Nakano, Shinya; Suzuki, Kazue; Kawamura, Kenji; Parrenin, Frederic; Higuchi, Tomoyuki
2015-04-01
A technique for estimating the age-depth relationship and its uncertainty in ice cores has been developed. The age-depth relationship is mainly determined by the accumulation of snow at the site of the ice core and the thinning process due to the horizontal stretching and vertical compression of an ice layer. However, both the accumulation process and the thinning process are not fully known. In order to appropriately estimate the age as a function of depth, it is crucial to incorporate observational information into a model describing the accumulation and thinning processes. In the proposed technique, the age as a function of depth is estimated from age markers and time series of δ18O data. The estimation is achieved using a method combining a sequential Monte Carlo method and the Markov chain Monte Carlo method as proposed by Andrieu et al. (2010). In this hybrid method, the posterior distributions for the parameters in the models for the accumulation and thinning processes are computed using the standard Metropolis-Hastings method. Meanwhile, sampling from the posterior distribution for the age-depth relationship is achieved by using a sequential Monte Carlo approach at each iteration of the Metropolis-Hastings method. A sequential Monte Carlo method normally suffers from the degeneracy problem, especially in cases where the number of steps is large. However, when it is combined with the Metropolis-Hastings method, the degeneracy problem can be overcome by collecting a large number of samples obtained over many iterations of the Metropolis-Hastings method. We demonstrate the results obtained by applying the proposed technique to the ice core data from Dome Fuji in Antarctica.
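The sequential Monte Carlo building block that such particle-MCMC constructions embed inside Metropolis-Hastings can be sketched as a bootstrap particle filter. The linear-Gaussian toy model and all parameter values below are illustrative assumptions, not the ice-core model of the abstract; the returned log marginal likelihood is the quantity a particle Metropolis-Hastings sampler would use in its acceptance ratio.

```python
import math
import random

def bootstrap_filter(ys, n_particles=500, phi=0.9, sig_x=1.0, sig_y=0.5, rng=None):
    """Bootstrap particle filter for the toy model
        x_t = phi * x_{t-1} + N(0, sig_x^2),   y_t = x_t + N(0, sig_y^2).
    Returns the filtered means and the log marginal-likelihood estimate."""
    rng = rng or random.Random(1)
    xs = [rng.gauss(0.0, sig_x) for _ in range(n_particles)]
    log_lik, means = 0.0, []
    for y in ys:
        # Propagate particles, then weight them by the observation density.
        xs = [phi * x + rng.gauss(0.0, sig_x) for x in xs]
        ws = [math.exp(-0.5 * ((y - x) / sig_y) ** 2) for x in xs]
        total = sum(ws)
        log_lik += math.log(total / n_particles / (sig_y * math.sqrt(2 * math.pi)))
        means.append(sum(w * x for w, x in zip(ws, xs)) / total)
        # Multinomial resampling fights the weight-degeneracy problem.
        xs = rng.choices(xs, weights=ws, k=n_particles)
    return means, log_lik
```

Resampling at every step is what keeps the particle cloud from collapsing onto a few heavy-weight trajectories, which is the degeneracy issue the abstract mentions.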
NASA Astrophysics Data System (ADS)
Wei, Jianming
2014-05-01
A Monte Carlo (MC) method using a bookkeeping strategy for population balance modeling of particulate processes has been designed in this article. With this method the coagulation time step can be evaluated precisely. In an effort to achieve the best computational efficiency, the MC program is implemented on a many-core graphics processing unit (GPU) after being fully parallelized. Useful rules for optimizing the MC code are also suggested. The computational accuracy of the MC scheme is then verified by comparison with a deterministic sectional method. Finally, the computational efficiency of the MC method is investigated.
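The precise evaluation of the coagulation time step can be illustrated with a serial, event-driven sketch for a constant coagulation kernel: the waiting time between coagulation events is drawn exactly from the exponential distribution set by the total pair rate. The GPU parallelization and bookkeeping strategy of the paper are not reproduced here, and all parameter values are illustrative.

```python
import math
import random

def coagulate(n0=1000, kernel=1e-3, t_end=2.0, rng=None):
    """Event-driven Monte Carlo for constant-kernel coagulation.
    All particles start with size 1; each event merges a random pair,
    and the inter-event time is sampled exactly from the total rate."""
    rng = rng or random.Random(0)
    sizes = [1] * n0
    t = 0.0
    while len(sizes) > 1:
        n = len(sizes)
        rate = kernel * n * (n - 1) / 2.0      # total pair-coagulation rate
        t += -math.log(rng.random()) / rate    # exact exponential waiting time
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)         # pick a random distinct pair
        sizes[i] += sizes[j]                   # merge j into i ...
        sizes.pop(j)                           # ... and remove j
    return sizes
```

With these parameters the mean-field (Smoluchowski) prediction is roughly n0 / (1 + kernel * n0 * t / 2) = 500 surviving clusters at t_end, and total mass is conserved exactly.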
Application de la methode des sous-groupes au calcul Monte-Carlo multigroupe
NASA Astrophysics Data System (ADS)
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational path between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data involved in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation, although this model preserves the quality of the physical laws present in the ENDF format. Because of its low computational cost, multigroup Monte Carlo is usually the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes, generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables over the whole energy range makes it possible to take self-shielding effects into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) The consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm.
(3) The derivation of a model for taking into account anisotropic effects of the scattering reaction, consistent with the subgroup method. In this study, we generalize the Discrete Angle Technique, already proposed for homogeneous multigroup cross sections, to isotopic cross sections in the form of probability tables. In this technique, the angular density is discretized into probability tables; similarly to the cross-section case, a moment approach is used to compute the probability tables for the scattering cosine. (4) The introduction of a leakage model based on the B1 fundamental mode approximation. Unlike deterministic lattice packages, most Monte Carlo-based lattice physics codes do not include leakage models, yet the generation of homogenized and condensed group constants (cross sections, diffusion coefficients) requires the critical flux. This project has involved the development of a program in the DRAGON framework, written in Fortran 2003 and wrapped with a driver in C, the GANLIB 5. Choosing Fortran 2003 has permitted the use of some modern features, such as the definition of objects and methods, data encapsulation, and polymorphism. The validation of the proposed code has been performed by comparison with other numerical methods: (1) the continuous-energy Monte Carlo method of the SERPENT code; (2) the collision probability (CP) method and the discrete ordinates (SN) method of the DRAGON lattice code; (3) the multigroup Monte Carlo code MORET, coupled with the DRAGON code. Benchmarks used in this work are representative of industrial configurations encountered in reactor and criticality-safety calculations: (1) pressurized water reactor (PWR) cells and assemblies; (2) Canada Deuterium Uranium reactor (CANDU-6) clusters; (3) critical experiments from the ICSBEP handbook (International Criticality Safety Benchmark Evaluation Program).
Comparison of Monte Carlo methods for criticality benchmarks: Pointwise compared to multigroup
Choi, J.S.; Alesso, P.H.; Pearson, J.S. (Lawrence Livermore National Lab., CA (USA))
1989-01-01
Transport codes use multigroup cross sections where neutrons are divided into broad energy groups, and the monoenergetic equation is solved for each group with a group-averaged cross section. Monte Carlo codes differ in that they allow the use of the most basic pointwise cross-section data directly in a calculation. Most of the first Monte Carlo codes were not able to utilize this feature, however, because of the memory limitations of early computers and the lack of pointwise cross-section data. Consequently, codes written in the 1970s, such as KENO-IV and MORSE-C, were adapted to use multigroup cross-section sets similar to those used in the S_n transport codes. With advances in computer memory capacities and the availability of pointwise cross-section sets, new Monte Carlo codes employing pointwise cross-section libraries, such as the Los Alamos National Laboratory code MCNP and the Lawrence Livermore National Laboratory (LLNL) code COG, were developed for criticality as well as radiation transport calculations. To compare pointwise and multigroup Monte Carlo methods for criticality benchmark calculations, this paper presents and evaluates the results from the KENO-IV, MORSE-C, MCNP, and COG codes. The critical experiments selected for benchmarking include LLNL fast metal systems and low-enriched uranium moderated and reflected systems.
Folding a 20 amino acid αβ peptide with the diffusion process-controlled Monte Carlo method
NASA Astrophysics Data System (ADS)
Derreumaux, Philippe
1997-08-01
In this study we report on the application of the diffusion process-controlled Monte Carlo method to a 20 amino acid αβ peptide (Ac-E-T-Q-A-A-L-L-A-A-Q-K-A-Y-H-P-M-T-M-T-G-Am). The polypeptide chain is represented by a set of 126 particles, the side chains are modeled by spheres, and the backbone dihedral angles φ and ψ of each amino acid residue are essentially restricted to a set of ten high-probability regions, although the whole φ-ψ space may be visited in the course of the simulation. The method differs from other off-lattice Monte Carlo methods in that the escape time from one accepted conformation to the next is estimated and limited at each iteration. The conformations are evaluated on the basis of pairwise nonbonded side-chain energies derived from statistical distributions of contacts in real proteins and a simple main-chain hydrogen-bonding potential. As a result of four simulations starting from random extended conformations and one starting from a structure consistent with NMR data, the lowest-energy conformation (i.e., the αβ fold) is detected in ~10^3 Monte Carlo steps, although the estimated probability of getting the αβ motif is ~10^-12. The predicted conformations deviate by 3.0 Å rms from a model structure compatible with the experimental results. In this work further evidence is provided that this method is useful in determining the lowest-energy region of medium-size polypeptide chains.
Takeshi Kida; Yasumasa Tsukamoto; Yuji Kihara (Renesas)
2012-01-01
With the scaling of MOSFET dimensions and the lowering of supply voltage, more precise estimation of the minimum operating voltage (Vmin) of SRAM at the 6-sigma level is needed. In this paper, we propose a method based on importance sampling (IS) Monte Carlo simulation to predict Vmin precisely for future technology nodes below the 22 nm generation. By executing Monte Carlo (MC)...
Implicit Monte Carlo methods and non-equilibrium Marshak wave radiative transport
Lynch, J.E.
1985-01-01
Two enhancements to the Fleck implicit Monte Carlo method for radiative transport are described, for use in transparent and opaque media respectively. The first introduces a spectral mean cross section, which applies to pseudoscattering in transparent regions with a high frequency incident spectrum. The second provides a simple Monte Carlo random walk method for opaque regions, without the need for a supplementary diffusion equation formulation. A time-dependent transport Marshak wave problem of radiative transfer, in which a non-equilibrium condition exists between the radiation and material energy fields, is then solved. These results are compared to published benchmark solutions and to new discrete ordinate S-N results, for both spatially integrated radiation-material energies versus time and to new spatially dependent temperature profiles. Multigroup opacities, which are independent of both temperature and frequency, are used in addition to a material specific heat which is proportional to the cube of the temperature. 7 refs., 4 figs.
Quantum World-line Monte Carlo Method with Non-binary Loops and Its Application
NASA Astrophysics Data System (ADS)
Harada, K.
A quantum world-line Monte Carlo method for highly symmetric quantum models is proposed. First, based on a representation of the partition function using the Matsubara formula, the principle of quantum world-line Monte Carlo methods is briefly outlined, and a new algorithm using non-binary loops is given for quantum models with high symmetry such as SU(N). The algorithm is called the non-binary loop algorithm because of its non-binary loop updates. Second, one example of our numerical studies using the non-binary loop update is shown: the problem of the ground state of two-dimensional SU(N) antiferromagnets. Our numerical study confirms that the ground state for small N (≤ 4) is a magnetically ordered Néel state, while for large N (≥ 5) it has no magnetic order and becomes a dimer state.
Path-integral Monte Carlo method for the local Z2 Berry phase
NASA Astrophysics Data System (ADS)
Motoyama, Yuichi; Todo, Synge
2013-02-01
We present a loop cluster algorithm Monte Carlo method for calculating the local Z2 Berry phase of the quantum spin models. The Berry connection, which is given as the inner product of two ground states with different local twist angles, is expressed as a Monte Carlo average on the worldlines with fixed spin configurations at the imaginary-time boundaries. The “complex weight problem” caused by the local twist is solved by adopting the meron cluster algorithm. We present the results of simulation on the antiferromagnetic Heisenberg model on an out-of-phase bond-alternating ladder to demonstrate that our method successfully detects the change in the valence bond pattern at the quantum phase transition point. We also propose that the gauge-fixed local Berry connection can be an effective tool to estimate precisely the quantum critical point.
Structural modelling of liquid NaxCs1-x alloys using the reverse Monte Carlo method
T. Arai; R. L. McGreevy
2005-01-01
The structures of liquid NaxCs1-x alloys have been modelled using the reverse Monte Carlo method on the basis of x-ray and neutron diffraction data. The partial structure factors and partial radial distribution functions obtained were consistent with previous work. The average coordination numbers and coordination number distributions for the four distinct kinds of atom pairs have been calculated over a...
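In outline, reverse Monte Carlo proposes random particle moves and accepts those that improve the fit between a model configuration and measured data. A toy one-dimensional version might look like the following sketch; it greedily accepts only moves that lower the χ² misfit to a target pair-distance histogram (real RMC also accepts worsening moves with a Gaussian probability, and fits structure factors rather than a toy histogram).

```python
import random

def chi2(model, target):
    """Sum-of-squares misfit between model and target histograms."""
    return sum((m - t) ** 2 for m, t in zip(model, target))

def pair_hist(xs, box, nbins):
    """Histogram of periodic (minimum-image) pair distances, per pair."""
    h = [0.0] * nbins
    n = len(xs)
    npairs = n * (n - 1) // 2
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(xs[i] - xs[j])
            d = min(d, box - d)                       # minimum image
            h[min(int(d / (box / 2) * nbins), nbins - 1)] += 1.0 / npairs
    return h

def reverse_mc(target, n=40, box=10.0, steps=2000, delta=0.5, rng=None):
    """Greedy reverse Monte Carlo: move one particle at a time, keep the
    move whenever the chi^2 misfit to the target histogram does not rise.
    (Recomputing the full histogram each step is O(n^2); production RMC
    codes update it incrementally.)"""
    rng = rng or random.Random(0)
    xs = [rng.uniform(0, box) for _ in range(n)]
    cost = chi2(pair_hist(xs, box, len(target)), target)
    for _ in range(steps):
        i = rng.randrange(n)
        old = xs[i]
        xs[i] = (old + rng.uniform(-delta, delta)) % box
        new_cost = chi2(pair_hist(xs, box, len(target)), target)
        if new_cost <= cost:
            cost = new_cost
        else:
            xs[i] = old                               # reject the move
    return xs, cost
```

A clustered target histogram (weights decaying with distance) drives the initially random configuration toward short-range order; the misfit can only decrease under this acceptance rule.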
Kinetic Monte Carlo method for dislocation migration in the presence of solute
Chaitanya S. Deo; David J. Srolovitz; Wei Cai; Vasily V. Bulatov
2005-01-01
We present a kinetic Monte Carlo method for simulating dislocation motion in alloys within the framework of the kink model. The model considers the glide of a dislocation in a static, three-dimensional solute atom atmosphere. It includes both a description of the short-range interaction between a dislocation core and the solute and long-range solute-dislocation interactions arising from the interplay of
Quantum Monte-Carlo method applied to Non-Markovian barrier transmission
G. Hupin; D. Lacroix
2010-01-05
In nuclear fusion and fission, fluctuation and dissipation arise from the coupling of collective degrees of freedom to internal excitations. Close to the barrier, quantum, statistical, and non-Markovian effects are all expected to be important. In this work, a new approach based on quantum Monte Carlo addressing this problem is presented. The exact dynamics of a system coupled to an environment is replaced by a set of stochastic evolutions of the system density. The quantum Monte Carlo method is applied to systems with quadratic potentials. Over the whole range of temperature and coupling, the stochastic method matches the exact evolution, showing that non-Markovian effects can be simulated accurately. A comparison with other theories, such as the Nakajima-Zwanzig or time-convolutionless approaches, shows that only the latter can be competitive if the expansion in the coupling constant is carried at least to fourth order. A systematic study of the inverted-parabola case is made at different temperatures and coupling constants. The asymptotic passing probability is estimated in different approaches, including the Markovian limit. Large differences from the exact result are seen in the latter case, or when only second order in the coupling strength is considered, as is generally assumed in nuclear transport models. By contrast, if fourth order in the coupling or the quantum Monte Carlo method is used, perfect agreement is obtained.
Ya. Pavlyuchenkov; D. Semenov; Th. Henning; St. Guilloteau; V. Pietu; R. Launhardt; A. Dutrey
2007-07-19
We analyze the line radiative transfer in protoplanetary disks using several approximate methods and a well-tested Accelerated Monte Carlo code. A low-mass flaring disk model with uniform as well as stratified molecular abundances is adopted. Radiative transfer in low and high rotational lines of CO, C18O, HCO+, DCO+, HCN, CS, and H2CO is simulated. The corresponding excitation temperatures, synthetic spectra, and channel maps are derived and compared to the results of the Monte Carlo calculations. A simple scheme that describes the conditions of the line excitation for a chosen molecular transition is elaborated. We find that the simple LTE approach can safely be applied for the low molecular transitions only, while it significantly overestimates the intensities of the upper lines. In contrast, the Full Escape Probability (FEP) approximation can safely be used for the upper transitions ($J_{\mathrm{up}} \gtrsim 3$) but is not appropriate for the lowest transitions because of the maser effect. In general, the molecular lines in protoplanetary disks are partly subthermally excited and require more sophisticated approximate line radiative transfer methods. We analyze a number of approximate methods, namely LVG, VEP (Vertical Escape Probability), and VOR (Vertical One Ray), and discuss their algorithms in detail. In addition, two modifications to the canonical Monte Carlo algorithm that allow a significant speed-up of line radiative transfer modeling in rotating configurations, by a factor of 10-50, are described.
Lattice-switching Monte Carlo method for crystals of flexible molecules.
Bridgwater, Sally; Quigley, David
2014-12-01
We discuss implementation of the lattice-switching Monte Carlo method (LSMC) as a binary sampling between two synchronized Markov chains exploring separated minima in the potential energy landscape. When expressed in this fashion, the generalization to more complex crystals is straightforward. We extend the LSMC method to a flexible model of linear alkanes, incorporating bond length and angle constraints. Within this model, we accurately locate a transition between two polymorphs of n-butane with increasing density, and suggest this as a benchmark problem for other free-energy methods. PMID:25615228
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlos are particularly suited is the study of secondary radiation produced as albedoes in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A.
The ROOT interface is being developed in conjunction with the CERN ALICE (A Large Ion Collisions Experiment) software team through an adaptation of their existing AliROOT (ALICE Using ROOT) architecture. In order to check our progress against actual data, we have chosen to simulate the ATIC14 (Advanced Thin Ionization Calorimeter) cosmic-ray astrophysics balloon payload as well as neutron fluences in the Mir spacecraft. This paper contains a summary of status of this project, and a roadmap to its successful completion.
A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems
Keady, K P; Brantley, P
2010-03-04
Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. 
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep penetration problems such as examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.
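The weight-window mechanics that such automatically generated windows feed into can be sketched generically (a standard splitting/Russian-roulette step, not the specific algorithm of the paper): particles above the window are split into fragments of equal weight, particles below it play roulette, and in-window particles pass through unchanged, so the expected total weight is preserved.

```python
import random

def apply_weight_window(particles, w_low, w_high, rng=None):
    """One splitting / Russian-roulette pass over a list of
    (weight, state) pairs for the weight window [w_low, w_high]."""
    rng = rng or random.Random(0)
    survivors = []
    w_mid = 0.5 * (w_low + w_high)        # survival weight for roulette
    for w, state in particles:
        if w > w_high:
            # Split heavy particles into n equal fragments; total
            # weight is preserved exactly.
            n = int(w / w_mid) + 1
            survivors.extend([(w / n, state)] * n)
        elif w < w_low:
            # Roulette light particles: survive with probability
            # w / w_mid at weight w_mid, so weight is preserved
            # in expectation.
            if rng.random() < w / w_mid:
                survivors.append((w_mid, state))
        else:
            survivors.append((w, state))
    return survivors
```

In a hybrid scheme like the one described above, the window bounds at each spatial cell would be set inversely proportional to the approximate deterministic forward flux, pushing particles toward the poorly sampled deep-penetration regions.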
Towards Correlated Sampling for the Fixed-Node Diffusion Quantum Monte Carlo Method
NASA Astrophysics Data System (ADS)
Berner, Raphael; Petz, René; Lüchow, Arne
2014-07-01
Most methods of quantum chemistry calculate total energies rather than directly the energy differences that are of interest to chemists. In the case of statistical methods like quantum Monte Carlo, the statistical errors in the absolute values need to be considerably smaller than their difference. The calculation of small energy differences is therefore particularly time-consuming. Correlated sampling techniques provide the possibility to compute energy differences directly by simulating the underlying systems with the same stochastic process; the smaller the energy difference, the smaller its statistical error. Correlated sampling is well established in variational quantum Monte Carlo, but it is much more difficult to implement in diffusion quantum Monte Carlo because of the fixed-node approximation. A correlated sampling formalism and a corresponding algorithm based on a transformed Schrödinger equation having the form of a Kolmogorov backward equation are derived. The numerical verification of the presented algorithm is given for the harmonic oscillator. The extension of the algorithm to electronic structure calculations is discussed.
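The variance-cancellation idea can be demonstrated in the simpler variational (not diffusion) Monte Carlo setting with a one-dimensional harmonic oscillator: one Metropolis walk samples the primary trial density, and the energy of a perturbed system is estimated from the same walk by reweighting, so the noise largely cancels in the difference. All parameter values are illustrative.

```python
import math
import random

def correlated_vmc(n=20000, rng=None):
    """Correlated-sampling VMC for H = -1/2 d^2/dx^2 + 1/2 k x^2 with
    trial functions psi_a = exp(-a x^2 / 2).  System 1 (k1 = 1, a1 = 1,
    exact trial) is sampled; system 2 (k2 = 1.1, a2 = 1.05, approximate
    trial) is estimated from the SAME samples via the weight
    |psi_2/psi_1|^2 = exp(-(a2 - a1) x^2)."""
    rng = rng or random.Random(0)
    a1, k1 = 1.0, 1.0
    a2, k2 = 1.05, 1.1

    def e_loc(a, k, x):
        # Local energy of psi = exp(-a x^2 / 2):  a/2 + (k - a^2) x^2 / 2
        return 0.5 * a + 0.5 * (k - a * a) * x * x

    x = 0.0
    e1_sum = e2_sum = w_sum = 0.0
    for _ in range(n):
        xp = x + rng.uniform(-1.0, 1.0)
        # Metropolis step on |psi_1|^2 = exp(-a1 x^2)
        if rng.random() < math.exp(-a1 * (xp * xp - x * x)):
            x = xp
        w = math.exp(-(a2 - a1) * x * x)   # |psi_2/psi_1|^2 reweighting
        e1_sum += e_loc(a1, k1, x)
        e2_sum += w * e_loc(a2, k2, x)
        w_sum += w
    return e1_sum / n, e2_sum / w_sum
```

Because the trial for system 1 is exact, its local energy is constant (0.5, a zero-variance estimator), while the correlated estimate for system 2 lands very close to its variational energy of about 0.5244; the difference carries far less noise than two independent runs would.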
Charged-Particle Thermonuclear Reaction Rates: I. Monte Carlo Method and Statistical Distributions
Longland, Richard; Champagne, Art; Newton, Joe; Ugalde, Claudio; Coc, Alain; Fitzgerald, Ryan
2010-01-01
A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering into the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended "classical" rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless...
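The sampling scheme can be sketched for a rate built from several contributions carrying lognormal uncertainties; the medians and factor uncertainties below are illustrative numbers, not evaluated nuclear data. Each sample draws every contribution from its lognormal density, and the 0.16/0.50/0.84 quantiles of the total define the low, median, and high rates.

```python
import math
import random

def mc_rate(contribs, n_samples=20000, rng=None):
    """Monte Carlo rate evaluation: `contribs` is a list of
    (median, factor_uncertainty) pairs; each contribution is sampled as
    lognormal X = median * factor**Z with Z ~ N(0, 1).  Returns the
    0.16, 0.50 and 0.84 quantiles of the summed rate."""
    rng = rng or random.Random(0)
    totals = []
    for _ in range(n_samples):
        total = 0.0
        for median, factor in contribs:
            total += median * math.exp(math.log(factor) * rng.gauss(0.0, 1.0))
        totals.append(total)
    totals.sort()
    quantile = lambda p: totals[int(p * n_samples)]
    return quantile(0.16), quantile(0.50), quantile(0.84)

# Two hypothetical contributions: a well-measured one (factor 1.2) and a
# poorly constrained one (factor 2.0).
low, med, high = mc_rate([(1.0, 1.2), (0.5, 2.0)])
```

The asymmetric spread (high - med versus med - low) reflects the skewness that a single "recommended rate with symmetric error" cannot capture.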
Analysis of single Monte Carlo methods for prediction of reflectance from turbid media.
Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan
2011-09-26
Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904
Generalized Moment Method for Gap Estimation and Quantum Monte Carlo Level Spectroscopy
Hidemaro Suwa; Synge Todo
2015-05-30
We formulate a convergent sequence for the energy gap estimation in the worldline quantum Monte Carlo method. The ambiguity left in the conventional gap calculation for quantum systems is eliminated. Our estimate is unbiased in the low-temperature limit, and the error bar is reliably estimated. Level spectroscopy from quantum Monte Carlo data is developed as an application of the unbiased gap estimation. From the spectral analysis, we precisely determine the Kosterlitz-Thouless quantum phase-transition point of the spin-Peierls model. It is established that the quantum phonon with a finite frequency is essential to the critical theory governed by the antiadiabatic limit, i.e., the $k=1$ SU(2) Wess-Zumino-Witten model.
Monte Carlo Method for Electron Transport Simulation in SF6-CO2 Gas Mixtures
NASA Astrophysics Data System (ADS)
Xiao, Deng-Ming; Xu, Xin; Yang, Jing-lin
2004-02-01
The Monte Carlo method is used for the simulation of electron transport in SF6-CO2 gas mixtures in a uniform electric field. The electron swarm behavior of SF6-CO2 gas mixtures is calculated and analyzed over the E/N range of 272.83-364.51 Td (1 Td = 10^-21 V·m^2) and compared with experimental results. The Monte Carlo simulation shows that the present set of cross sections of SF6 and CO2, revised according to the experimental results, gives values of swarm parameters such as the ionization and electron attachment coefficients, drift velocity, and longitudinal diffusion coefficient that are in excellent agreement with the respective measurements over a relatively wide range of E/N.
A boundary-dispatch Monte Carlo (Exodus) method for analysis of conductive heat transfer problems
Naraghi, M.H.N. [Manhattan Coll., Riverdale, NY (United States); Shunchang Tsai [Harvard Univ., Cambridge, MA (United States)
1993-12-01
A boundary-dispatch Monte Carlo (Exodus) method, in which particles are dispatched from the boundaries of a conductive medium or source of heat, is developed. A fixed number of particles are dispatched from a boundary node to the nearest internal node. These particles make random walks within the medium similar to those of the conventional Monte Carlo method. Each time a particle visits an internal node, a number equal to the temperature of the boundary node from which the particles were dispatched is added to a counter. Performing this procedure for all boundary nodes, the temperature of a node can be determined by dividing the counter of that node by the total number of particle visits to it. Two versions of the boundary-dispatch method (BDM) are presented: multispecies and bispecies BDM. The results of bispecies BDM based on the Exodus dispatching method compare well with the Gauss-Seidel method in both accuracy and computational time. Its computational time is much less than that of the shrinking-boundary Exodus method.
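The counting scheme described above can be sketched in a few lines of Python. This is an illustrative toy (uniform 2D grid, unweighted nearest-neighbour walks, made-up function and argument names), not the multispecies or bispecies BDM of the paper:

```python
import random

def boundary_dispatch_temperature(nx, ny, boundary_temp, walks_per_node=2000,
                                  rng=None):
    """Toy boundary-dispatch estimate of steady-state nodal temperatures.

    boundary_temp maps boundary-node coordinates (just outside the nx x ny
    interior grid) to prescribed temperatures.  Every visit of a dispatched
    particle to an interior node adds the source boundary's temperature to
    that node's counter; the estimate is counter / visits.
    """
    rng = rng or random.Random(0)
    temp_sum = {(i, j): 0.0 for i in range(nx) for j in range(ny)}
    visits = {(i, j): 0 for i in range(nx) for j in range(ny)}
    for (bx, by), t in boundary_temp.items():
        for _ in range(walks_per_node):
            # dispatch to the interior node nearest the boundary node
            x = min(max(bx, 0), nx - 1)
            y = min(max(by, 0), ny - 1)
            while True:
                temp_sum[(x, y)] += t
                visits[(x, y)] += 1
                dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                x, y = x + dx, y + dy
                if not (0 <= x < nx and 0 <= y < ny):
                    break  # the walk exits through a boundary
    return {n: temp_sum[n] / visits[n] for n in visits if visits[n]}
```

With all boundary nodes held at the same temperature the estimate reproduces that temperature exactly, which makes a convenient sanity check.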
An overview of spatial microscopic and accelerated kinetic Monte Carlo methods
NASA Astrophysics Data System (ADS)
Chatterjee, Abhijit; Vlachos, Dionisios G.
2007-07-01
The microscopic spatial kinetic Monte Carlo (KMC) method has been employed extensively in materials modeling. In this review paper, we focus on different traditional and multiscale KMC algorithms, challenges associated with their implementation, and methods developed to overcome these challenges. In the first part of the paper, we compare the implementation and computational cost of the null-event and rejection-free microscopic KMC algorithms. A firmer and more general foundation of the null-event KMC algorithm is presented. Statistical equivalence between the null-event and rejection-free KMC algorithms is also demonstrated. Implementation and efficiency of various search and update algorithms, which are at the heart of all spatial KMC simulations, are outlined and compared via numerical examples. In the second half of the paper, we review various spatial and temporal multiscale KMC methods, namely, the coarse-grained Monte Carlo (CGMC), the stochastic singular perturbation approximation, and the τ-leap methods, introduced recently to overcome the disparity of length and time scales and the one-at-a-time execution of events. The concepts of the CGMC and the τ-leap methods, stochastic closures, multigrid methods, error associated with coarse-graining, a posteriori error estimates for generating spatially adaptive coarse-grained lattices, and computational speed-up upon coarse-graining are illustrated through simple examples from crystal growth, defect dynamics, adsorption-desorption, surface diffusion, and phase transitions.
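The contrast between the rejection-free and null-event algorithms discussed in the first part of the review can be illustrated with a minimal sketch. Both functions below are generic toys over a flat list of event rates, with assumed names, not the lattice-based implementations the paper compares:

```python
import math
import random

def rejection_free_kmc_step(rates, rng):
    """Rejection-free (Gillespie-type) step: select event i with probability
    rates[i]/total and advance the clock by an exponential waiting time."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

def null_event_kmc_step(rates, k_max, rng):
    """Null-event step: propose a uniformly random event and accept it with
    probability rates[i]/k_max; rejections are 'null' events that still
    advance the clock."""
    i = rng.randrange(len(rates))
    accepted = rng.random() < rates[i] / k_max
    dt = -math.log(1.0 - rng.random()) / (len(rates) * k_max)
    return (i if accepted else None), dt
```

Both steps sample the same stochastic process; the null-event variant trades selection cost per step for wasted (null) steps, which is exactly the efficiency trade-off the review analyzes.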
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Webster, C.
2014-12-01
The rational management of an oil and gas reservoir requires an understanding of its response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of subsurface uncertainties on predictions of oil and gas production. Because subsurface properties are typically heterogeneous, the models involve a large number of parameters, and the dimension-independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed as a variance reduction technique to improve the computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for the MLMC method that further reduces the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a newly added level of MLMC, we take advantage of an approximation of the model outputs constructed from simulations on previous levels to provide better initial states for the new simulations, which improves efficiency by, e.g., reducing the number of iterations in linear system solving or the number of needed time steps. This is achieved using mesh-free interpolation methods, such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC at a significantly reduced cost.
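For readers unfamiliar with the MLMC telescoping sum that the acceleration builds on, a bare-bones sketch may help. The `sampler` interface (returning a correlated fine/coarse pair per level) is an assumption made for illustration; the paper's reservoir simulations are far more involved:

```python
import random

def mlmc_estimate(sampler, n_per_level, rng):
    """Plain multilevel Monte Carlo estimator:

        E[P_L]  ~  E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],

    where each expectation is estimated with its own sample size.
    sampler(level, rng) must return a correlated pair (P_l, P_{l-1})
    computed from the same random input, with P_{-1} taken as 0.
    """
    estimate = 0.0
    for level, n in enumerate(n_per_level):
        diff_sum = 0.0
        for _ in range(n):
            fine, coarse = sampler(level, rng)
            diff_sum += fine - coarse
        estimate += diff_sum / n
    return estimate
```

Because the level differences have small variance, most samples can be spent on the cheap coarse levels; the paper's contribution is to additionally warm-start each fine-level solve from interpolated coarse-level output.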
Wang, Fang; Liao, Yuxi; Zheng, Xiaoxiang
2014-01-01
Sequential Monte Carlo estimation on point processes has been successfully applied to predicting movement from neural activity. However, the method has issues such as an oversimplified tuning model and high computational complexity, which may degrade the decoding performance of motor brain machine interfaces. In this paper, we adopt a general tuning model that takes recent ensemble activity into account. The goodness-of-fit analysis demonstrates that the proposed model can predict the neuronal response more accurately than one depending only on kinematics. A new sequential Monte Carlo algorithm based on the proposed model is constructed. The algorithm significantly reduces the root mean square error of the decoding results, with a 23.6% decrease in position estimation. In addition, we accelerate the decoding by implementing the proposed algorithm in a massively parallel manner on a GPU. The results demonstrate that spike trains can be decoded as a point process in real time even with 8000 particles or 300 neurons, over 10 times faster than the serial implementation. The main contribution of our work is to enable the sequential Monte Carlo algorithm with point-process observations to output the movement estimate much faster and more accurately. PMID:24949462
Mcclarren, Ryan G [Los Alamos National Laboratory; Urbatsch, Todd J [Los Alamos National Laboratory
2008-01-01
In this note we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt → ∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method, basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method alleviates the nonphysical overheating that occurs in standard IMC when the time step is large and significantly diminishes the statistical noise in the solution.
Parsons, T.
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
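The repeated-draw idea can be sketched as follows: random (mu, sigma) candidates for a lognormal recurrence PDF are scored against a short interval series and returned ranked. This is a hedged illustration with assumed parameter ranges and function names, not Parsons' exact procedure or choice of distributions:

```python
import math
import random

def rank_recurrence_params(intervals, n_trials=20000, rng=None):
    """Score randomly drawn (mu, sigma) lognormal recurrence parameters
    against a short series of inter-event intervals and return the draws
    ranked by log-likelihood, best first."""
    rng = rng or random.Random(1)
    ranked = []
    for _ in range(n_trials):
        mu = rng.uniform(0.0, 8.0)        # assumed search range (log time)
        sigma = rng.uniform(0.05, 2.0)    # assumed shape-parameter range
        ll = 0.0
        for t in intervals:
            z = (math.log(t) - mu) / sigma
            ll += -math.log(t * sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z
        ranked.append((ll, mu, sigma))
    ranked.sort(reverse=True)
    return ranked
```

The top of the ranked list approximates the maximum-likelihood parameters, while the spread of high-scoring draws indicates the parameter uncertainty that the paper propagates into hazard calculations.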
A Macro-Monte Carlo method for the simulation of diffuse light transport in tissue
Finlay, Jarod C.; Zhu, Timothy C
2015-01-01
The Monte Carlo (MC) method of calculating light distributions in turbid media such as tissue has become the gold standard, especially in complex geometries and heterogeneous tissue. The utility of the MC method, however, is limited by its computational intensity. In an effort to reduce the time needed for MC calculations, we have adapted a macro-Monte Carlo (MMC) method (Neuenschwander, et al. 1995, Phys. Med. Biol. 40, 543-574) to the solution of tissue optics problems. Traditional MC routines trace individual photons step-by-step through the tissue. Instead, the MMC approach relies on a data set consisting of spheres in which the light absorbed in each voxel is pre-calculated using a traditional MC routine. At each MMC step, the pre-calculated absorbed light dose in the appropriate sphere, aligned to the current position and direction of the photon, is recorded in the dose matrix. The position and direction of the photon exiting the sphere are chosen from the exit distribution of the pre-calculated sphere, and the process is repeated. By choosing the size of the pre-calculated sphere appropriately, arbitrarily complex boundary geometries can be simulated. We compare the accuracy and calculation time of the MMC method with a traditional MC algorithm for a variety of tissue optical properties and geometries. We find that the MMC algorithm can increase the speed of calculation by as much as two orders of magnitude, depending on the optical properties being simulated, without a significant loss in accuracy.
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation time. In MC dose calculation research, the main task is to speed up the computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce the accurate radiotherapy dose calculation time on a normal computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport is presented, with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed is increased with only a slight reduction in accuracy; second, a variety of MC acceleration methods are applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the convergence rate of the MC method. The fast MC method was tested on many simple physical models and on clinical cases including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
Hunter, J. L. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave., 24-107, Cambridge, MA 02139 (United States); Sutton, T. M. [Knolls Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P. O. Box 1072, Schenectady, NY 12301-1072 (United States)
2013-07-01
In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
Figueira, C; Di Maria, S; Baptista, M; Mendes, M; Madeira, P; Vaz, P
2015-07-01
Computed tomography (CT) is one of the most used techniques in medical diagnosis, and its use has become one of the main sources of exposure of the population to ionising radiation. This work concentrates on paediatric patients, since children exhibit higher radiosensitivity than adults. Nowadays, patient doses are estimated using two standard CT dose index (CTDI) phantoms as a reference to calculate CTDI volume (CTDIvol) values. This study aims at improving the knowledge about the radiation exposure of children and at better assessing the accuracy of the CTDIvol method. The effectiveness of the CTDIvol method for patient dose estimation was investigated through a sensitivity study, taking into account the doses obtained by three methods: measured CTDIvol, CTDIvol values simulated with the Monte Carlo (MC) code MCNPX, and the recently proposed Size-Specific Dose Estimate (SSDE) method. In order to assess organ doses, MC simulations were executed with paediatric voxel phantoms. PMID:25883302
Werner, M J; Sornette, D
2009-01-01
In meteorology, engineering and computer sciences, data assimilation is routinely employed as the optimal way to combine noisy observations with prior model information for obtaining better estimates of a state, and thus better forecasts, than can be achieved by ignoring data uncertainties. Earthquake forecasting, too, suffers from measurement errors and partial model information and may thus gain significantly from data assimilation. We present perhaps the first fully implementable data assimilation method for earthquake forecasts generated by a point-process model of seismicity. We test the method on a synthetic and pedagogical example of a renewal process observed in noise, which is relevant to the seismic gap hypothesis, models of characteristic earthquakes and to recurrence statistics of large quakes inferred from paleoseismic data records. To address the non-Gaussian statistics of earthquakes, we use sequential Monte Carlo methods, a set of flexible simulation-based methods for recursively estimating ar...
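The sequential Monte Carlo machinery referred to here is, at its core, a particle filter. A minimal sequential-importance-resampling step, with generic `transition` and `likelihood` callables standing in for the renewal-process model (all names are illustrative), might look like:

```python
import random

def sir_particle_filter_step(particles, weights, transition, likelihood,
                             obs, rng):
    """One sequential-importance-resampling (SIR) step: propagate every
    particle through the state transition, reweight by the observation
    likelihood, and resample back to uniform weights."""
    particles = [transition(p, rng) for p in particles]
    weights = [w * likelihood(obs, p) for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # multinomial resampling concentrates particles in high-likelihood regions
    particles = rng.choices(particles, weights=weights, k=len(particles))
    n = len(particles)
    return particles, [1.0 / n] * n
```

Because the weighting step accepts an arbitrary likelihood, the scheme handles the non-Gaussian earthquake statistics that rule out Kalman-type assimilation.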
CCMR: Method Development of Dynamic Mass Diffusion Monte Carlo using Lennard-Jones Clusters
NSDL National Science Digital Library
Craig, Helen A.
2007-08-29
Lennard-Jones clusters, clusters of inert particles, have a long history of being studied. Many algorithms have been proposed and used with varying levels of success, from "basin hopping" [1] to "greedy search" [2]. Despite these achievements, the Lennard-Jones potential continues to be a widely studied model. Not only is it a good test case for new particle structure algorithms, but it is still an interesting model that we can yet learn from. In this project we proposed to study these clusters using a method never before attempted: dynamic mass diffusion Monte Carlo.
Path-integral-expanded-ensemble Monte Carlo method in treatment of the sign problem for fermions
NASA Astrophysics Data System (ADS)
Voznesenskiy, M. A.; Vorontsov-Velyaminov, P. N.; Lyubartsev, A. P.
2009-12-01
The expanded-ensemble Monte Carlo method with the Wang-Landau algorithm was used to calculate the ratio of partition functions for classes of permutations in the problem of several interacting quantum particles (fermions) in an external field. Simulations of systems consisting of 2 up to 7 interacting particles in a harmonic or Coulombic field were performed. The presented approach allows one to carry out calculations at temperatures low enough to extract data on the ground-state energy and low-temperature thermodynamics.
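The Wang-Landau iteration used to obtain the expanded-ensemble weights can be illustrated on a much simpler system. The sketch below estimates the density of states of a small 1D Ising ring (energy counted as the number of broken bonds); the function name and parameters are illustrative, and the toy physics is unrelated to the fermion problem of the paper:

```python
import math
import random

def wang_landau_ising_ring(n_spins=6, f_final=1e-3, flat=0.8, rng=None):
    """Wang-Landau estimate of ln g(E) for a 1D Ising ring, with E counted
    as the (even) number of unsatisfied bonds.  n_spins should be even so
    that every even level up to n_spins is reachable."""
    rng = rng or random.Random(0)
    spins = [1] * n_spins

    def energy(s):
        return sum(1 for i in range(n_spins) if s[i] != s[(i + 1) % n_spins])

    levels = list(range(0, n_spins + 1, 2))
    log_g = {e: 0.0 for e in levels}
    hist = {e: 0 for e in levels}
    e_cur = energy(spins)
    log_f = 1.0                                  # initial modification factor
    while log_f > f_final:
        for _ in range(1000 * n_spins):
            i = rng.randrange(n_spins)
            spins[i] = -spins[i]                 # propose a single spin flip
            e_new = energy(spins)
            # accept with probability min(1, g(E_cur)/g(E_new))
            if rng.random() < math.exp(min(0.0, log_g[e_cur] - log_g[e_new])):
                e_cur = e_new
            else:
                spins[i] = -spins[i]             # reject: undo the flip
            log_g[e_cur] += log_f
            hist[e_cur] += 1
        counts = list(hist.values())
        if min(counts) > flat * (sum(counts) / len(counts)):
            hist = {e: 0 for e in levels}        # histogram flat: refine f
            log_f *= 0.5
    return log_g
```

The same flat-histogram update, applied to permutation classes instead of energy levels, is what lets the expanded-ensemble method estimate ratios of partition functions.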
Dynamic Load Balancing for Petascale Quantum Monte Carlo Applications: The Alias Method
Sudheer, C. D. [Sri Sathya Sai University; Krishnan, S. [Florida State University; Srinivasan, Ashok [ORNL; Kent, Paul R [ORNL
2013-01-01
Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that require load balancing.
A new Monte Carlo method for dynamical evolution of non-spherical stellar systems
NASA Astrophysics Data System (ADS)
Vasiliev, Eugene
2015-01-01
We have developed a novel Monte Carlo method for simulating the dynamical evolution of stellar systems in arbitrary geometry. The orbits of stars are followed in a smooth potential represented by a basis-set expansion and perturbed after each timestep using local velocity diffusion coefficients from the standard two-body relaxation theory. The potential and diffusion coefficients are updated after an interval of time that is a small fraction of the relaxation time, but may be longer than the dynamical time. Thus, our approach is a bridge between Spitzer's formulation of the Monte Carlo method and the temporally smoothed self-consistent field method. The primary advantages are the ability to follow the secular evolution of the shape of the stellar system, and the possibility of scaling the amount of two-body relaxation to the necessary value, unrelated to the actual number of particles in the simulation. Possible future applications of this approach in galaxy dynamics include the problem of consumption of stars by a massive black hole in a non-spherical galactic nucleus, the evolution of binary supermassive black holes, and the influence of chaos on the shape of galaxies, while for globular clusters it may be used for studying the influence of rotation.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
NASA Astrophysics Data System (ADS)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
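The Alias Method of the title is Walker's classic O(1) discrete sampling scheme. A minimal table construction and draw routine (Vose's variant, with assumed function names) looks like:

```python
import random

def build_alias_table(weights):
    """Walker/Vose alias method: O(n) preprocessing so that sampling index i
    with probability proportional to weights[i] costs O(1) per draw."""
    n = len(weights)
    total = sum(weights)
    prob = [w * n / total for w in weights]     # scaled so the mean is 1
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                            # s's deficit is filled by l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def alias_sample(prob, alias, rng):
    """Draw an index: pick a column uniformly, then use it or its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

In the load-balancing setting, the same pairing idea matches each under-loaded process with at most one over-loaded donor, which is why every process receives at most one message.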
The applicability of certain Monte Carlo methods to the analysis of interacting polymers
Krapp, D.M. Jr. [Univ. of California, Berkeley, CA (United States)
1998-05-01
The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivot algorithm of Madras and Sokal with Metropolis rejection to locate the phase transition, which is known to occur at β_crit ≈ 0.99, and to recalculate the known value of the critical exponent ν ≈ 0.58 of the system for β = β_crit. Although the pivot-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of ν. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of β_crit using smaller values of N is 1.01 ± 0.01, and the estimate for ν at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions can in practice fail to sample phase space accurately and thus not allow accurate estimation of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
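A single pivot move of the Madras-Sokal type can be sketched compactly. The toy below works on the square lattice and enforces only self-avoidance, omitting the hexagonal lattice and the interaction energy (and hence the Metropolis test) of the study:

```python
import random

# the three non-identity lattice rotations about the origin
ROTATIONS = [
    lambda x, y: (y, -x),    # 90 degrees clockwise
    lambda x, y: (-x, -y),   # 180 degrees
    lambda x, y: (-y, x),    # 90 degrees counter-clockwise
]

def pivot_step(walk, rng):
    """One pivot move for a self-avoiding walk on the square lattice:
    rotate the tail of the walk about a randomly chosen pivot site and
    keep the result only if it is still self-avoiding."""
    k = rng.randrange(1, len(walk) - 1)          # pivot site (not an endpoint)
    px, py = walk[k]
    rot = rng.choice(ROTATIONS)
    new_tail = [
        (px + dx, py + dy)
        for dx, dy in (rot(x - px, y - py) for x, y in walk[k + 1:])
    ]
    candidate = walk[:k + 1] + new_tail
    if len(set(candidate)) == len(candidate):    # self-avoidance check
        return candidate
    return walk                                  # reject: keep the old walk
```

In the interacting model, the self-avoidance rejection above would be combined with a Metropolis accept/reject on the energy change, which is precisely the combination that becomes inefficient at large N.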
NASA Astrophysics Data System (ADS)
Tokii, Maki; Kita, Eiji; Mitsumata, Chiharu; Ono, Kanta; Yanagihara, Hideto; Matsumoto, Makoto
2015-05-01
Visualization of the magnetic domain structure is indispensable to the investigation of magnetization processes and the coercivity mechanism. It is necessary to develop a reconstruction method from the reciprocal-space image to the real-space image, which requires solving the problem of missing phase information in the reciprocal-space image. We propose a method that extends the Fourier image with mean-value padding to compensate for the missing phase information. We visualized the magnetic domain structure using the reverse Monte Carlo method with simulated annealing to accelerate the calculation. With this technique, we demonstrated the restoration of the magnetic domain structure, obtained the magnetization and magnetic domain width, and reproduced the characteristic form that constitutes a magnetic domain.
Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI
Anguiano Morales, Marcelino; Martinez, Amalia; Rayas, J. A. [Centro de Investigaciones en Optica A. C. Apartado Postal 1-948, 37000 Leon (Mexico); Cordero, Raul R. [Leibniz Universitaet Hannover, Herrenhaeuser Str. 2, D-30419 Hannover (Germany)
2008-04-15
A method for simultaneously measuring whole-field in-plane displacements using optical fibers, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI), is presented in this paper. A set of single-mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers is utilized to illuminate the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. Another pair of optical fibers is set to be sensitive only to vertical in-plane displacement. The two pairs of optical fibers differ in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (the Takeda method), we obtain quantitative data on the whole-field displacements. We evaluated the uncertainty associated with the phases by means of a Monte Carlo-based technique.
Adapting phase-switch Monte Carlo method for flexible organic molecules
NASA Astrophysics Data System (ADS)
Bridgwater, Sally; Quigley, David
2014-03-01
The role of cholesterol in lipid bilayers has been widely studied via molecular simulation; however, there has been relatively little work on crystalline cholesterol in biological environments. Recent work has linked the crystallisation of cholesterol in the body with heart attacks and strokes. Any attempt to model this process will require new models and advanced sampling methods to capture and quantify the subtle polymorphism of solid cholesterol, in which two crystalline phases are separated by a phase transition close to body temperature. To this end, we have adapted phase-switch Monte Carlo for use with flexible molecules, to calculate the free energy difference between crystal polymorphs to a high degree of accuracy. The method samples an order parameter that divides the displacement space for the N molecules into regions energetically favourable for each polymorph, and is traversed using biased Monte Carlo. Results for a simple model of butane will be presented, demonstrating that conformational flexibility can be correctly incorporated within a phase-switching scheme. Extension to a coarse-grained model of cholesterol and the resulting free energies will be discussed.
KERR, REX A.; BARTOL, THOMAS M.; KAMINSKY, BORIS; DITTRICH, MARKUS; CHANG, JEN-CHIEN JACK; BADEN, SCOTT B.; SEJNOWSKI, TERRENCE J.; STILES, JOEL R.
2010-01-01
Many important physiological processes operate at time and space scales far beyond those accessible to atom-realistic simulations, and yet discrete stochastic rather than continuum methods may best represent finite numbers of molecules interacting in complex cellular spaces. We describe and validate new tools and algorithms developed for a new version of the MCell simulation program (MCell3), which supports generalized Monte Carlo modeling of diffusion and chemical reaction in solution, on surfaces representing membranes, and combinations thereof. A new syntax for describing the spatial directionality of surface reactions is introduced, along with optimizations and algorithms that can substantially reduce computational costs (e.g., event scheduling, variable time and space steps). Examples for simple reactions in simple spaces are validated by comparison to analytic solutions. Thus we show how spatially realistic Monte Carlo simulations of biological systems can be far more cost-effective than often is assumed, and provide a level of accuracy and insight beyond that of continuum methods. PMID:20151023
Efficient continuous-time quantum Monte Carlo method for the ground state of correlated fermions
NASA Astrophysics Data System (ADS)
Wang, Lei; Iazzi, Mauro; Corboz, Philippe; Troyer, Matthias
2015-06-01
We present the ground state extension of the efficient continuous-time quantum Monte Carlo algorithm for lattice fermions of M. Iazzi and M. Troyer, Phys. Rev. B 91, 241118 (2015), 10.1103/PhysRevB.91.241118. Based on continuous-time expansion of an imaginary-time projection operator, the algorithm is free of systematic error and scales linearly with projection time and interaction strength. Compared to the conventional quantum Monte Carlo methods for lattice fermions, this approach has greater flexibility and is easier to combine with powerful machinery such as histogram reweighting and extended ensemble simulation techniques. We discuss the implementation of the continuous-time projection in detail using the spinless t-V model as an example and compare the numerical results with exact diagonalization, density matrix renormalization group, and infinite projected entangled-pair states calculations. Finally we use the method to study the fermionic quantum critical point of spinless fermions on a honeycomb lattice and confirm previous results concerning its critical exponents.
Harries, Tim J
2015-01-01
We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically-thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelisation method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion onto, and the growth of, the protostars. We detail the resu...
Charged-Particle Thermonuclear Reaction Rates: I. Monte Carlo Method and Statistical Distributions
Richard Longland; Christian Iliadis; Art Champagne; Joe Newton; Claudio Ugalde; Alain Coc; Ryan Fitzgerald
2010-04-23
A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering into the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended "classical" rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless "minimum" (or "lower limit") and "maximum" (or "upper limit") reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this series (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this series (Paper III). In the fourth paper of this series (Paper IV) we compare our new reaction rates to previous results.
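The sampling procedure described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: the resonance energy, strength, and temperature values below are invented, and the rate expression is reduced to its exponential dependence on the resonance energy.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rate(n_samples=20000, T9=0.1):
    # Each nuclear input is drawn from its own probability density and the
    # rate is recomputed per sample (all values here are illustrative only).
    E_r = rng.normal(0.150, 0.005, n_samples)          # resonance energy, MeV (Gaussian)
    wg = rng.lognormal(np.log(1e-8), 0.3, n_samples)   # resonance strength, MeV (lognormal)
    # A narrow-resonance rate is proportional to wg * exp(-11.605 * E_r / T9)
    return wg * np.exp(-11.605 * E_r / T9)

rates = sample_rate()
# Low, median and high rates from the 0.16, 0.50 and 0.84 quantiles:
low, med, high = np.quantile(rates, [0.16, 0.50, 0.84])
# Lognormal approximation of the output rate distribution:
mu, sigma = np.log(rates).mean(), np.log(rates).std()
```

Because the log of the sampled rate is here a sum of normal variables, exp(μ) closely tracks the Monte Carlo median, which is the consistency the authors exploit when summarizing rates by lognormal parameters.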
Anderson, James B.
variational Monte Carlo. Kevin E. Riley and James B. Anderson, Department of Chemistry, 152 Davey Laboratory: trial wavefunctions used in quantum Monte Carlo calculations of molecular structure. ... This energy difference corresponds to about 1% of the correlation energy. Variational Monte Carlo (VMC) has ...
Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings
Sadeghi, K.; Gauthier, J.L.; Field, G.D.; Greschner, M.; Agne, M.; Chichilnisky, E.J.; Paninski, L.
2013-01-01
It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier “greedy” computational approaches. PMID:23194406
On processed splitting methods and high-order actions in path-integral Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Casas, Fernando
2010-10-01
Processed splitting methods are particularly well adapted to carry out path-integral Monte Carlo (PIMC) simulations: since one is mainly interested in estimating traces of operators, only the kernel of the method is necessary to approximate the thermal density matrix. Unfortunately, they suffer the same drawback as standard, nonprocessed integrators: kernels of effective order greater than two necessarily involve some negative coefficients. This problem can be circumvented, however, by incorporating modified potentials into the composition, thus rendering schemes of higher effective order. In this work we analyze a family of fourth-order schemes recently proposed in the PIMC setting, paying special attention to their linear stability properties, and justify their observed behavior in practice. We also propose a new fourth-order scheme requiring the same computational cost but with an enlarged stability interval.
Graduiertenschule Hybrid Monte Carlo
Heermann, Dieter W.
Graduiertenschule Hybrid Monte Carlo, SS 2005, Heermann, Universität Heidelberg. In conventional Monte Carlo (MC) calculations of condensed matter systems, such as an N... ... probability distribution, unlike Monte Carlo calculations. The Hybrid Monte Carlo (HMC) method combines ...
Monte Carlo at Work, by Gary D. Doolen and John Hendricks. Every second nearly 10,000,000,000 "random" numbers are being generated on computers around the world for Monte Carlo solutions to problems ... hundreds of full-time careers invested in the fine art of generating Monte Carlo solutions, a livelihood ...
Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
Peplow, Douglas E. [ORNL; Miller, Thomas Martin [ORNL; Patton, Bruce W [ORNL; Wagner, John C [ORNL
2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
Calculations of alloy phases with a direct Monte-Carlo method
Faulkner, J.S.; Wang, Yang; Horvath, E.A. [Florida Atlantic Univ., Boca Raton, FL (United States); Stocks, G.M. [Oak Ridge National Lab., TN (United States)
1994-09-01
A method for calculating the boundaries that describe solid-solid phase transformations in the phase diagrams of alloys is described. The method is first-principles in the sense that the only input is the atomic numbers of the constituents. It proceeds from the observation that the crux of the Monte Carlo method for obtaining the equilibrium distribution of atoms in an alloy is a calculation of the energy required to replace an A atom on site i with a B atom when the configuration of the atoms on the neighboring sites, κ, is specified: ΔH_κ(A→B) = E_B^κ - E_A^κ. Normally, this energy difference is obtained by introducing interatomic potentials, v_ij, into an Ising Hamiltonian, but the authors calculate it using the embedded cluster method (ECM). In the ECM an A or B atom is placed at the center of a cluster of atoms with the specified configuration κ, and the atoms on all the other sites in the alloy are simulated by the effective scattering matrix obtained from the coherent potential approximation. The interchange energy is calculated directly from the electronic structure of the cluster. The table of ΔH_κ(A→B) values for all configurations κ and several alloy concentrations is used in a Monte Carlo calculation that predicts the phase of the alloy at any temperature and concentration. The detailed shapes of the miscibility gaps in the palladium-rhodium and copper-nickel alloy systems are shown.
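The Monte Carlo step built on the tabulated interchange energies can be sketched as follows. This is our illustration, not the authors' code: a 1D ring stands in for the alloy lattice, and the ΔH_κ values are invented rather than taken from embedded-cluster electronic-structure calculations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical table of interchange energies dH_k(A->B), indexed by the
# local configuration k (here simply the number of B nearest neighbours).
# In the paper these come from ECM calculations; these values are invented.
dH = {0: 0.10, 1: 0.02, 2: -0.06}   # eV, illustrative only

def metropolis_sweep(spins, beta):
    """One Monte Carlo sweep on a 1D ring: attempt an A<->B flip at every
    site using the tabulated energy difference for its neighbourhood."""
    n = len(spins)
    for i in range(n):
        kappa = int(spins[(i - 1) % n] + spins[(i + 1) % n])  # B neighbours
        dE = dH[kappa] if spins[i] == 0 else -dH[kappa]       # A=0, B=1
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i] = 1 - spins[i]

spins = np.zeros(100, dtype=int)    # start from pure A
for _ in range(200):
    metropolis_sweep(spins, beta=20.0)
concentration = spins.mean()        # equilibrium B concentration estimate
```

Repeating such runs over a grid of temperatures and concentrations is what traces out the phase boundaries the abstract describes.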
Coupled proton/neutron transport calculations using the S_N and Monte Carlo methods
Filippone, W.L. (Arizona Univ., Tucson, AZ (USA). Dept. of Nuclear and Energy Engineering); Little, R.C.; Morel, J.E.; MacFarlane, R.E.; Young, P.G. (Los Alamos National Lab., NM (USA))
1991-01-01
Coupled charged/neutral particle transport calculations are most often carried out using the Monte Carlo technique. For example, the ITS, EGS, and MCNP (Version 4) codes are used extensively for electron/photon transport calculations, while HETC models the transport of protons, neutrons, and heavy ions. In recent years there has been considerable progress in deterministic models of electron transport, and many of these models are applicable to protons. However, even with these new models (and the well established models for neutron transport), deterministic coupled neutron/proton transport calculations have not been feasible for most problems of interest, due to a lack of coupled multigroup neutron/proton cross-section sets. Such cross-section sets are now being developed at Los Alamos. Using these cross sections we have carried out coupled proton/neutron transport calculations using both the S_N and Monte Carlo methods. The S_N calculations used a code called SMARTEPANTS (simulating many accumulative Rutherford trajectories, electron, proton and neutral transport solver), while the Monte Carlo calculations are done with the multigroup option of the MCNP code. Both SMARTEPANTS and MCNP require standard multigroup cross-section libraries. HETC, on the other hand, avoids the need for precalculated nuclear cross sections by modeling individual nucleon collisions as the transported neutrons and protons interact with nuclei. 21 refs., 1 fig.
Bianchini, G.; Burgio, N.; Carta, M. [ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy); Peluso, V. [ENEA C.R. BOLOGNA, Via Martiri di Monte Sole, 4, 40129 Bologna (Italy); Fabrizio, V.; Ricci, L. [Univ. of Rome La Sapienza, C/o ENEA C.R. CASACCIA, via Anguillarese, 301, 00123 S. Maria di Galeria Roma (Italy)
2012-07-01
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation, by deterministic (the French code ERANOS) and Monte Carlo (the US code MCNPX) calculations, of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The reactivity, in dollar units, inferred by the Area-ratio method shows an overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
Gong, Xingchu; Li, Yao; Chen, Huali; Qu, Haibin
2015-01-01
A design space approach was applied to optimize the extraction process of Danhong injection. Dry matter yield and the yields of five active ingredients were selected as process critical quality attributes (CQAs). Extraction number, extraction time, and the mass ratio of water and material (W/M ratio) were selected as critical process parameters (CPPs). Quadratic models between CPPs and CQAs were developed with determination coefficients higher than 0.94. Active ingredient yields and dry matter yield increased as the extraction number increased. Monte-Carlo simulation with models established using a stepwise regression method was applied to calculate the probability-based design space. Step length showed little effect on the calculation results. Higher simulation number led to results with lower dispersion. Data generated in a Monte Carlo simulation following a normal distribution led to a design space with a smaller size. An optimized calculation condition was obtained with 10000 simulation times, 0.01 calculation step length, a significance level value of 0.35 for adding or removing terms in a stepwise regression, and a normal distribution for data generation. The design space with a probability higher than 0.95 to attain the CQA criteria was calculated and verified successfully. Normal operating ranges of 8.2-10 g/g of W/M ratio, 1.25-1.63 h of extraction time, and two extractions were recommended. The optimized calculation conditions can conveniently be used in design space development for other pharmaceutical processes. PMID:26020778
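The probability-based design space calculation described above reduces to sampling the CPPs and counting how often the model-predicted CQAs meet their criteria. A minimal sketch follows; the quadratic model coefficients, noise level, and criterion are invented, not the paper's fitted values, and only one CQA is shown.

```python
import numpy as np

rng = np.random.default_rng(2)

def yield_model(wm_ratio, time_h):
    """Hypothetical quadratic CQA model; the paper's coefficients came
    from stepwise regression on experimental data, these are invented."""
    return 0.2 + 0.05 * wm_ratio + 0.3 * time_h - 0.08 * time_h ** 2

def design_space_probability(n=10000, criterion=0.9):
    # Sample the CPPs across their ranges, add model uncertainty, and
    # count how often the predicted CQA meets its criterion.
    wm = rng.uniform(8.2, 10.0, n)         # W/M ratio range from the abstract
    t = rng.uniform(1.25, 1.63, n)         # extraction time range (h)
    noise = rng.normal(0.0, 0.02, n)       # assumed model uncertainty
    cqa = yield_model(wm, t) + noise
    return float(np.mean(cqa >= criterion))

p = design_space_probability()
```

Repeating this over a grid of candidate operating ranges, and keeping the region where p exceeds 0.95, yields the probability-based design space of the abstract.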
Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods
NASA Astrophysics Data System (ADS)
Ren, Wei; Liu, Hong; Jin, Shi
2014-12-01
In rarefied gas dynamics, the DSMC method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows surrounding re-entry vehicles and micro-/nano-flows. However, the computational cost is expensive, especially as Kn → 0. Even for flows in the near-continuum regime, pure DSMC simulations require considerable computational effort in most cases. Although several DSMC/NS hybrid methods have been proposed to deal with this, those methods still suffer from the boundary treatment, which may cause nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods for the Boltzmann equation, called asymptotic-preserving schemes, whose computational costs are affordable as Kn → 0. Recently, Ren et al. [2] realized the AP schemes with Monte Carlo methods (AP-DSMC), which have better performance than counterpart methods. In this paper, AP-DSMC is applied to simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study the efficiency and capability of capturing complicated flow characteristics.
A Monte Carlo Test of Load Calculation Methods, Lake Tahoe Basin, California-Nevada
NASA Astrophysics Data System (ADS)
Coats, Robert; Liu, Fengjing; Goldman, Charles R.
2002-06-01
The sampling of streams and estimation of total loads of nitrogen, phosphorus, and suspended sediment play an important role in efforts to control the eutrophication of Lake Tahoe. We used a Monte Carlo procedure to test the precision and bias of four methods of calculating total constituent loads for nitrate-nitrogen, soluble reactive phosphorus, particulate phosphorus, total phosphorus, and suspended sediment in one major tributary of the lake. The methods tested were two forms of the Beale's Ratio Estimator, the Period Weighted Sample, and the Rating Curve. Intensive sampling in 1985 (a dry year) and 1986 (a wet year) provided a basis for estimating loads by the "worked record" method for comparison with estimates based on resampling actual data at the lower intensity that characterizes the present monitoring program. The results show that: (1) the Period Weighted Sample method was superior to the other methods for all constituents for 1985; and (2) for total phosphorus, particulate phosphorus, and suspended sediment, the Rating Curve gave the best results in 1986. Modification of the present sampling program and load calculation methods may be necessary to improve the precision and reduce the bias of estimates of total phosphorus loads in basin streams.
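The Monte Carlo test described above amounts to repeatedly subsampling a densely sampled record and comparing each estimate against the "worked record" total. A sketch with synthetic data follows; the flow and concentration models are invented, and only the simplest form of the ratio estimator is shown, without Beale's bias-correction terms.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "worked record": one year of daily flow (Q) and concentration (C).
days = 365
Q = rng.lognormal(1.0, 0.8, days)                   # daily discharge (invented units)
C = 2.0 * Q**0.3 * rng.lognormal(0.0, 0.2, days)    # concentration loosely tied to flow
true_load = float(np.sum(Q * C))                    # "worked record" total load

def ratio_estimate(idx):
    """Simplest ratio-estimator form: scale the sampled mean flux by
    total discharge over sampled mean discharge (Beale's bias
    correction omitted for brevity)."""
    q, c = Q[idx], C[idx]
    return days * np.mean(Q) * np.mean(q * c) / np.mean(q)

# Monte Carlo test: resample roughly biweekly grab samples many times and
# examine the distribution of the estimator around the true load.
estimates = np.array([
    ratio_estimate(rng.choice(days, size=26, replace=False))
    for _ in range(2000)
])
bias = estimates.mean() / true_load - 1.0   # relative bias
cv = estimates.std() / estimates.mean()     # precision (coefficient of variation)
```

Running the same resampling loop for each candidate estimator is what lets bias and precision be compared on an equal footing.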
A modular method to handle multiple time-dependent quantities in Monte Carlo simulations
Shin, J; Perl, J; Schümann, J; Paganetti, H; Faddegon, BA
2015-01-01
A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call “Time Features”. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc., takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel (RMW) accompanied by beam current modulation produces a spread-out Bragg Peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. PMID:22572201
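The separation the abstract describes, a Sequence that samples time values plus per-quantity functions of time, can be sketched as follows. The names, periods, and amplitudes below are our invented stand-ins, not the TOPAS API.

```python
import math
import random

# Minimal sketch of the "Time Feature" idea: a simulation quantity takes
# its value from a function of time (all names and numbers invented).
time_features = {
    "wheel_angle": lambda t: (360.0 * t / 0.1) % 360.0,   # spinning wheel, 0.1 s period
    "beam_current": lambda t: 1.0 + 0.5 * math.sin(2.0 * math.pi * t / 0.1),
}

def run(n_histories, mode="random", t_max=1.0):
    """The Sequence samples time values (sequentially at equal increments
    or randomly from a uniform distribution); each time-dependent quantity
    is then evaluated from its Time Feature at the sampled time."""
    rng = random.Random(4)
    states = []
    for i in range(n_histories):
        t = rng.uniform(0.0, t_max) if mode == "random" else i * t_max / n_histories
        states.append({name: f(t) for name, f in time_features.items()})
    return states

states = run(1000, mode="sequential")
```

Because every quantity is resolved from its own function at a shared sampled time, adding a second or third time-dependent quantity costs nothing extra, which is the modularity the abstract emphasizes.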
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified at moderate computational cost. PMID:26072868
J. A. Jr. Fleck; E. H. Canfield
1984-01-01
An unconditionally stable Monte Carlo method for solving the frequency dependent equations of nonlinear radiation transport has been described previously. One of the central features of this method is the replacement of a portion of the absorption and reemission of radiation by a scattering process. While the inclusion of this scattering process assures the accuracy and stability of solutions regardless
Geir Evensen
1994-01-01
A new sequential data assimilation method is discussed. It is based on forecasting the error statistics using Monte Carlo methods, a better alternative than solving the traditional and computationally extremely demanding approximate error covariance equation used in the extended Kalman filter. The unbounded error growth found in the extended Kalman filter, which is caused by an overly simplified closure in
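The core idea, forecasting error statistics with a Monte Carlo ensemble instead of an explicitly evolved covariance equation, can be sketched for a single scalar state. This is a toy illustration of the ensemble update with invented numbers, not Evensen's full scheme.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy scalar version: forecast error statistics are carried by an
# ensemble of states rather than an evolved covariance equation.
n_ens, truth = 500, 3.0
ensemble = rng.normal(0.0, 2.0, n_ens)     # forecast (prior) ensemble
obs, obs_var = truth + 0.1, 0.5            # one noisy observation of the state

P = ensemble.var(ddof=1)                   # Monte Carlo estimate of forecast error variance
K = P / (P + obs_var)                      # Kalman gain from the ensemble statistics
perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
analysis = ensemble + K * (perturbed_obs - ensemble)   # updated (analysis) ensemble
```

The analysis ensemble is both the updated state estimate and the carrier of the updated error statistics, so no separate covariance equation ever needs to be integrated.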
Yassine Benhdech; Stéphane Beaumont; Jean-Pierre Guédon; Tarraf Torfeh
2010-01-01
In this paper, we deepen the R&D program named DTO-DC (Digital Object Test and Dosimetric Console), whose goal is to develop an efficient, accurate, and complete method to achieve dosimetric quality control (QC) of radiotherapy treatment planning systems (TPS). This method is mainly based on Digital Test Objects (DTOs) and on Monte Carlo (MC) simulation using the PENELOPE code [1].
A First-Passage Kinetic Monte Carlo method for reaction–drift–diffusion processes
Mauro, Ava J., E-mail: avamauro@bu.edu [Department of Mathematics and Statistics, Boston University, 111 Cummington Mall, Boston, MA 02215 (United States)]; Sigurdsson, Jon Karl; Shrake, Justin [Department of Mathematics, University of California, Santa Barbara (United States)]; Atzberger, Paul J., E-mail: atzberg@math.ucsb.edu [6712 South Hall, Department of Mathematics, University of California, Santa Barbara, CA 93106 (United States)]; Isaacson, Samuel A., E-mail: isaacson@math.bu.edu [Department of Mathematics and Statistics, Boston University, 111 Cummington Mall, Boston, MA 02215 (United States)]
2014-02-15
Stochastic reaction–diffusion models are now a popular tool for studying physical systems in which both the explicit diffusion of molecules and noise in the chemical reaction process play important roles. The Smoluchowski diffusion-limited reaction model (SDLR) is one of several that have been used to study biological systems. Exact realizations of the underlying stochastic processes described by the SDLR model can be generated by the recently proposed First-Passage Kinetic Monte Carlo (FPKMC) method. This exactness relies on sampling analytical solutions to one and two-body diffusion equations in simplified protective domains. In this work we extend the FPKMC to allow for drift arising from fixed, background potentials. As the corresponding Fokker–Planck equations that describe the motion of each molecule can no longer be solved analytically, we develop a hybrid method that discretizes the protective domains. The discretization is chosen so that the drift–diffusion of each molecule within its protective domain is approximated by a continuous-time random walk on a lattice. New lattices are defined dynamically as the protective domains are updated, hence we will refer to our method as Dynamic Lattice FPKMC or DL-FPKMC. We focus primarily on the one-dimensional case in this manuscript, and demonstrate the numerical convergence and accuracy of our method in this case for both smooth and discontinuous potentials. We also present applications of our method, which illustrate the impact of drift on reaction kinetics.
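The lattice approximation at the heart of DL-FPKMC, drift-diffusion replaced by a continuous-time random walk whose hop rates match the drift and diffusion coefficients, can be sketched in 1D. This is our illustration with invented parameters; the actual method also handles protective-domain updates and reactions.

```python
import random

rng = random.Random(8)

def drift_diffusion_position(x0, h, D, v, t_max):
    """Continuous-time random walk whose hop rates reproduce drift v and
    diffusion coefficient D on a lattice of spacing h."""
    # Rates chosen so that mean displacement per unit time is v and
    # displacement variance per unit time is 2*D.
    r_right = D / h**2 + v / (2.0 * h)
    r_left = D / h**2 - v / (2.0 * h)
    assert r_left > 0.0, "lattice too coarse for this drift strength"
    total = r_right + r_left
    x, t = x0, 0.0
    while True:
        dt = rng.expovariate(total)        # exponential waiting time between hops
        if t + dt > t_max:
            return x
        t += dt
        x += h if rng.random() < r_right / total else -h

n_walkers = 4000
mean_x = sum(drift_diffusion_position(0.0, 0.1, 1.0, 0.5, 1.0)
             for _ in range(n_walkers)) / n_walkers
```

With v = 0.5 and t_max = 1, the ensemble-mean displacement should approach v * t = 0.5, confirming that the biased hop rates reproduce the intended drift.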
Kang, Hyesung
1996-01-01
We report simulations of diffusive particle acceleration in oblique magnetohydrodynamical (MHD) shocks. These calculations are based on extension to oblique shocks of a numerical model for "thermal leakage" injection of particles at low energy into the cosmic-ray population. That technique, incorporated into a fully dynamical diffusion-convection formalism, was recently introduced for parallel shocks by Kang & Jones (1995). Here, we have compared results of time dependent numerical simulations using our technique with Monte Carlo simulations by Ellison, Baring & Jones (1995) and with in situ observations from the Ulysses spacecraft of oblique interplanetary shocks discussed by Baring et al. (1995). Through the success of these comparisons we have demonstrated that our diffusion-convection method and injection techniques provide a practical tool to capture essential physics of the injection process and particle acceleration at oblique MHD shocks. In addition to the diffusion-convection simulat...
An analysis of the convergence of the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Galitzine, Cyril; Boyd, Iain D.
2015-05-01
In this article, a rigorous framework for the analysis of the convergence of the direct simulation Monte Carlo (DSMC) method is presented. It is applied to the simulation of two test cases: an axisymmetric jet at a Knudsen number of 0.01 and a Mach number of 1, and a two-dimensional cylinder flow at a Knudsen number of 0.05 and a Mach number of 10. The rate of convergence of sampled quantities is found to be well predicted by an extended form of the Central Limit Theorem that takes into account the correlation of samples but requires the calculation of correlation spectra. A simplified analytical model that does not require correlation spectra is then constructed to model the effect of sample correlation. It is then used to obtain an a priori estimate of the convergence error.
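The extended Central Limit Theorem the authors invoke inflates the naive variance by the integrated autocorrelation time of the sample sequence. A sketch using an AR(1) process as a surrogate for correlated DSMC cell samples (the correlation coefficient and lengths are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# AR(1) surrogate for a correlated sequence of DSMC cell samples.
phi, n = 0.8, 200000
noise = rng.normal(0.0, 1.0, n)
x = np.empty(n)
x[0] = noise[0]
for i in range(1, n):
    x[i] = phi * x[i - 1] + noise[i]

def correlated_stderr(samples, max_lag=200):
    """Standard error under the extended CLT: the naive variance is
    inflated by the integrated autocorrelation time tau."""
    s = samples - samples.mean()
    var = s.var()
    acf = [np.mean(s[:-k] * s[k:]) / var for k in range(1, max_lag)]
    tau = 1.0 + 2.0 * sum(acf)             # integrated autocorrelation time
    return np.sqrt(var * tau / len(samples)), tau

se_corr, tau = correlated_stderr(x)
se_naive = x.std() / np.sqrt(n)            # what uncorrelated sampling would give
```

For this AR(1) surrogate the exact value is tau = (1 + phi)/(1 - phi) = 9, so the true standard error is about three times the naive estimate, which is the kind of correction the paper's framework quantifies.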
An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
NASA Technical Reports Server (NTRS)
Kim, Stacy
2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk, such as the optically thick disk interior, are under-sampled, or are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
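The preferential-reemission idea (over-sample the interesting direction and down-weight those packets so the mean energy flux is conserved) can be sketched as follows; the bias value and the two-direction geometry are our invented simplifications.

```python
import random

rng = random.Random(6)

def reemit(bias=0.7):
    """Send a packet 'up' (toward the region of interest) with probability
    `bias` instead of the physical 0.5, carrying a weight chosen so the
    average transported energy is unchanged."""
    if rng.random() < bias:
        return +1, 0.5 / bias              # over-sampled direction, down-weighted
    return -1, 0.5 / (1.0 - bias)          # under-sampled direction, up-weighted

packets = [reemit() for _ in range(100000)]
# Weighted flux toward the region of interest; unbiased estimate of 0.5.
up_flux = sum(w for d, w in packets if d == +1) / len(packets)
```

More packets reach the under-sampled region while the weighted averages stay unbiased, though, as the abstract cautions, the extra weight variance does not always translate into better temperature accuracy.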
Investigating the conditions of Townsend-discharge ignition in helium by the Monte Carlo method
Ul'yanov, K.N.; Chulkov, V.V.
1986-01-01
The Monte Carlo method is used to investigate the one-dimensional model of electronic amplification in helium in strong electric fields. It is shown that in He, close to the minimum of the Paschen curve, the drift velocity, mean electron energy, and first Townsend coefficient vary significantly within the limits of the discharge gap. The probability W_N of the formation of N electron-ion pairs in an avalanche is calculated. It is shown that, with increase in E/p, the dependence W_N(N) becomes steeper, and the probability W_0 increases. Paschen curves are calculated for γ = 0.2, 0.1, 0.067. The results are in satisfactory agreement with existing experimental data.
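The avalanche statistics W_N can be estimated with a very small Monte Carlo model. This is our sketch: the per-step ionization probability and step count are invented constants, whereas in the paper the coefficients vary with the field across the gap.

```python
import random

rng = random.Random(7)

def avalanche(p_ion=0.03, n_steps=100):
    """Follow one seed electron across the gap; on each free-flight step
    every electron present ionizes with probability p_ion, creating a
    new electron-ion pair."""
    electrons, pairs = 1, 0
    for _ in range(n_steps):
        new = sum(1 for _ in range(electrons) if rng.random() < p_ion)
        pairs += new
        electrons += new
    return pairs

counts = [avalanche() for _ in range(3000)]
w0 = counts.count(0) / len(counts)         # estimate of W_0 (no pairs formed)
mean_pairs = sum(counts) / len(counts)
```

Histogramming `counts` gives the full W_N(N) distribution; raising p_ion (the analogue of increasing E/p) shifts the distribution and changes W_0, mirroring the trends reported in the abstract.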
Simulation of aggregating particles in complex flows by the lattice kinetic Monte Carlo method
Flamm, Matthew H.; Sinno, Talid; Diamond, Scott L.
2011-01-01
We develop and validate an efficient lattice kinetic Monte Carlo (LKMC) method for simulating particle aggregation in laminar flows with spatially varying shear rate, such as parabolic flow or flows with standing vortices. A contact time model was developed to describe the particle-particle collision efficiency as a function of the local shear rate, G, and approach angle, θ. This model effectively accounts for the hydrodynamic interactions between approaching particles, which is not explicitly considered in the LKMC framework. For imperfect collisions, the derived collision efficiency is of the form ε = 1 - ∫_0^{π/2} ...
Pozzi, Sara A [ORNL; Downar, Thomas J [ORNL; Padovani, Enrico [Nuclear Engineering Department Politecnico di Milano, Milan, Italy; Clarke, Shaun D [ORNL
2006-01-01
This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using a Monte Carlo code system to simulate the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials. Time correlation functions between detectors were simulated for photon beam-on and photon beam-off operation. In the latter case, the correlation signal is obtained via delayed neutrons from photofission, which induce further fission chains in the nuclear material. An analysis methodology was demonstrated based on features selected from the simulated correlation functions and on the use of artificial neural networks. We show that the methodology can reliably differentiate between highly enriched uranium and plutonium. Furthermore, the mass of the material can be determined with a relative error of about 12%. Keywords: MCNP, MCNP-PoliMi, Artificial neural network, Correlation measurement, Photofission
NASA Technical Reports Server (NTRS)
Haviland, J. K.
1974-01-01
The results of two unrelated studies are reported. The first was an investigation of the formulation of the equations for non-uniform unsteady flows, by perturbation of an irrotational flow to obtain the linear Green's equation. The resulting integral equation was found to contain a kernel which could be expressed as the solution of the adjoint flow equation, a linear equation for small perturbations, but with non-constant coefficients determined by the steady flow conditions. It is believed that the non-uniform flow effects may prove important in transonic flutter, and that in such cases the use of doublet-type solutions of the wave equation would prove to be erroneous. The second task covered an initial investigation into the use of the Monte Carlo method for the solution of acoustical field problems. Computed results are given for a rectangular room problem, and for a problem involving a circular duct with a source located at the closed end.
Bianco, Federica B; Oh, Seung Man; Fierroz, David; Liu, Yuqian; Kewley, Lisa; Graur, Or
2015-01-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity scales, based on the original IDL code of Kewley & Dopita (2002) with updates from Kewley & Ellison (2008), and expanded to include more recently developed scales. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo (MC) sampling, better characterizes the statistical reddening-corrected oxygen abundance confidence region. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 66% confidence regions. In additi...
A Monte Carlo Method for Modeling Thermal Damping: Beyond the Brownian-Motion Master Equation
Kurt Jacobs
2009-01-06
The "standard" Brownian motion master equation, used to describe thermal damping, is not completely positive and does not admit a Monte Carlo method, which is important in numerical simulations. To eliminate both these problems one must add a term that generates additional position diffusion. Here we show that one can obtain a completely positive, efficiently solvable model of simple quantum Brownian motion without any extra diffusion. This is achieved by using a stochastic Schroedinger equation (SSE), closely analogous to Langevin's equation, that has no equivalent Markovian master equation. Considering a specific example, we show that this SSE is sensitive to nonlinearities in situations in which the master equation is not, and may therefore be a better model of damping for nonlinear systems.
Wagner, John C [ORNL]; Peplow, Douglas E. [ORNL]; Mosher, Scott W [ORNL]
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
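The consistent source biasing at the heart of CADIS-type methods can be illustrated on a hypothetical one-dimensional attenuation problem (a minimal sketch, not the SCALE/MAVRIC implementation; the biased density q(x) = 2x, the attenuation coefficient, and the detector geometry are invented for illustration). Particles are born from a density shaped toward the detector and carry the weight p(x)/q(x), so the tally stays unbiased while its variance drops:

```python
import math, random

def tally(n, rng, biased=False):
    """Toy 1-D shielding problem: source uniform on [0, 1], detector response
    exp(-mu * (1 - x)) for a particle born at x.  With biasing, births are
    drawn from q(x) = 2x (importance-weighted toward the detector at x = 1)
    and carry weight p(x)/q(x) = 1/(2x) to keep the estimate unbiased."""
    mu = 8.0
    total = total_sq = 0.0
    for _ in range(n):
        if biased:
            x = math.sqrt(rng.random())   # inverse CDF of q(x) = 2x
            w = 1.0 / (2.0 * x)
        else:
            x = rng.random()
            w = 1.0
        score = w * math.exp(-mu * (1.0 - x))
        total += score
        total_sq += score * score
    mean = total / n
    var = total_sq / n - mean * mean
    return mean, var

rng = random.Random(7)
analog = tally(20000, rng)
fw = tally(20000, rng, biased=True)
```

Both estimates converge to the exact answer (1 − e^{-8})/8, but the biased run has a smaller per-history variance; FW-CADIS automates the construction of such biasing from a deterministic adjoint rather than choosing q(x) by hand.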
Uncertainty Quantification of Prompt Fission Neutron Spectra Using the Unified Monte Carlo Method
NASA Astrophysics Data System (ADS)
Rising, M. E.; Talou, P.; Prinja, A. K.
2014-04-01
In the ENDF/B-VII.1 nuclear data library, the existing covariance evaluations of the prompt fission neutron spectra (PFNS) were computed by combining the available experimental differential data with theoretical model calculations, relying on a first-order linear Bayesian approach, the Kalman filter. This approach assumes that the theoretical model response to changes in input model parameters is linear about the a priori central values. While the Unified Monte Carlo (UMC) method remains a Bayesian approach, like the Kalman filter, it does not make any assumption about the linearity of the model response or the shape of the a posteriori distribution of the parameters. By sampling from a distribution centered about the a priori model parameters, the UMC method computes the moments of the a posteriori parameter distribution. As the number of samples increases, the statistical noise in the computed a posteriori moments decreases, and an appropriately converged solution corresponding to the true mean of the a posteriori PDF results. The UMC method has been successfully implemented using both uniform and Gaussian sampling distributions and has been used for the evaluation of the PFNS and its associated uncertainties. While many of the UMC results are similar to the first-order Kalman filter results, significant differences appear when experimental data are excluded from the evaluation process. When experimental data are included, a few small nonlinearities are present in the high outgoing-energy tail of the PFNS.
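The sampling step of UMC can be sketched as importance sampling on a toy problem (schematic only; the one-parameter linear "spectrum" model and all numbers are my assumptions, chosen so the exact Gaussian posterior is known for checking):

```python
import random, math

def umc_moments(prior_mean, prior_sd, data, noise_sd, model, n, rng):
    """Schematic Unified Monte Carlo: sample parameters from a distribution
    centered on the prior, weight each sample by the Gaussian likelihood of
    the data, and accumulate the weighted posterior mean and variance.  No
    linearity of the model response is assumed anywhere."""
    wsum = m1 = m2 = 0.0
    for _ in range(n):
        theta = rng.gauss(prior_mean, prior_sd)
        chi2 = sum((d - y) ** 2 for d, y in zip(data, model(theta))) / noise_sd**2
        w = math.exp(-0.5 * chi2)
        wsum += w
        m1 += w * theta
        m2 += w * theta * theta
    mean = m1 / wsum
    return mean, m2 / wsum - mean * mean

# Toy model linear in one parameter, so the exact Gaussian posterior
# (mean 20/21, variance 1/21 for this prior and data) is known.
model = lambda t: [t, 2.0 * t]
rng = random.Random(3)
mean, var = umc_moments(0.0, 1.0, data=[1.0, 2.0], noise_sd=0.5,
                        model=model, n=50000, rng=rng)
```

With a nonlinear `model` the same loop runs unchanged, which is precisely the advantage over the first-order Kalman filter.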
Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods
NASA Technical Reports Server (NTRS)
Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.
1994-01-01
Two different approaches, the direct simulation Monte Carlo (DSMC) method, based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various kinds of flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data, both in density and in rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.
Heath, Emily; Seuntjens, Jan [Medical Physics Unit, McGill University, 1650 Cedar Ave., Montreal, H3G 1A4 (Canada)
2006-02-15
In this work we present a method of calculating dose in deforming anatomy where the position and shape of each dose voxel is tracked as the anatomy changes. The EGSnrc/DOSXYZnrc Monte Carlo code was modified to calculate dose in voxels that are deformed according to deformation vectors obtained from a nonlinear image registration algorithm. The defDOSXYZ code was validated by consistency checks and by comparing calculations against DOSXYZnrc calculations. Calculations in deforming phantoms were compared with a dose remapping method employing trilinear interpolation. Dose calculations with the deforming voxels agree with DOSXYZnrc calculations within 1%. In simple deforming rectangular phantoms the trilinear dose remapping method was found to underestimate the dose by up to 29% for a 1.0 cm voxel size within the field, with larger discrepancies in regions of steep dose gradients. The agreement between the two calculation methods improved with smaller voxel size and deformation magnitude. A comparison of dose remapping from Inhale to Exhale in an anatomical breathing phantom demonstrated that dose deformations are underestimated by up to 16% in the penumbra and 8% near the surface with trilinear interpolation.
Zou Yu, E-mail: yzou@Princeton.ED [Department of Chemical Engineering, Princeton University, Princeton, NJ 08544 (United States); Kavousanakis, Michail E., E-mail: mkavousa@Princeton.ED [Department of Chemical Engineering, Princeton University, Princeton, NJ 08544 (United States); Kevrekidis, Ioannis G., E-mail: yannis@Princeton.ED [Department of Chemical Engineering, Princeton University, Princeton, NJ 08544 (United States); Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544 (United States); Fox, Rodney O., E-mail: rofox@iastate.ed [Department of Chemical and Biological Engineering, Iowa State University, Ames, IA 50011 (United States)
2010-07-20
The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
MONTE CARLO EXTENSION OF QUASI-MONTE CARLO. Art B. Owen
Owen, Art
MONTE CARLO EXTENSION OF QUASI-MONTE CARLO. Art B. Owen, Department of Statistics, Stanford University, Stanford CA 94305, U.S.A. ABSTRACT: This paper surveys recent research on using Monte Carlo techniques to improve quasi-Monte Carlo techniques. Randomized quasi-Monte Carlo methods provide a basis for error
Inverse simulation of the lithospheric thermal regime using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Jokinen, Jarkko; Kukkonen, Ilmo T.
1999-06-01
The Monte Carlo inversion method was applied to geothermal lithospheric models of conductive heat transfer in steady-state conditions. A priori models were generated from probability distributions assigned to the thermal conductivity and heat production rate of the models. Corresponding temperature and heat flow density values were calculated numerically, and the modification of the a priori distributions into samples of the a posteriori distributions was done using the Metropolis algorithm as the acceptance rule and surface heat flow density values as the fitting target. Two models were analysed: first, a one-dimensional layered-earth model with three crustal layers and one upper mantle layer, and second, a two-dimensional lithospheric model of the Fennoscandian Shield. The thermal conductivity and heat production rate were either (1) evenly or (2) normally and log-normally distributed in the models. In both cases the results were generally similar in the sense that the same kinds of changes were suggested by the inversion algorithm for conductivity, heat production rate, temperature and heat flow density, although the changes were not identical in detail. The result indicates that the inversion tool is robust and able to reach solutions from relatively loosely constrained a priori parameter estimates. However, the general ambiguity of the geothermal inverse problem influences the results considerably. Monte Carlo inversion can be used to analyse the problem with the aid of the a posteriori frequency distributions of different parameters. Improvement of the results, i.e. shifting of mean values and narrowing of the distributions, was observed in many domains of the models. Deterioration of the parameter estimates was not recorded.
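The Metropolis acceptance step described above can be sketched on a drastically simplified one-layer conduction model (a toy, not the authors' lithospheric code; the layer thickness, basal flux, observation errors, and priors are invented):

```python
import random, math

def forward(k, A, Qb=0.02, H=30000.0, T0=0.0):
    """Steady-state 1-D conduction with uniform heat production A (W/m^3)
    and conductivity k (W/m/K) over a layer of thickness H with basal flux
    Qb.  Returns surface heat-flow density and the temperature at depth H."""
    q0 = Qb + A * H
    T_H = T0 + q0 * H / k - A * H * H / (2.0 * k)
    return q0, T_H

def metropolis(q_obs, T_obs, sig_q, sig_T, n, rng):
    """Random-walk Metropolis over (k, A), fitting the surface heat flow
    and a deep temperature; returns the chain of states."""
    k, A = 2.5, 1.0e-6                       # arbitrary initial guess
    def loglike(k, A):
        q, T = forward(k, A)
        return -0.5 * (((q - q_obs) / sig_q) ** 2 + ((T - T_obs) / sig_T) ** 2)
    ll = loglike(k, A)
    chain = []
    for _ in range(n):
        k2 = k + rng.gauss(0.0, 0.1)
        A2 = A + rng.gauss(0.0, 1.0e-7)
        if k2 > 0.1 and A2 > 0.0:            # zero prior outside bounds
            ll2 = loglike(k2, A2)
            if math.log(rng.random()) < ll2 - ll:
                k, A, ll = k2, A2, ll2
        chain.append((k, A))
    return chain

rng = random.Random(11)
q_true, T_true = forward(3.0, 0.8e-6)        # synthetic "observations"
chain = metropolis(q_true, T_true, sig_q=0.002, sig_T=20.0, n=20000, rng=rng)
```

The histogram of the second half of `chain` plays the role of the a posteriori frequency distributions discussed in the abstract.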
Louis Leon Thurstone in Monte Carlo: creating error bars for the method of paired comparison
NASA Astrophysics Data System (ADS)
Montag, Ethan D.
2003-12-01
The method of paired comparison is often used in experiments where perceptual scale values for a collection of stimuli are desired, such as in experiments analyzing image quality. Thurstone's Case V of his Law of Comparative Judgment is often used as the basis for analyzing data produced in paired comparison experiments. However, methods for determining confidence intervals and critical distances for significant differences based on Thurstone's Law have been elusive, leading some to abandon the simple analysis provided by Thurstone's formulation. In order to provide insight into this problem of determining error, Monte Carlo simulations of paired comparison experiments were performed based on the assumptions of uniformly normal, independent, and uncorrelated responses to stimulus pair presentations. The results from these multiple simulations show that the variation in the distribution of experimental results of paired comparison experiments can be well predicted as a function of stimulus number and the number of observations. Using these results, confidence intervals and critical values for comparisons can be computed using traditional statistical methods. In addition, the results from simulations can be used to assess goodness-of-fit techniques.
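Such a simulation can be sketched as follows (an illustrative reconstruction under the stated Case V assumptions, not Montag's code; the stimulus values, observation counts, and the 1%/99% clipping of extreme proportions are my choices):

```python
import random
from statistics import NormalDist, stdev

def simulate_scale(true_scales, n_obs, rng):
    """One paired-comparison experiment under Thurstone Case V: each trial
    draws independent unit-variance normal responses for the pair, and the
    win proportions are converted back to scale values by the inverse
    normal transform (row-averaged z-scores, zero-meaned)."""
    nd = NormalDist()
    k = len(true_scales)
    z = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            if i == j:
                continue
            wins = sum(rng.gauss(true_scales[i], 1) > rng.gauss(true_scales[j], 1)
                       for _ in range(n_obs))
            p = min(max(wins / n_obs, 0.01), 0.99)   # clip 0/1 proportions
            z[i][j] = nd.inv_cdf(p)
    scales = [sum(row) / k for row in z]
    mean = sum(scales) / k
    return [s - mean for s in scales]

# Monte Carlo error bars: repeat the whole experiment and look at the
# spread of the recovered scale value for each stimulus.
rng = random.Random(5)
true = [0.0, 0.5, 1.0, 1.5]
runs = [simulate_scale(true, n_obs=100, rng=rng) for _ in range(200)]
spread = [stdev(r[i] for r in runs) for i in range(len(true))]
```

Because each comparison is a difference of two unit-variance normals, the recovered scales come out in units of σ√2; `spread` as a function of stimulus count and `n_obs` is exactly the kind of table the paper derives.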
Histogram analysis as a method for determining the line tension by Monte-Carlo simulations
Yuri Djikaev
2004-11-15
A method is proposed for determining the line tension, which is the main physical characteristic of a three-phase contact region, by Monte Carlo (MC) simulations. The key idea of the proposed method is that if a three-phase equilibrium involves a three-phase contact region, the probability distribution of states of a system as a function of two order parameters depends not only on the surface tension, but also on the line tension. This probability distribution can be obtained as a normalized histogram by appropriate MC simulations, so one can use the combination of histogram analysis and finite-size scaling to study the properties of a three-phase contact region. Every histogram and the results extracted therefrom will depend on the size of the simulated system. Carrying out MC simulations for a series of system sizes and extrapolating the results, obtained from the corresponding series of histograms, to infinite size, one can determine the line tension of the three-phase contact region and the interfacial tensions of all three interfaces (and hence the contact angles) in an infinite system. To illustrate the proposed method, it is applied to a three-dimensional ternary fluid mixture, in which molecular pairs of like species do not interact whereas those of unlike species interact as hard spheres. The simulated results are in agreement with expectations.
Statistical Properties of Nuclei by the Shell Model Monte Carlo Method
Y. Alhassid
2006-04-26
We use quantum Monte Carlo methods in the framework of the interacting nuclear shell model to calculate the statistical properties of nuclei at finite temperature and/or excitation energies. With this approach we can carry out realistic calculations in much larger configuration spaces than are possible by conventional methods. A major application of the methods has been the microscopic calculation of nuclear partition functions and level densities, taking into account both correlations and shell effects. Our results for nuclei in the mass region A ~ 50 - 70 are in remarkably good agreement with experimental level densities without any adjustable parameters and are an improvement over empirical formulas. We have recently extended the shell model theory of level statistics to higher temperatures, including continuum effects. We have also constructed simple statistical models to explain the dependence of the microscopically calculated level densities on good quantum numbers such as parity. Thermal signatures of pairing correlations are identified through odd-even effects in the heat capacity.
Density-of-states based Monte Carlo methods for simulation of biological systems
NASA Astrophysics Data System (ADS)
Rathore, Nitin; Knotts, Thomas A.; de Pablo, Juan J.
2004-03-01
We have developed density-of-states [1] based Monte Carlo techniques for simulation of biological molecules. Two such methods are discussed. The first, Configurational Temperature Density of States (CTDOS) [2], relies on computing the density of states of a peptide system from knowledge of its configurational temperature. The reciprocal of this intrinsic temperature, computed from instantaneous configurational information of the system, is integrated to arrive at the density of states. The method shows improved efficiency and accuracy over techniques that are based on histograms of random visits to distinct energy states. The second approach, Expanded Ensemble Density of States (EXEDOS), incorporates elements from both the random walk method and the expanded ensemble formalism. It is used in this work to study mechanical deformation of model peptides. Results are presented in the form of force-extension curves and the corresponding potentials of mean force. The application of this proposed technique is further generalized to other biological systems; results will be presented for ion transport through protein channels, base stacking in nucleic acids and hybridization of DNA strands. [1]. F. Wang and D. P. Landau, Phys. Rev. Lett., 86, 2050 (2001). [2]. N. Rathore, T. A. Knotts IV and J. J. de Pablo, Biophys. J., Dec. (2003).
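The Wang-Landau flat-histogram walk cited as reference [1], on which these density-of-states methods build, can be sketched for a small 1-D Ising ring (illustrative only; the system, batch length, and flatness criterion are arbitrary choices, not those of the paper):

```python
import math, random

def wang_landau(n_spins, f_final=1e-4, flat=0.8, rng=None):
    """Wang-Landau flat-histogram walk for a 1-D Ising ring: the running
    estimate ln g(E) of the density of states is raised at each visited
    energy, and the modification factor ln f is halved whenever the
    cumulative visit histogram over all energy levels is flat enough."""
    rng = rng or random.Random()
    spins = [1] * n_spins
    E = -n_spins                        # energy of the all-up state
    n_levels = n_spins // 2 + 1         # distinct energies of the ring
    lng, hist = {}, {}
    lnf = 1.0
    while lnf > f_final:
        for _ in range(1000):
            i = rng.randrange(n_spins)
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
            E2 = E + dE
            # accept with probability min(1, g(E)/g(E2))
            if math.log(rng.random()) < lng.get(E, 0.0) - lng.get(E2, 0.0):
                spins[i] = -spins[i]
                E = E2
            lng[E] = lng.get(E, 0.0) + lnf
            hist[E] = hist.get(E, 0) + 1
        if len(hist) == n_levels and \
           min(hist.values()) > flat * sum(hist.values()) / n_levels:
            lnf /= 2.0
            hist = {}
    return lng

lng = wang_landau(8, rng=random.Random(2))
```

For the 8-spin ring the exact degeneracies are known (g(0)/g(−8) = 70), so the accuracy of the estimated ln g differences can be checked directly; CTDOS and EXEDOS replace the visit-histogram update with configurational-temperature and expanded-ensemble machinery, respectively.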
Yamamoto, K.; Hashizume, K.; Wada, T.; Ohta, M.; Suda, T. [Department of Physics, Konan University, 8-9-1 Okamoto, Kobe 653-8501 (Japan); Nishimura, T.; Fujimoto, M. Y.; Kato, K. [Division of Science, Hokkaido University, Sapporo 060-0810 (Japan); Meme Media Laboratory, Hokkaido University, Sapporo 060-0813 (Japan); Aikawa, M. [Institut d'Astronomie et d'Astrophysique, C.P.226, Universite Libre de Bruxelles, B-1050 Brussels (Belgium)
2006-07-12
We propose a Monte Carlo method to study the reaction paths in nucleosynthesis during stellar evolution. Determination of reaction paths is important for obtaining the physical picture of stellar evolution. The combination of network calculation and our method gives us a better understanding of the physical picture. We apply our method to the case of the helium shell flash model in an extremely metal-poor star.
Viviana Fanti; Roberto Marzeddu; Callisto Pili; Paolo Randaccio; Sabyasachi Siddhanta; Jenny Spiga; Artur Szostak
2009-01-01
This work describes a fast Monte Carlo machine for dose calculation in radiotherapy treatment planning on FPGA-based hardware. When performing Monte Carlo simulations of the radiation dose delivered to the human body, the Compton interaction is simulated. The inputs to the system are the energy and the normalized direction vectors of the incoming photon. The energy and the direction
J. J. DeMarco; C. H. Cagnon; D. D. Cody; D. M. Stevens; C. H. McCollough; J. O'Daniel; M. F. McNitt-Gray
2005-01-01
The purpose of this work was to extend the verification of Monte Carlo based methods for estimating radiation dose in computed tomography (CT) exams beyond a single CT scanner to a multidetector CT (MDCT) scanner, and from cylindrical CTDI phantom measurements to both cylindrical and physical anthropomorphic phantoms. Both cylindrical and physical anthropomorphic phantoms were scanned on an MDCT under
Bendele, Travis Henry
2013-02-22
A honeycomb probe was designed to measure the optical properties of biological tissues using the single Monte Carlo method. The ongoing project is intended to be a multi-wavelength, real-time, in-vivo technique to detect breast cancer. Preliminary...
Kalugin, M. A.; Oleynik, D. S.; Sukhino-Khomenko, E. A., E-mail: sukhino-khomenko@adis.vver.kiae.ru [National Research Centre Kurchatov Institute (Russian Federation)
2012-12-15
The algorithms of estimation of the time series correlation functions in nuclear reactor calculations using the Monte Carlo method are described. Correlation functions are used for the estimation of biases, for calculations of variance taking into account the correlations between neutron generations, and for choosing skipped generations.
Jean-Pierre Fouque; Chuan-Hsiang Han
2004-01-01
We present variance reduction methods for Monte Carlo simulations to evaluate European and Asian options in the context of multiscale stochastic volatility models. European option price approximations, obtained from singular and regular perturbation analysis [Fouque J P, Papanicolaou G, Sircar R and Solna K 2003 Multiscale stochastic volatility asymptotics, SIAM J. Multiscale Modeling and Simulation 2], are used in importance sampling
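As a generic illustration of variance reduction in Monte Carlo option pricing (a control variate under constant volatility, not the paper's multiscale importance-sampling scheme; all market parameters are invented):

```python
import math, random

def mc_call(S0, K, r, sigma, T, n, rng, control=True):
    """European call by Monte Carlo under constant-volatility GBM.  The
    discounted terminal price, whose expectation is exactly S0 under the
    risk-neutral measure, serves as a control variate."""
    disc = math.exp(-r * T)
    payoffs, controls = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        payoffs.append(disc * max(ST - K, 0.0))
        controls.append(disc * ST)
    mean_p = sum(payoffs) / n
    if not control:
        return mean_p
    mean_c = sum(controls) / n
    cov = sum((p - mean_p) * (c - mean_c) for p, c in zip(payoffs, controls)) / n
    var_c = sum((c - mean_c) ** 2 for c in controls) / n
    beta = cov / var_c                      # optimal control coefficient
    return mean_p - beta * (mean_c - S0)    # E[disc * ST] = S0 exactly

rng = random.Random(9)
price = mc_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, n=50000, rng=rng)
```

The paper's approach replaces this generic control with importance sampling driven by analytic price approximations, which is far more effective when volatility is stochastic.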
Shimkin, Nahum
3 Variance Reduction Methods, I. We return to the problem of Monte-Carlo integration. 3.1 Monitoring the Estimation Error. Before going into variance reduction, let us discuss briefly how to monitor the estimation error: MSE(θ̂_N) = Var(θ̂_N) = ··· = (1/N) Var(H(X)). (Note that the MSE and the variance are the same here since the estimator is unbiased.)
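The quantity being monitored, the standard error sqrt(Var(H(X))/N), can be estimated alongside the sample mean itself, as in this minimal sketch (the uniform example is my own):

```python
import math, random

def mc_estimate(h, sampler, n, rng):
    """Plain Monte Carlo estimate of E[H(X)] together with its estimated
    standard error sqrt(Var(H)/N), the quantity variance-reduction
    methods aim to shrink for a fixed sample size."""
    vals = [h(sampler(rng)) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var / n)

# Example: E[X^2] for X uniform on [0, 1] is exactly 1/3.
rng = random.Random(4)
mean, se = mc_estimate(lambda x: x * x, lambda r: r.random(), 100000, rng)
```

Any variance-reduction technique applied inside `h` or `sampler` shows up directly as a smaller `se` for the same `n`.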
Straub, John E.
On Monte Carlo and molecular dynamics methods inspired by Tsallis statistics: Methodology. A generalized statistical distribution derived from a modification of the Gibbs-Shannon entropy proposed by Tsallis ... of the phase space may result in distinct time averages. Statistical theories of chemical systems are often
Da, B.; Sun, Y.; Ding, Z. J. [Hefei National Laboratory for Physical Sciences at Microscale and Department of Physics, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026, People's Republic of China (China)]; Mao, S. F. [School of Nuclear Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026, People's Republic of China (China)]; Zhang, Z. M. [Centre of Physical Experiments, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026, People's Republic of China (China)]; Jin, H.; Yoshikawa, H.; Tanuma, S. [Advanced Surface Chemical Analysis Group, National Institute for Materials Science, 1-2-1 Sengen Tsukuba, Ibaraki 305-0047 (Japan)]
2013-06-07
A reverse Monte Carlo (RMC) method is developed to obtain the energy loss function (ELF) and optical constants from a measured reflection electron energy-loss spectroscopy (REELS) spectrum by an iterative Monte Carlo (MC) simulation procedure. The method combines the simulated annealing method, i.e., a Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, surface and bulk excitation weighting factors, and band gap energy, with a conventional MC simulation of electron interaction with solids, which acts as a single step of MCMC sampling in this RMC method. To examine the reliability of this method, we have verified that the output data of the dielectric function are essentially independent of the initial values of the trial parameters, which is a basic property of a MCMC method. The optical constants derived for SiO2 in the energy loss range of 8-90 eV are in good agreement with other available data, and relevant bulk ELFs are checked by oscillator strength-sum and perfect-screening-sum rules. Our results show that the dielectric function can be obtained by the RMC method even with a wide range of initial trial parameters. The RMC method is thus a general and effective method for determining the optical properties of solids from REELS measurements.
A Sequential Monte Carlo Method for Bayesian Analysis of Massive Datasets
Ridgeway, Greg; Madigan, David
2009-01-01
Markov chain Monte Carlo (MCMC) techniques revolutionized statistical practice in the 1990s by providing an essential toolkit for making the rigor and flexibility of Bayesian analysis computationally practical. At the same time the increasing prevalence of massive datasets and the expansion of the field of data mining has created the need for statistically sound methods that scale to these large problems. Except for the most trivial examples, current MCMC methods require a complete scan of the dataset for each iteration eliminating their candidacy as feasible data mining techniques. In this article we present a method for making Bayesian analysis of massive datasets computationally feasible. The algorithm simulates from a posterior distribution that conditions on a smaller, more manageable portion of the dataset. The remainder of the dataset may be incorporated by reweighting the initial draws using importance sampling. Computation of the importance weights requires a single scan of the remaining observations. While importance sampling increases efficiency in data access, it comes at the expense of estimation efficiency. A simple modification, based on the “rejuvenation” step used in particle filters for dynamic systems models, sidesteps the loss of efficiency with only a slight increase in the number of data accesses. To show proof-of-concept, we demonstrate the method on two examples. The first is a mixture of transition models that has been used to model web traffic and robotics. For this example we show that estimation efficiency is not affected while offering a 99% reduction in data accesses. The second example applies the method to Bayesian logistic regression and yields a 98% reduction in data accesses. PMID:19789656
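The reweighting step can be sketched on a conjugate toy model (a normal mean with known variance and flat prior, chosen so the chunk posterior can be sampled exactly; this is an illustration of the idea, not the authors' implementation, and it omits the rejuvenation step):

```python
import math, random

def reweighted_posterior(data, n_init, n_particles, rng, sigma=1.0):
    """Sketch of the massive-dataset idea on a normal-mean model: sample
    the posterior given the first n_init points exactly, then fold in the
    remaining points with a single scan of importance weights."""
    head, tail = data[:n_init], data[n_init:]
    m0 = sum(head) / len(head)
    s0 = sigma / math.sqrt(len(head))
    particles = [rng.gauss(m0, s0) for _ in range(n_particles)]
    logw = [0.0] * n_particles
    for x in tail:                      # one scan of the remaining data
        for i, th in enumerate(particles):
            logw[i] += -0.5 * ((x - th) / sigma) ** 2
    mx = max(logw)                      # stabilize the exponentiation
    w = [math.exp(l - mx) for l in logw]
    wsum = sum(w)
    mean = sum(wi * th for wi, th in zip(w, particles)) / wsum
    ess = wsum ** 2 / sum(wi * wi for wi in w)
    return mean, ess

rng = random.Random(6)
data = [rng.gauss(2.0, 1.0) for _ in range(2000)]
mean, ess = reweighted_posterior(data, n_init=500, n_particles=1000, rng=rng)
```

The effective sample size `ess` measures the estimation efficiency lost to importance sampling; when it collapses, the rejuvenation step described in the abstract refreshes the particle set.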
Asteroid orbital inversion using a virtual-observation Markov-chain Monte Carlo method
NASA Astrophysics Data System (ADS)
Muinonen, Karri; Granvik, Mikael; Oszkiewicz, Dagmara; Pieniluoma, Tuomo; Pentikäinen, Hanna
2012-12-01
A novel virtual-observation Markov-chain Monte Carlo (MCMC) method is presented for the asteroid orbital inverse problem posed by small to moderate numbers of astrometric observations. In the method, the orbital-element proposal probability density is chosen to mimic the convolution of the a posteriori density with itself: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, least-squares orbital elements are derived for the virtual observations using the Nelder-Mead downhill simplex method; third, repeating the procedure gives a difference between two sets of what can be called virtual least-squares elements; and, fourth, the difference obtained constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal density. In practice, the proposals are based on a large number of pre-computed sets of orbital elements. Virtual-observation MCMC is thus based on the characterization of the phase-space volume of solutions before the actual MCMC sampling. Virtual-observation MCMC is compared to MCMC orbital ranging, a random-walk Metropolis-Hastings algorithm based on sampling with the help of Cartesian positions at two observation dates, in the case of the near-Earth asteroid (85640) 1998 OX4. In the present preliminary comparison, the methods yield similar results for a 9.1-day observational time interval extracted from the full current astrometry of the asteroid. In the future, both of the methods are to be applied to the astrometric observations of the Gaia mission.
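The four-step proposal construction can be sketched on a generic least-squares problem (a straight-line fit standing in for the orbital-element fit; the data, noise level, and pre-computed set size are invented, and the Nelder-Mead step is replaced by a closed-form fit):

```python
import math, random

def lsq_line(xs, ys):
    """Least-squares intercept/slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def virtual_obs_mcmc(xs, ys, sigma, n_steps, n_pre, rng):
    """Virtual-observation proposals: pre-compute least-squares solutions
    for noise-perturbed copies of the data; the difference of two such
    solutions is a symmetric random-walk proposal for Metropolis-Hastings,
    so the proposal density never needs to be evaluated."""
    pre = []
    for _ in range(n_pre):
        vy = [y + rng.gauss(0.0, sigma) for y in ys]   # virtual observations
        pre.append(lsq_line(xs, vy))
    def loglike(a, b):
        return -0.5 * sum(((y - a - b * x) / sigma) ** 2
                          for x, y in zip(xs, ys))
    a, b = lsq_line(xs, ys)
    ll = loglike(a, b)
    chain = []
    for _ in range(n_steps):
        (a1, b1), (a2, b2) = rng.sample(pre, 2)
        a_new, b_new = a + (a1 - a2), b + (b1 - b2)    # symmetric difference
        ll2 = loglike(a_new, b_new)
        if math.log(rng.random()) < ll2 - ll:
            a, b, ll = a_new, b_new, ll2
        chain.append((a, b))
    return chain

rng = random.Random(8)
xs = [float(i) for i in range(10)]
ys = [1.0 + 0.5 * x + rng.gauss(0.0, 0.3) for x in xs]
chain = virtual_obs_mcmc(xs, ys, 0.3, n_steps=5000, n_pre=300, rng=rng)
```

Because the differences of virtual least-squares solutions automatically have roughly the scale and correlation structure of the posterior, the proposal needs no hand tuning, which is the attraction of the method for poorly constrained orbits.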
Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods
Godfrey, Andrew T [ORNL; Gehin, Jess C [ORNL; Bekar, Kursat B [ORNL; Celik, Cihangir [ORNL
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.
Adjoint-based deviational Monte Carlo methods for phonon transport calculations
NASA Astrophysics Data System (ADS)
Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.
2015-06-01
In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.
Application of Quantum Monte Carlo Methods to Describe the Properties of Manganese Oxide Polymorphs
NASA Astrophysics Data System (ADS)
Schiller, Joshua; Ertekin, Elif
2015-03-01
First-principles descriptions of the properties of correlated materials such as transition metal oxides have been a long-standing challenge. Manganese oxide is one such example: according to both conventional and hybrid functional density functional theory, the zinc blende polymorph is predicted to be lower in energy than the rock salt polymorph that occurs in nature. While the correct energy ordering can be obtained in density functional approaches by careful selection of modeling parameters, we present here an alternative approach based on quantum Monte Carlo methods, which are a suite of stochastic tools for solution of the many-body Schrödinger equation. Due to its direct treatment of electron correlation, the QMC method offers the possibility of parameter-free, high-accuracy, systematically improvable analysis. In manganese oxide, we find that the QMC methodology is able to accurately reproduce relative phase energies, lattice constants, and band gaps without the use of adjustable parameters. Additionally, statistical analysis of the many-body wave functions from QMC provides some diagnostic assessments to reveal the physics that may be missing from other modeling approaches.
Cu-Au alloys using Monte Carlo simulations and the BFS method for alloys
Bozzolo, G. [Analex Corp., Brook Park, OH (United States); Good, B.; Ferrante, J. [National Aeronautics and Space Administration, Cleveland, OH (United States). Lewis Research Center
1996-12-31
Semiempirical methods have shown considerable promise in aiding the calculation of many properties of materials. Materials used in engineering applications have defects that occur for various reasons, including processing. In this work the authors present the first application of the BFS (Bozzolo, Ferrante and Smith) method for alloys to describe some aspects of microstructure due to processing for the Cu-Au system (CuAu, CuAu3, and Cu3Au). They use finite-temperature Monte Carlo calculations in order to show the influence of heat treatment on the low-temperature phase of the alloy. Although relatively simple, this system has enough features that it can serve as a first test of the reliability of the technique. The main questions to be answered in this work relate to the existence of low-temperature ordered structures at specific concentrations, for example, the ability to distinguish between rather similar phases for equiatomic alloys (CuAu I and CuAu II, the latter characterized by an antiphase boundary separating two identical phases).
MC-Net: a method for the construction of phylogenetic networks based on the Monte-Carlo method
2010-01-01
Background A phylogenetic network is a generalization of phylogenetic trees that allows the representation of conflicting signals or alternative evolutionary histories in a single diagram. There are several methods for constructing these networks, some of which are based on distances among taxa. In practice, distance-based methods run faster than the alternatives. The Neighbor-Net (N-Net) is a distance-based method: it produces a circular ordering from a distance matrix, then constructs a collection of weighted splits using that ordering. SplitsTree, a program that uses these weighted splits, then draws the phylogenetic network. In general, finding an optimal circular ordering is an NP-hard problem; N-Net is a heuristic algorithm for this task based on the neighbor-joining algorithm. Results In this paper, we present a heuristic algorithm for finding an optimal circular ordering based on the Monte-Carlo method, called the MC-Net algorithm. To show that MC-Net performs better than N-Net, we apply both algorithms to different data sets, draw the phylogenetic networks corresponding to their outputs using SplitsTree, and compare the results. Conclusions We find that the circular ordering produced by MC-Net is closer to the optimal circular ordering than that of N-Net. Furthermore, the networks corresponding to the outputs of MC-Net drawn by SplitsTree are simpler than those of N-Net. PMID:20727135
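A Monte Carlo search over circular orderings can be illustrated in miniature. The sketch below is a drastic simplification of MC-Net: the score is simply the sum of distances between adjacent taxa on the circle, moves are pairwise swaps, and only improving (or equal) moves are kept; the real algorithm's objective and move set differ.

```python
import random

def mc_circular_ordering(dist, n_iter=20000, seed=0):
    """Toy Monte Carlo search for a circular ordering of taxa that
    minimizes the sum of distances between circularly adjacent taxa.
    `dist` is a symmetric distance matrix (list of lists)."""
    rng = random.Random(seed)
    n = len(dist)
    order = list(range(n))

    def score(o):
        return sum(dist[o[i]][o[(i + 1) % n]] for i in range(n))

    best = score(order)
    for _ in range(n_iter):
        i, j = rng.sample(range(n), 2)
        order[i], order[j] = order[j], order[i]   # propose a swap
        s = score(order)
        if s <= best:
            best = s                               # keep improving/equal moves
        else:
            order[i], order[j] = order[j], order[i]  # revert
    return order, best
```

For taxa whose distances come from points on a line, the optimal circular ordering visits the points in sorted order, giving a tour length of exactly twice the span, which makes the toy easy to sanity-check.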
Variational Monte Carlo Methods for Strongly Correlated Quantum Systems on Multileg Ladders
NASA Astrophysics Data System (ADS)
Block, Matthew S.
Quantum mechanical systems of strongly interacting particles in two dimensions comprise a realm of condensed matter physics for which there remain many unanswered theoretical questions. In particular, the most formidable challenges may lie in cases where the ground states show no signs of ordering, break no symmetries, and support many gapless excitations. Such systems are known to exhibit exotic, disordered ground states that are notoriously difficult to study analytically using traditional perturbation techniques or numerically using the most recent methods (e.g., tensor network states) due to the large amount of spatial entanglement. Slave particle descriptions provide a glimmer of hope in the attempt to capture the fundamental, low-energy physics of these highly non-trivial phases of matter. To this end, this dissertation describes the construction and implementation of trial wave functions for use with variational Monte Carlo techniques that can easily model slave particle states. While these methods are extremely computationally tractable in two dimensions, we have applied them here to quasi-one-dimensional systems so that the results of other numerical techniques, such as the density matrix renormalization group, can be directly compared to those determined by the trial wave functions and so that exclusively one-dimensional analytic approaches, namely bosonization, can be employed. While the focus here is on the use of variational Monte Carlo, the sum of these different numerical and analytical tools has yielded a remarkable amount of insight into several exotic quantum ground states. In particular, the results of research on the d-wave Bose liquid phase, an uncondensed state of strongly correlated hard-core bosons living on the square lattice whose wave function exhibits a d-wave sign structure, and the spin Bose-metal phase, a spin-1/2, SU(2) invariant spin liquid of strongly correlated spins living on the triangular lattice, will be presented. 
Both phases support gapless excitations along surfaces in momentum space in two spatial dimensions and at incommensurate wave vectors in quasi-one dimension, where we have studied them on three- and four-leg ladders. An extension of this work to the study of d-wave correlated itinerant electrons will be discussed.
Quantifying uncertainties in pollutant mapping studies using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Tan, Yi; Robinson, Allen L.; Presto, Albert A.
2014-12-01
Routine air monitoring provides accurate measurements of annual average concentrations of air pollutants, but the low density of monitoring sites limits its capability in capturing intra-urban variation. Pollutant mapping studies measure air pollutants at a large number of sites during short periods. However, their short duration can cause substantial uncertainty in reproducing annual mean concentrations. In order to quantify this uncertainty for existing sampling strategies and investigate methods to improve future studies, we conducted Monte Carlo experiments with nationwide monitoring data from the EPA Air Quality System. Typical fixed sampling designs have much larger uncertainties than previously assumed, and produce accurate estimates of annual average pollution concentrations approximately 80% of the time. Mobile sampling has difficulties in estimating long-term exposures for individual sites, but performs better for site groups. The accuracy and the precision of a given design decrease when data variation increases, indicating challenges at sites intermittently impacted by local sources such as traffic. Correcting measurements with reference sites does not completely remove the uncertainty associated with short-duration sampling. Using reference sites with the addition method can better account for temporal variations than the multiplication method. We propose feasible methods for future mapping studies to reduce uncertainties in estimating annual mean concentrations. Future fixed sampling studies should conduct two separate 1-week-long sampling periods in all 4 seasons. Mobile sampling studies should estimate annual mean concentrations for exposure groups with five or more sites. Fixed and mobile sampling designs have comparable probabilities in ordering two sites, so they may have similar capabilities in predicting pollutant spatial variations.
Simulated sampling designs have large uncertainties in reproducing seasonal and diurnal variations at individual sites, but are capable of predicting these variations for exposure groups.
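The Monte Carlo experiment described above can be mimicked on synthetic data: repeatedly draw short sampling windows from a daily concentration series and record how often the short-duration estimate falls within a tolerance of the true annual mean. The sketch below is illustrative only; the window length, tolerance, and data are hypothetical, not the paper's settings.

```python
import random, math

def subsample_uncertainty(daily, n_weeks=2, n_trials=2000, tol=0.25, seed=0):
    """Monte Carlo estimate of how often a short-duration design
    (n_weeks random week-long windows) reproduces the annual mean of
    `daily` within a fractional tolerance `tol`."""
    rng = random.Random(seed)
    true_mean = sum(daily) / len(daily)
    hits = 0
    for _ in range(n_trials):
        vals = []
        for _ in range(n_weeks):
            start = rng.randrange(len(daily) - 6)  # random week start
            vals.extend(daily[start:start + 7])
        est = sum(vals) / len(vals)
        if abs(est - true_mean) <= tol * true_mean:
            hits += 1
    return hits / n_trials
```

With a strongly seasonal series the hit rate drops well below 1, which is the paper's central point: short campaigns sample only part of the annual cycle.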
Shepherd, James J; Booth, George H; Alavi, Ali
2012-06-28
Using the homogeneous electron gas (HEG) as a model, we investigate the sources of error in the "initiator" adaptation to full configuration interaction quantum Monte Carlo (i-FCIQMC), with a view to accelerating convergence. In particular, we find that the fixed-shift phase, where the walker number is allowed to grow slowly, can be used to effectively assess stochastic and initiator error. Using this approach we provide simple explanations for the internal parameters of an i-FCIQMC simulation. We exploit the consistent basis sets and adjustable correlation strength of the HEG to analyze properties of the algorithm, and present finite basis benchmark energies for N = 14 over a range of densities 0.5 ≤ r_s ≤ 5.0 a.u. A single-point extrapolation scheme is introduced to produce complete basis energies for 14, 38, and 54 electrons. It is empirically found that, in the weakly correlated regime, the computational cost scales linearly with the plane wave basis set size, which is justifiable on physical grounds. We expect the fixed-shift strategy to reduce the computational cost of many i-FCIQMC calculations of weakly correlated systems. In addition, we provide benchmarks for the electron gas, to be used by other quantum chemical methods in exploring periodic solid state systems. PMID:22755559
Numerical simulation of pulsed neutron induced gamma log using Monte Carlo method
NASA Astrophysics Data System (ADS)
Byeongho, Byeongho; Hwang, Seho; Shin, Jehyun; Park, Chang Je; Kim, Jongman; Kim, Ki-Seog
2015-04-01
Recently, the neutron-induced gamma log has come to play a key role in shale plays. This study was performed to understand the energy spectrum characteristics of the neutron-induced gamma log using the Monte Carlo method. A neutron generator emitting 14 MeV neutrons was used. Thermal-neutron and capture-gamma fluxes were calculated at detectors arranged at 10 cm intervals from the neutron generator. Sandstone, limestone, granite, and basalt were selected to estimate and simulate response characteristics using MCNP. A design for reducing the effects of natural gamma radiation (K, Th, U) and backscattering was also applied to the sonde model in MCNP. Analysis of the capture-gamma energy spectra recorded by the detector in the numerical sonde model showed that atoms that have large neutron-capture cross sections and are abundant in the formation, such as calcium, iron, silicon, magnesium, aluminium, and hydrogen, were detected. These results can help in designing the optimal array of neutron and capture-gamma detectors.
Kinetic Monte Carlo method for dislocation migration in the presence of solute
Deo, Chaitanya S.; Srolovitz, David J.; Cai Wei; Bulatov, Vasily V. [Princeton Materials Institute, Princeton University, Princeton, New Jersey 08540 (United States); Department of Materials Science and Engineering, University of Michigan, Ann Arbor, Michigan 48105 (United States); Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, New Jersey 08544 (United States); Princeton Materials Institute, Princeton University, Princeton, New Jersey 08544 (United States); Chemistry and Materials Science Directorate, Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)
2005-01-01
We present a kinetic Monte Carlo method for simulating dislocation motion in alloys within the framework of the kink model. The model considers the glide of a dislocation in a static, three-dimensional solute atom atmosphere. It includes both a description of the short-range interaction between a dislocation core and the solute and long-range solute-dislocation interactions arising from the interplay of the solute misfit and the dislocation stress field. Double-kink nucleation rates are calculated using a first-passage-time analysis that accounts for the subcritical annihilation of embryonic double kinks as well as the presence of solutes. We explicitly consider the case of the motion of a <111>-oriented screw dislocation on a {011} slip plane in body-centered-cubic Mo-based alloys. Simulations yield dislocation velocity as a function of stress, temperature, and solute concentration. The dislocation velocity results are shown to be consistent with existing experimental data and, in some cases, analytical models. Application of this model depends upon the validity of the kink model and the availability of fundamental properties (i.e., single-kink energy, Peierls stress, secondary Peierls barrier to kink migration, single-kink mobility, solute-kink interaction energies, solute misfit), which can be obtained from first-principles calculations and/or molecular-dynamics simulations.
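The kinetic Monte Carlo machinery underlying such simulations is the standard residence-time (BKL/Gillespie) algorithm: select an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. A generic sketch follows; it is not the authors' dislocation code, and `rates_fn` is a hypothetical callback that would, for the dislocation problem, return double-kink nucleation and kink-migration rates for the current configuration.

```python
import math, random

def kmc_run(rates_fn, state0, n_steps, seed=0):
    """Generic residence-time kinetic Monte Carlo loop.
    rates_fn(state) must return a list of (rate, new_state) pairs."""
    rng = random.Random(seed)
    t, state = 0.0, state0
    for _ in range(n_steps):
        events = rates_fn(state)
        total = sum(r for r, _ in events)
        u = rng.random() * total          # pick event proportional to rate
        acc = 0.0
        for r, new_state in events:
            acc += r
            if u <= acc:
                state = new_state
                break
        # exponential waiting time; 1 - random() lies in (0, 1], so log is safe
        t += -math.log(1.0 - rng.random()) / total
    return t, state
```

With a forward rate of 2.0 and backward rate of 1.0 the walker drifts steadily forward, giving an average velocity of (2 − 1)/(2 + 1) steps per unit rate-time.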
Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods
Narita, Y. [Research Institute for Brain and Blood Vessels, Akita City (Japan); Tohoku Univ., Sendai (Japan)]; Eberl, S. [Royal Prince Alfred Hospital, Sydney (Australia)]; Nakamura, T. [Tohoku Univ., Sendai (Japan)] [and others]
1996-12-31
Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs -7.2% in myocardium, and -3.7% vs -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
Improving Bayesian analysis for LISA Pathfinder using an efficient Markov Chain Monte Carlo method
NASA Astrophysics Data System (ADS)
Ferraioli, Luigi; Porter, Edward K.; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Gibert, Ferran; Hewitson, Martin; Hueller, Mauro; Karnesis, Nikolaos; Korsakova, Natalia; Nofrarias, Miquel; Plagnol, Eric; Vitale, Stefano
2014-02-01
We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of the LISA Pathfinder satellite. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to LISA Pathfinder data. For this experiment, we return parameter values that are all within ~1 σ of the injected values. When we analyse the accuracy of our parameter estimation in terms of the effect it has on the force per unit mass noise, we find that the induced errors are three orders of magnitude less than the expected experimental uncertainty in the power spectral density.
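The second proposal strategy, jumps in the eigen-space of the Fisher Information matrix, can be sketched for a generic Gaussian-like posterior. This is an illustration, not the LISA Pathfinder pipeline: the annealing stage and log-parameter transformation are omitted, and the step scale is a hypothetical tuning parameter.

```python
import numpy as np

def mh_fisher_jumps(log_post, fisher, x0, n_steps, scale=1.0, seed=0):
    """Metropolis-Hastings sampler whose proposals are Gaussian jumps
    along the eigenvectors of the Fisher Information matrix, with step
    sizes 1/sqrt(eigenvalue) so each direction is stepped in proportion
    to the local posterior width."""
    rng = np.random.default_rng(seed)
    vals, vecs = np.linalg.eigh(fisher)
    sigmas = scale / np.sqrt(vals)            # per-eigendirection step sizes
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = [x.copy()]
    accepted = 0
    for _ in range(n_steps):
        prop = x + vecs @ (sigmas * rng.standard_normal(x.size))
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepted += 1
        chain.append(x.copy())
    return np.array(chain), accepted / n_steps
```

For a correlated Gaussian target the Fisher matrix is the inverse covariance, so the eigen-jumps automatically match the long and short axes of the posterior, which is what raises the acceptance rate relative to coordinate-direction moves.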
Velazquez, L; Castro-Palacio, J C
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010); J. Stat. Mech. (2010)] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989)]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L×L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site qL during the occurrence of the temperature-driven phase transition of this model, whose size dependence seems to follow a power law qL(L) ∝ (1/L)^z with exponent z ≈ 0.26 ± 0.02. We also discuss the compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞. PMID:25871247
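The reweighting idea being extended here is easiest to see in its single-histogram form: samples drawn at inverse temperature beta0 are reweighted by exp(-(beta - beta0)E) to estimate canonical averages at a nearby beta. A minimal sketch (the multihistogram method of Ferrenberg and Swendsen combines several such runs at different temperatures with optimal weights):

```python
import math

def reweight_mean_energy(energies, beta0, beta):
    """Single-histogram reweighting: estimate the mean energy at inverse
    temperature `beta` from energy samples drawn at `beta0`. The max of
    the log-weights is subtracted before exponentiating to avoid
    overflow/underflow."""
    d = beta - beta0
    logw = [-d * e for e in energies]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    z = sum(w)
    return sum(e * wi for e, wi in zip(energies, w)) / z
```

For a two-level system with energies {0, 1} sampled at beta0 = 0 (both levels equally likely), reweighting to beta = 1 must reproduce the exact canonical mean energy 1/(1 + e), which gives a deterministic check.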
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y. [Korea Advanced Institute of Science and Technology - KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of); Yun, S. [Korea Atomic Energy Research Institute - KAERI, 989-111 Daedeok-daero, Yuseong-gu, Daejeon, 305-353 (Korea, Republic of)
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, nine-group macroscopic cross section data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model. The cross section data were generated using MCNP5. The keff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of keff, the 9-group MCNP5/DIF3D result has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Development of a software package for solid-angle calculations using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai
2014-02-01
Solid-angle calculations play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources, and they are often complicated. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, into which a new type of variance reduction technique was integrated. The package, developed under the environment of Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface, in which the visualization function is integrated using OpenGL. One advantage of the proposed software package is that it can calculate, without difficulty, the solid angle subtended by a detector of various geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) at a point, circular or cylindrical source. The results obtained from the proposed software package were compared with those obtained from previous studies and those calculated using Geant4. The comparison shows that the proposed software package can produce accurate solid-angle values with greater computation speed than Geant4.
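For the simplest geometry such a package handles, a point source on the axis of a circular disk detector, the Monte Carlo estimate fits in a few lines and can be checked against the closed form Ω = 2π(1 − d/√(d² + r²)). This sketch is illustrative and is not the package described above, which also covers prisms, cylinders, and extended sources.

```python
import math, random

def solid_angle_mc(radius, dist, n=200_000, seed=0):
    """Monte Carlo solid angle subtended by a coaxial circular disk of
    given radius at distance `dist` from a point source: sample isotropic
    directions (cos(theta) uniform on [-1, 1]; azimuth irrelevant by
    symmetry) and count those that cross the disk plane inside the disk."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)
        if cos_t <= 0.0:
            continue                         # heading away from the disk
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        rho = dist * sin_t / cos_t           # radial hit point on the plane
        if rho <= radius:
            hits += 1
    return 4.0 * math.pi * hits / n          # fraction of sphere x 4*pi
```

The statistical error shrinks as 1/√n; a variance reduction technique like the one integrated into the package would concentrate samples toward the detector instead of sampling the whole sphere.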
HRMC_1.1: Hybrid Reverse Monte Carlo method with silicon and carbon potentials
NASA Astrophysics Data System (ADS)
Opletal, G.; Petersen, T. C.; O'Malley, B.; Snook, I. K.; McCulloch, D. G.; Yarovsky, I.
2011-02-01
The Hybrid Reverse Monte Carlo (HRMC) code models the atomic structure of materials via the use of a combination of constraints including experimental diffraction data and an empirical energy potential. This energy constraint is in the form of either the Environment Dependent Interatomic Potential (EDIP) for carbon and silicon or the original and modified Stillinger-Weber potentials applicable to silicon. In this version, an update is made to correct an error in the EDIP carbon energy calculation routine.
New version program summary
Program title: HRMC version 1.1
Catalogue identifier: AEAO_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 36 991
No. of bytes in distributed program, including test data, etc.: 907 800
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: Any computer capable of running executables produced by the g77 Fortran compiler.
Operating system: Unix, Windows
RAM: Depends on the type of empirical potential used, the number of atoms and which constraints are employed.
Classification: 7.7
Catalogue identifier of previous version: AEAO_v1_0
Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 777
Does the new version supersede the previous version?: Yes
Nature of problem: Atomic modelling using empirical potentials and experimental data.
Solution method: Monte Carlo
Reasons for new version: An error in a term associated with the calculation of energies using the EDIP carbon potential, which resulted in incorrect energies.
Summary of revisions: Fix to correct brackets in the two-body part of the EDIP carbon potential routine.
Additional comments: The code is not standard FORTRAN 77 but includes some additional features, and therefore generates errors when compiled using the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html). Running time: Depends on the type of empirical potential used, the number of atoms and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.
A Randomized Quasi-Monte Carlo Simulation Method for Markov Chains Pierre L'Ecuyer
Tuffin, Bruno
Département d'Informatique et de Recherche Opérationnelle, Université de Montréal, C.P. 6128, Succ. Centre-ville ... Monte Carlo is substantial. The variance can be reduced by factors of several thousand in some cases
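The randomized quasi-Monte Carlo idea behind such variance reductions can be shown in miniature: take a low-discrepancy point set (here a van der Corput sequence), apply independent random shifts modulo 1, and average the resulting estimates. The shifts make the estimator unbiased while the point set keeps the per-shift error far below that of plain Monte Carlo. The point counts and shift counts below are hypothetical choices for illustration.

```python
import random

def van_der_corput(n, base=2):
    """First n points of the van der Corput low-discrepancy sequence."""
    seq = []
    for i in range(n):
        q, denom, x = i, 1.0, 0.0
        while q:
            denom *= base
            q, r = divmod(q, base)
            x += r / denom                  # radical-inverse digit expansion
        seq.append(x)
    return seq

def rqmc_mean(f, n_points=256, n_shifts=32, seed=0):
    """Randomized QMC estimate of the integral of f over [0, 1]:
    average f over the point set under independent random shifts mod 1.
    Returns the overall mean and the per-shift estimates (whose spread
    gives an empirical error bar)."""
    rng = random.Random(seed)
    base_pts = van_der_corput(n_points)
    estimates = []
    for _ in range(n_shifts):
        u = rng.random()
        estimates.append(sum(f((x + u) % 1.0) for x in base_pts) / n_points)
    return sum(estimates) / n_shifts, estimates
```

The same construction extends to Markov chain settings by driving each chain step with a shifted low-discrepancy point instead of an independent uniform, which is the regime where the thousand-fold variance reductions quoted above are observed.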
Monte Carlo and Analytical Methods for Forced Outage Rate Calculations of Peaking Units
Rondla, Preethi 1988-
2012-10-26
(unavailability) of such units. This thesis examines the representation of peaking units using a four-state model and performs the analytical calculations and Monte Carlo simulations to examine whether such a model does indeed represent the peaking units...
Parallel Markov Chain Monte Carlo Methods for Large Scale Statistical Inverse Problems
Wang, Kainan
2014-04-18
but also the uncertainty of these estimations. Markov chain Monte Carlo (MCMC) is a useful technique to sample the posterior distribution and information can be extracted from the sampled ensemble. However, MCMC is very expensive to compute, especially...
Physics-based Predictive Time Propagation Method for Monte Carlo Coupled Depletion Simulations
Johns, Jesse Merlin
2014-12-18
calculation. In the stochastic Monte Carlo simulation, the neutron transport process is simulated similarly to the real physical process. This leads to a pseudo-physical/numerical noise that can be amplified if the simulation is not appropriately converged...
NASA Astrophysics Data System (ADS)
Harries, Tim J.
2015-04-01
We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelization method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion on to, and the growth of, the protostars. We detail the results of extensive testing and benchmarking of the new algorithms.
Capote, Roberto [Nuclear Data Section, International Atomic Energy Agency, P.O. Box 100, Wagramer Strasse 5, A-1400 Vienna (Austria)], E-mail: Roberto.CapoteNoy@iaea.org; Smith, Donald L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, California 92118-3073 (United States)
2008-12-15
The Unified Monte Carlo method (UMC) has been suggested to avoid certain limitations and approximations inherent to the well-known Generalized Least Squares (GLS) method of nuclear data evaluation. This contribution reports on an investigation of the performance of the UMC method in comparison with the GLS method. This is accomplished by applying both methods to simple examples with few input values that were selected to explore various features of the evaluation process that impact upon the quality of an evaluation. Among the issues explored are: i) convergence of UMC results with the number of Monte Carlo histories and the ranges of sampled values; ii) a comparison of Monte Carlo sampling using the Metropolis scheme and a brute force approach; iii) the effects of large data discrepancies; iv) the effects of large data uncertainties; v) the effects of strong or weak model or experimental data correlations; and vi) the impact of ratio data and integral data. Comparisons are also made of the evaluated results for these examples when the input values are first transformed to comparable logarithmic values prior to performing the evaluation. Some general conclusions that are applicable to more realistic evaluation exercises are offered.
S. M. Mesli; M. Habchi; M. Kotbi; H. Xu
2013-03-25
The choice of an appropriate interaction model is among the major disadvantages of conventional methods such as molecular dynamics and Monte Carlo simulations. On the other hand, the so-called reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. The RMC results are, however, accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm which introduces an energy penalty term into the acceptance criterion. This method is referred to as the hybrid reverse Monte Carlo (HRMC) method. The idea of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show good agreement between experimental and calculated characteristics, as well as a meaningful improvement in the partial pair distribution functions. We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or similar systems. We also suggest that HRMC could be useful as a tool for testing interaction potential models, as well as for conventional applications.
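The HRMC acceptance rule can be sketched from the description above: each trial move is scored both by its change in fit to the experimental data and by its change in potential energy, and accepted with a Metropolis-like probability. The weights and argument names below are hypothetical; the actual code's conventions may differ.

```python
import math, random

def hrmc_accept(d_chi2, d_energy, w_chi2, beta, rng):
    """Hybrid Reverse Monte Carlo acceptance test: accept a trial move
    with probability min(1, exp(-(w_chi2*d_chi2 + beta*d_energy))),
    combining the change in fit-to-data cost (d_chi2) with an energy
    penalty (beta*d_energy). Setting beta = 0 recovers plain RMC."""
    cost = w_chi2 * d_chi2 + beta * d_energy
    return cost <= 0.0 or rng.random() < math.exp(-cost)
```

In a full HRMC loop this test would sit inside the move cycle: displace an atom, recompute the model structure factor and the potential energy, and call `hrmc_accept` on the two deltas; the energy term is what suppresses the unphysical configurations responsible for the satellite peaks.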
NASA Astrophysics Data System (ADS)
Massoudieh, A.; Sharifi, S.; Solomon, K.
2012-12-01
The estimation of groundwater age has received increasing attention due to its applications in assessing the sustainability of water withdrawal from the aquifers and evaluating the vulnerability of groundwater resources to near surface or recharge water contamination. In most of the works done in the past, whether a single or multiple tracers used for groundwater dating, the uncertainties in observed concentrations of the tracers and their decay rate constants have been neglected. Furthermore, tracers have been assumed to move at the same speed as the groundwater. In reality some of the radio-tracers or anthropogenic chemicals used for groundwater dating might undergo adsorption and desorption and move with a slower velocity than the groundwater. Also there are uncertainties in the decay rates of synthetic chemicals such as CFCs commonly used for groundwater dating. In this presentation development of a Bayesian modeling approach using Markov Chain Monte Carlo method for estimation of age distribution is described. The model considers the uncertainties in the measured tracer concentrations as well as the parameters affecting the concentration of tracers in the groundwater and provides the frequency distributions of the parameters defining the groundwater age distribution. The model also incorporates the effect of the contribution of dissolution of aquifer minerals in diluting the 14C signature and the uncertainties associated with this process on inferred age distribution parameters. The results of application of the method to data collected at Laselva Biological Station - Costa Rica will also be presented. In this demonstration application, eight different forms of presumed groundwater age distributions have been tested including four single-peak forms and four double-peaked forms assuming the groundwater consisting distinct young and old fractions. 
The performance of these presumed groundwater age forms has been evaluated in terms of their capability to predict tracer concentrations close to the observed values and the level of certainty they provide in the estimation of the age-distribution parameters. The schematic of the hypothetical 2D (vertical) aquifer model
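A random-walk Metropolis sampler of the kind described can be sketched for the simplest case: one steadily decaying tracer and an exponential (single-peak) age distribution, for which the model concentration is c0/(1 + lambda*tau). The flat prior, error model, and all numbers below are illustrative assumptions, not the authors' setup:

```python
import math
import random

def log_posterior(tau, c_obs, lam, sigma):
    """Log-posterior of mean age tau (flat prior on tau > 0) for one
    decaying tracer; for an exponential age distribution the model
    concentration is c0/(1 + lam*tau), with c0 = 1 here."""
    if tau <= 0:
        return -math.inf
    c_model = 1.0 / (1.0 + lam * tau)
    return -0.5 * ((c_obs - c_model) / sigma) ** 2

def sample_mean_age(c_obs, lam=0.05, sigma=0.02, n=20000, step=2.0, seed=1):
    """Random-walk Metropolis over the mean age tau."""
    rng = random.Random(seed)
    tau = 20.0
    lp = log_posterior(tau, c_obs, lam, sigma)
    samples = []
    for _ in range(n):
        prop = tau + rng.gauss(0.0, step)          # symmetric proposal
        lp_prop = log_posterior(prop, c_obs, lam, sigma)
        d = lp_prop - lp
        if d >= 0 or rng.random() < math.exp(d):   # Metropolis rule
            tau, lp = prop, lp_prop
        samples.append(tau)
    return samples

# Synthetic observation consistent with a true mean age of 20 years:
post = sample_mean_age(c_obs=1.0 / (1.0 + 0.05 * 20.0))
```

The returned samples approximate the posterior of the mean age; with several tracers the log-posterior simply accumulates one misfit term per tracer.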
Forward-Weighted CADIS Method for Variance Reduction of Monte Carlo Reactor Analyses
Wagner, John C. [ORNL]; Mosher, Scott W. [ORNL]
2010-01-01
Current state-of-the-art tools and methods used to perform 'real' commercial reactor analyses use high-fidelity transport codes to produce few-group parameters at the assembly level for use in low-order methods applied at the core level. Monte Carlo (MC) methods, which allow detailed and accurate modeling of the full geometry and energy details and are considered the 'gold standard' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the several-decade-old methodology used in current practice. However, the prohibitive computational requirements associated with obtaining fully converged system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. A goal of current research at Oak Ridge National Laboratory (ORNL) is to change this paradigm by enabling the direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome is the slow non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, research has focused on development in the following two areas: (1) a hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The focus of this paper is limited to the first area mentioned above. 
It describes the FW-CADIS method applied to variance reduction of MC reactor analyses and provides initial results for calculating group-wise fluxes throughout a generic 2-D pressurized water reactor (PWR) quarter core model.
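The common thread of importance-driven variance reduction is sampling from a biased distribution and correcting each contribution with a statistical weight. A minimal single-technique sketch, using path-length biasing through a purely absorbing slab (a generic toy problem, not the FW-CADIS algorithm itself):

```python
import math
import random

def transmission(sigma, t, n, sigma_biased=None, seed=0):
    """Estimate exp(-sigma*t), the probability that a particle crosses a
    purely absorbing slab of thickness t. If sigma_biased is given, path
    lengths are drawn from the biased exponential and each crossing
    carries a corrective statistical weight."""
    rng = random.Random(seed)
    s = sigma_biased if sigma_biased is not None else sigma
    # weight carried by a crossing sampled from the biased distribution
    w = math.exp(-sigma * t) / math.exp(-s * t)
    total = 0.0
    for _ in range(n):
        d = -math.log(1.0 - rng.random()) / s  # exponential free path
        if d > t:
            total += w
    return total / n

analog = transmission(sigma=1.0, t=2.0, n=100000)                     # ~ exp(-2)
biased = transmission(sigma=1.0, t=10.0, n=100000, sigma_biased=0.1)  # ~ exp(-10)
```

For the 10 mean-free-path slab, the analog estimator would need roughly a billion histories for a few-percent answer, while the biased estimator reaches it with 1e5; schemes like FW-CADIS automate the choice of such biasing parameters from a deterministic adjoint solution.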
Pazirandeh, Ali; Azizi, Maryam; Farhad Masoudi, S
2006-01-01
Among many conventional techniques, nuclear techniques have been shown to be faster, more reliable, and more effective in detecting explosives. In the present work, neutrons from a 5 Ci Am-Be neutron source placed in a water tank are captured by elements of the soil and of the landmine (TNT), namely (14)N, H, C, and O. The prompt capture gamma-ray spectrum taken by a NaI(Tl) scintillation detector indicates the characteristic photo peaks of the elements in soil and landmine. In the high-energy region of the gamma-ray spectrum, besides the 10.829 MeV peak of (15)N, the single escape (SE) and double escape (DE) peaks are unmistakable photo peaks, which make the detection of concealed explosives possible. The soil has the property of moderating neutrons as well as diffusing the thermal neutron flux. Among the many elements in soil, silicon is among the most abundant, and (29)Si emits a 10.607 MeV prompt capture gamma ray, which makes detection of the 10.829 MeV peak difficult. Monte Carlo simulation was used to adjust the source-target-detector distances and soil moisture content to yield the best result. We therefore applied MCNP4C to a configuration very close to the reality of a landmine concealed in soil.
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D., E-mail: henzen@ams.unibe.ch; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K. [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Berne (Switzerland)]; Neuenschwander, H. [Clinic for Radiation-Oncology, Lindenhofspital Bern, CH-3012 Berne (Switzerland)]; Stampanoni, M. F. M. [Institute for Biomedical Engineering, ETH Zürich and Paul Scherrer Institut, CH-5234 Villigen (Switzerland)]
2014-03-15
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multileaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V95% increased from 90% to 96% and V107% decreased from 8% to nearly 0%. 
For a clinical breast boost irradiation, the MERT plan led to a similar homogeneity in the PTV compared to the standard treatment plan while the mean body dose was lower for the MERT plan. Regarding the second clinical case, a whole breast treatment, MERT resulted in a reduction of the lung volume receiving more than 45% of the prescribed dose when compared to the standard plan. On the other hand, the MERT plan led to a larger low-dose lung volume and a degraded dose homogeneity in the PTV. For the clinical cases evaluated in this work, treatment plans using the BolusECT technique resulted in a more homogeneous PTV and CTV coverage but higher doses to the OARs than the MERT plans. Conclusions: MERT treatments were successfully planned for phantom and clinical cases, applying a newly developed intuitive and efficient forward planning strategy that employs a MC-based electron beam model for pMLC shaped electron beams. It is shown that MERT can lead to a dose reduction in OARs compared to other methods. The process of feathering MERT segments results in an improvement of the dose homogeneity in the PTV.
Somasundaram, E.; Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97332-5902 (United States)
2013-07-01
In this paper, we present the work that has been done to implement variance reduction techniques in Tortilla, a three-dimensional, multigroup Monte Carlo code that works within the framework of the commercial deterministic code Attila. This project aims to develop an integrated hybrid code that seamlessly takes advantage of deterministic and Monte Carlo methods for deep-shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross-section library, and source definitions. Tortilla can also read importance functions (such as the adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods use the results of an adjoint deterministic calculation to bias the particle transport through techniques such as source biasing, survival biasing, transport biasing, and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)
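Of the techniques listed, survival biasing relies on weight cutoffs enforced by Russian roulette to terminate low-weight histories without introducing bias. A minimal sketch (the cutoff and survival weights are illustrative):

```python
import random

def roulette(weight, w_cut=0.1, w_survive=0.5, rng=random.random):
    """Russian roulette for survival biasing: particles below the weight
    cutoff either die or continue with weight w_survive. The expected
    weight is preserved, so the game is unbiased."""
    if weight >= w_cut:
        return weight                      # heavy enough: play no game
    if rng() < weight / w_survive:
        return w_survive                   # survives with restored weight
    return 0.0                             # terminated

# Expected outcome for weight w: (w / w_survive) * w_survive + 0 = w
```

Unbiasedness follows from the expectation above; what roulette buys is that computing time is no longer spent tracking particles whose weight makes them statistically irrelevant.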
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
During extreme-Mach-number reentry into Earth's atmosphere, spacecraft experience hypersonic non-equilibrium flow conditions that dissociate molecules and ionize atoms. Such conditions occur behind a shock wave, leading to high temperatures, which have an adverse effect on the thermal protection system and radar communications. Since the electronic energy levels of gaseous species are strongly excited under high-Mach-number conditions, the radiative contribution to the total heat load can be significant. In addition, the radiative heat source within the shock layer may affect the internal energy distribution of dissociated and weakly ionized gas species and the number density of ablative species released from the vehicle surface. Due to radiation, the total heat load on the vehicle's heat shield surface may be altered beyond mission tolerances. Therefore, the effect of radiation must be considered in the design process of spacecraft, and radiation analyses coupled with flow solvers have to be implemented to improve reliability during the vehicle design stage. As the first stage of radiation analyses coupled with gas dynamics, efficient databasing schemes for emission and absorption coefficients were developed to model radiation from hypersonic, non-equilibrium flows. For bound-bound transitions, spectral information including the line-center wavelength and assembled parameters for efficient calculation of emission and absorption coefficients is stored for typical air plasma species. Since the flow is in non-equilibrium, a rate equation approach including both collisional and radiatively induced transitions was used to calculate the electronic state populations, assuming quasi-steady state (QSS). The Voigt line shape function was assumed for modeling the line broadening effect. The accuracy and efficiency of the databasing scheme were examined by comparing its results with those of NEQAIR for the Stardust flowfield. 
An accuracy of approximately 1% was achieved, with the databasing scheme running about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield–radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield–radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled, and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculations under the assumption of a fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC–FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that, except at the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC–FV-PMC coupled simulations are compared with previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. 
The QSS model is presented to predict the electronic state populations of radiating gas species taking into account non-local radiation. The definition of the escape factor which is depende
Hyesung Kang; T. W. Jones
1996-07-10
We report simulations of diffusive particle acceleration in oblique magnetohydrodynamical (MHD) shocks. These calculations are based on an extension to oblique shocks of a numerical model for "thermal leakage" injection of particles at low energy into the cosmic-ray population. That technique, incorporated into a fully dynamical diffusion-convection formalism, was recently introduced for parallel shocks by Kang & Jones (1995). Here, we have compared results of time-dependent numerical simulations using our technique with Monte Carlo simulations by Ellison, Baring & Jones (1995) and with in situ observations from the Ulysses spacecraft of oblique interplanetary shocks discussed by Baring et al. (1995). Through the success of these comparisons we have demonstrated that our diffusion-convection method and injection techniques provide a practical tool to capture the essential physics of the injection process and particle acceleration at oblique MHD shocks. In addition to the diffusion-convection simulations, we have included time-dependent two-fluid simulations for a couple of the shocks to demonstrate the basic validity of that formalism in the oblique shock context. Using simple models for the two-fluid closure parameters based on test-particle considerations, we find good agreement with the dynamical properties of the more detailed diffusion-convection results. We emphasize, however, that such two-fluid results can be sensitive to the properties of these closure parameters when the flows are not truly steady. Furthermore, we emphasize through example how the validity of the two-fluid formalism does not necessarily mean that steady-state two-fluid models provide a reliable tool for predicting the efficiency of particle acceleration in real shocks.
Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method
NASA Astrophysics Data System (ADS)
Basire, M.; Soudan, J.-M.; Angelié, C.
2014-09-01
The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the ?-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, gp(Ep), in the configurational space, in terms of the potential energy of the system, with good, well-controlled convergence properties, particularly in the melting phase transition zone, which is of interest in this work. To describe the system, an iron potential has been implemented, called "corrected EAM" (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at and a fivefold larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients Sij are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone, but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature Tm is plotted in terms of the cluster atom number Nat. The standard N_{at}^{-1/3} linear dependence (Pawlow law) is observed for Nat > 300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For Nat < 150, a strong divergence from the Pawlow law is observed. The melting transition, which begins at the surface, is characterized by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I.
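The flat-histogram idea can be illustrated on a toy system with a known density of states, two dice with E = d1 + d2, using a Wang-Landau-style update; this is a generic flat-histogram sketch, not the specific mapping algorithm of Paper I, and all control parameters are illustrative:

```python
import math
import random

def wang_landau(f_final=1e-4, batch=20000, seed=0):
    """Flat-histogram estimate of the density of states g(E) for a toy
    system: two dice, E = d1 + d2 (exact g = 1,2,...,6,...,2,1)."""
    rng = random.Random(seed)
    energies = range(2, 13)
    lng = {e: 0.0 for e in energies}       # running estimate of ln g(E)
    hist = {e: 0 for e in energies}
    state = [1, 1]
    ln_f = 1.0                             # modification factor
    while ln_f > f_final:
        for _ in range(batch):
            prop = list(state)
            prop[rng.randrange(2)] = rng.randint(1, 6)  # re-roll one die
            d = lng[sum(state)] - lng[sum(prop)]
            if d >= 0 or rng.random() < math.exp(d):    # flatten in E
                state = prop
            e = sum(state)
            lng[e] += ln_f                 # push down the visited level
            hist[e] += 1
        if min(hist.values()) > 0.8 * sum(hist.values()) / len(hist):
            ln_f /= 2.0                    # histogram flat: refine factor
            hist = {e: 0 for e in energies}
    return lng

lng = wang_landau()
# There are 6 ways to roll E = 7 and only 1 way to roll E = 2:
ratio = math.exp(lng[7] - lng[2])
```

Once ln g(E) is known, canonical averages at any temperature follow by reweighting, which is what makes flat-histogram methods attractive near phase transitions.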
D'Angola, A.; Tuttafesta, M.; Guadagno, M.; Santangelo, P.; Laricchiuta, A.; Colonna, G.; Capitelli, M. [Scuola di Ingegneria SI, Universita della Basilicata, via dell'Ateneo Lucano, 10 - 85100 Potenza (Italy); Universita di Bari, via Orabona, 4 - 70126 Bari (Italy); CNR-IMIP Bari, via Amendola 122/D - 70126 Bari (Italy)]
2012-11-27
Calculations of the thermodynamic properties of helium plasma using the Reaction Ensemble Monte Carlo (REMC) method are presented. Non-ideal effects at high pressure are observed. Calculations performed using Exp-6 or multi-potential curves for neutral-charge interactions show that, under the thermodynamic conditions considered, no significant differences are observed. Results have been obtained using a Graphics Processing Unit (GPU) CUDA C version of REMC.
Williams, M. L.; Gehin, J. C.; Clarno, K. T. [Oak Ridge National Laboratory, Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)
2006-07-01
The TSUNAMI computational sequences currently in the SCALE 5 code system provide an automated approach to performing sensitivity and uncertainty analysis for eigenvalue responses, using either one-dimensional discrete ordinates or three-dimensional Monte Carlo methods. This capability has recently been expanded to address eigenvalue-difference responses such as reactivity changes. This paper describes the methodology and presents results obtained for an example advanced CANDU reactor design. (authors)
Accelerated kinetics of amorphous silicon using an on-the-fly off-lattice kinetic Monte-Carlo method
Jean-Francois Joly; Fedwa El-Mellouhi; Laurent Karim Beland; Normand Mousseau
2011-01-01
The time evolution of a series of well-relaxed amorphous silicon models was simulated using the kinetic Activation-Relaxation Technique (kART), an on-the-fly off-lattice kinetic Monte Carlo method [1]. This novel algorithm uses the ART nouveau algorithm to generate activated events and links them with local topologies. It was shown to work well for crystals with few defects but this is the
A Straightforward Approach to Markov Chain Monte Carlo Methods for Item Response Models.
ERIC Educational Resources Information Center
Patz, Richard J.; Junker, Brian W.
1999-01-01
Demonstrates Markov chain Monte Carlo (MCMC) techniques that are well-suited to complex models with Item Response Theory (IRT) assumptions. Develops an MCMC methodology that can be routinely implemented to fit normal IRT models, and compares the approach to approaches based on Gibbs sampling. Contains 64 references. (SLD)
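For a single examinee with known item parameters, the two-parameter logistic (2PL) posterior for ability can be sampled with a few lines of random-walk Metropolis. The item parameters and the N(0,1) prior below are illustrative, and this is a far simpler setting than the full joint MCMC the paper develops:

```python
import math
import random

def loglik_2pl(theta, responses, a, b):
    """2PL model: P(correct on item j) = 1 / (1 + exp(-a_j*(theta - b_j)))."""
    ll = 0.0
    for x, aj, bj in zip(responses, a, b):
        p = 1.0 / (1.0 + math.exp(-aj * (theta - bj)))
        ll += math.log(p if x else 1.0 - p)
    return ll

def sample_ability(responses, a, b, n=20000, seed=3):
    """Random-walk Metropolis for ability theta under a N(0,1) prior."""
    rng = random.Random(seed)
    theta = 0.0
    lp = loglik_2pl(theta, responses, a, b) - 0.5 * theta ** 2
    draws = []
    for _ in range(n):
        prop = theta + rng.gauss(0.0, 1.0)
        lp_prop = loglik_2pl(prop, responses, a, b) - 0.5 * prop ** 2
        d = lp_prop - lp
        if d >= 0 or rng.random() < math.exp(d):
            theta, lp = prop, lp_prop
        draws.append(theta)
    return draws

# Five items of average difficulty, all answered correctly:
draws = sample_ability([1, 1, 1, 1, 1], a=[1.0] * 5, b=[0.0] * 5)
```

A Gibbs sampler would instead draw from each full conditional in turn; the Metropolis variant above only requires the ability to evaluate the unnormalized posterior.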
An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian…
Using a Monte-Carlo method for active leakage control in water supply networks
B. Janković-Nišić; N. Graham
2000-01-01
In this paper, procedures and guidelines developed for uncertainty analysis in the modelling and operation of water distribution systems are presented. The proposed Monte Carlo model analyses the propagation of initial uncertainty in demand through a water supply model and its influence on the calculated flows. The procedure is intended to reduce uncertainty and facilitate the decision-making process by setting up
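The propagation step can be sketched for a branched network, where the supply main carries the sum of the downstream nodal demands. The demand means, the 20% coefficient of variation, and the Gaussian error model below are illustrative assumptions, not values from the paper:

```python
import math
import random

def main_flow_stats(mean_demands, cv=0.2, n=50000, seed=7):
    """Monte Carlo propagation of nodal-demand uncertainty to the flow
    in the supply main of a branched network, where the main simply
    carries the sum of the sampled downstream demands."""
    rng = random.Random(seed)
    flows = [sum(rng.gauss(m, cv * m) for m in mean_demands) for _ in range(n)]
    mu = sum(flows) / n
    var = sum((f - mu) ** 2 for f in flows) / (n - 1)
    return mu, math.sqrt(var)

mu, sd = main_flow_stats([10.0, 20.0, 30.0])
# analytic check: mean = 60, sd = 0.2 * sqrt(10^2 + 20^2 + 30^2) ≈ 7.48
```

In a looped network the inner sum would be replaced by a hydraulic solve per realization; the sampling logic is unchanged, which is what makes the Monte Carlo approach attractive for leakage control studies.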
Monte Carlo method in optical diagnostics of skin and skin tissues
Igor V. Meglinski
2003-01-01
A novel Monte Carlo (MC) technique for photon migration through 3D media with the spatially varying optical properties is presented. The employed MC technique combines the statistical weighting variance reduction and real photon paths tracing schemes. The overview of the results of applications of the developed MC technique in optical/near-infrared reflectance spectroscopy, confocal microscopy, fluorescence spectroscopy, OCT, Doppler flowmetry and
A novel Monte Carlo method for the optical diagnostics of skin
Igor V. Meglinski; Dmitry Y. Churmakov
2003-01-01
A novel Monte Carlo (MC) technique for photon migration through 3D media with the spatially varying optical properties is presented. The employed MC technique combines the statistical weighting variance reduction and real photon paths tracing schemes. The overview of the results of applications of the developed MC technique in optical/near-infrared reflectance spectroscopy, confocal microscopy, fluorescence spectroscopy, OCT, Diffusing Wave Spectroscopy
Recent developments in quantum Monte Carlo methods for electronic structure of atomic clusters
Lubos Mitas
2004-01-01
Recent developments of quantum Monte Carlo (QMC) for electronic structure calculations of clusters, other nanomaterials and quantum systems will be reviewed. QMC methodology is based on a combination of analytical insights about properties of exact wavefunctions, explicit treatment of electron-electron correlation and robustness of computational stochastic techniques. In the course of QMC development for calculations of real materials, small and
The Markov chain Monte Carlo method: an approach to approximate counting and integration
Mark Jerrum; Alistair Sinclair
1996-01-01
In the area of statistical physics, Monte Carlo algorithms based on Markov chain simulation have been in use for many years. The validity of these algorithms depends cru- cially on the rate of convergence to equilibrium of the Markov chain being simulated. Unfortunately, the classical theory of stochastic processes hardly touches on the sort of non-asymptotic analysis required in this
NASA Astrophysics Data System (ADS)
Hirvijoki, E.; Kurki-Suonio, T.; Äkäslompolo, S.; Varje, J.; Koskela, T.; Miettunen, J.
2015-06-01
This paper explains how to obtain the distribution function of minority ions in tokamak plasmas using the Monte Carlo method. Since the emphasis is on energetic ions, the guiding-center transformation is outlined, including also the transformation of the collision operator. Even within the guiding-center formalism, the fast particle simulations can still be very CPU intensive and, therefore, we introduce the reader also to the world of high-performance computing. The paper is concluded with a few examples where the presented method has been applied.
NASA Astrophysics Data System (ADS)
Sadovich, Sergey; Talamo, A.; Burnos, V.; Kiyavitskaya, H.; Fokov, Yu.
2014-06-01
In subcritical systems driven by an external neutron source, experimental methods based on a pulsed neutron source (PNS) and statistical techniques play an important role in reactivity measurement. Simulating these methods is a very time-consuming procedure, and several improvements to the neutronic calculations have been made in Monte Carlo programs. This paper introduces a new method for simulating PNS and statistical measurements. In this method, all events that occur in the detector during the simulation are stored in a file using the PTRAC feature of MCNP. Afterwards, a special post-processing code simulates the PNS and statistical methods. Additionally, different neutron pulse shapes and lengths, as well as detector dead time, can be included in the simulation. The methods described above were tested on the Yalina-Thermal subcritical assembly, located at the Joint Institute for Power and Nuclear Research SOSNY, Minsk, Belarus. Good agreement between experimental and simulated results was shown.
Kim, Beop-Min
1991-01-01
An Analysis of the Effect of Coupling Between Temperature Rise and Light Distribution in Laser Irradiated Tissue Using Finite Element and Monte Carlo Methods. A thesis by Beop-Min Kim, submitted to the Office of Graduate Studies of Texas A... (committee chair: Sohi Rastegar; member: Gerald E. Miller), August 1991.
NASA Astrophysics Data System (ADS)
Krzakala, Florent; Rosso, Alberto; Semerjian, Guilhem; Zamponi, Francesco
2008-10-01
The cavity method is a well-established technique for solving classical spin models on sparse random graphs (mean-field models with finite connectivity). Laumann [Phys. Rev. B 78, 134424 (2008)] recently proposed an extension of this method to quantum spin-1/2 models in a transverse field, using a discretized Suzuki-Trotter imaginary-time formalism. Here we show how to take analytically the continuous imaginary-time limit. Our main technical contribution is an explicit procedure to generate the spin trajectories in a path-integral representation of the imaginary-time dynamics. As a side result we also show how this procedure can be used in simple heat bath Monte Carlo simulations of generic quantum spin models. The replica symmetric continuous-time quantum cavity method is formulated for a wide class of models and applied as a simple example to the Bethe lattice ferromagnet in a transverse field. The results of the method are confronted with various approximation schemes in this particular case. On this system we performed quantum Monte Carlo simulations that confirm the exactness of the cavity method in the thermodynamic limit.
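The heat bath move mentioned above, in its classical form, redraws a spin from its exact conditional distribution given its neighbours. A sketch for a classical Ising ring (not the quantum spin-trajectory procedure of the paper); with J = 0 the exact magnetization tanh(beta*h) provides a check:

```python
import math
import random

def heat_bath_ising(n_spins=64, beta=0.5, h=1.0, J=0.0, sweeps=2000,
                    burn_in=500, seed=5):
    """Heat-bath Monte Carlo for a classical Ising ring: each spin is
    redrawn from its exact conditional distribution given its neighbours
    and the external field h. Returns the mean magnetization."""
    rng = random.Random(seed)
    s = [1] * n_spins
    mags = []
    for sweep in range(sweeps):
        for i in range(n_spins):
            local = J * (s[i - 1] + s[(i + 1) % n_spins]) + h
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * local))
            s[i] = 1 if rng.random() < p_up else -1    # heat-bath draw
        if sweep >= burn_in:
            mags.append(sum(s) / n_spins)
    return sum(mags) / len(mags)

# With J = 0 the spins decouple and the exact result is tanh(beta*h):
m = heat_bath_ising()
```

Unlike Metropolis, the heat-bath draw is rejection-free; the quantum version in the paper replaces the single-spin conditional by a conditional distribution over imaginary-time spin trajectories.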
Wang Haifeng [Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853 (United States)], E-mail: hw98@cornell.edu; Popov, Pavel P.; Pope, Stephen B. [Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853 (United States)
2010-03-01
We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for particle position and a random differential equation for particle composition. The numerical methods considered advance the solution in time with (weak) second-order accuracy with respect to the time step size. The four primary contributions of the paper are: (i) establishing that the coefficients in the particle equations can be frozen at the mid-time (while preserving second-order accuracy), (ii) examining the performance of three existing schemes for integrating the SDEs, (iii) developing and evaluating different splitting schemes (which treat particle motion, reaction and mixing on different sub-steps), and (iv) developing the method of manufactured solutions (MMS) to assess the convergence of Monte Carlo particle methods. Tests using MMS confirm the second-order accuracy of the schemes. In general, the use of frozen coefficients reduces the numerical errors. Otherwise no significant differences are observed in the performance of the different SDE schemes and splitting schemes.
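The "frozen coefficients at mid-time" idea can be sketched for a scalar SDE with additive noise, dX = a(X) dt + sigma dW: a predictor half-step locates the mid-time state, the drift is frozen there, and the full step uses the frozen value. The linear test drift a(x) = -x, for which E[X_T] = X0*exp(-T), is an illustrative choice, not one of the paper's combustion models:

```python
import math
import random

def midpoint_mean(x0=1.0, drift=lambda x: -x, sigma=0.5, T=1.0,
                  n_steps=50, n_paths=20000, seed=11):
    """Mid-time frozen-coefficient scheme for dX = a(X) dt + sigma dW:
    a predictor half-step gives the mid-time state, the drift is frozen
    there, and the full step is taken with the frozen value. Returns the
    sample mean of X_T over n_paths."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            x_mid = x + 0.5 * dt * drift(x)          # predictor to mid-time
            x = x + dt * drift(x_mid) + sigma * dw   # step with frozen drift
        total += x
    return total / n_paths

# For the linear drift a(x) = -x the exact mean is x0 * exp(-T):
mean_xt = midpoint_mean()
```

For the linear drift the deterministic part of each step reproduces exp(-dt) to third order in dt, illustrating why freezing at the mid-time preserves weak second-order accuracy.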
An Irregularly Portioned Lagrangian Monte Carlo Method for Turbulent Flow Simulation
Server L. Yilmaz; Mehdi B. Nik; Mohammad Reza H. Sheikhi; P. A. Strakey; Peyman Givi
2011-01-01
A novel computational methodology, termed “Irregularly Portioned Lagrangian Monte Carlo” (IPLMC) is developed for large eddy\\u000a simulation (LES) of turbulent flows. This methodology is intended for use in the filtered density function (FDF) formulation\\u000a and is particularly suitable for simulation of chemically reacting flows on massively parallel platforms. The IPLMC facilitates\\u000a efficient simulations, and thus allows reliable prediction of complex
Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method
M. Moralles; C. C. Guimarães; E. Okuno
2005-01-01
Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy depth deposition profile in the TLDs. X-ray spectra of the ISO\\/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
Hall, Clifford (Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030, United States; School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030, United States); Ji, Weixiao (Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030, United States); Blaisten-Barojas, Estela, E-mail: blaisten@gmu.edu (Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030, United States; School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030, United States)
2014-02-01
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables, for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed-phase system of oligopyrrole chains. A benchmark shows a size-scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect several CPU–GPU duets in parallel. Highlights:
• We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet.
• The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation.
• Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles.
• The testbed involves a polymeric system of oligopyrroles in the condensed phase.
• The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.
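For reference, the accept/reject kernel that such a GPU engine parallelizes over many molecules is the standard Metropolis step. A minimal serial sketch for a single particle in a harmonic potential (a toy illustration, not the authors' CUDA code):

```python
import math, random

def metropolis_chain(n_steps, beta=1.0, step=0.5, seed=1):
    # Serial Metropolis Monte Carlo for one particle in the harmonic
    # potential U(x) = x^2 / 2; the GPU engine in the abstract runs
    # this same accept/reject kernel concurrently over many molecules.
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        trial = x + rng.uniform(-step, step)
        dU = 0.5 * trial * trial - 0.5 * x * x
        if dU <= 0.0 or rng.random() < math.exp(-beta * dU):
            x = trial                      # accept the move
        samples.append(x)
    return samples

samples = metropolis_chain(200_000)
mean_x2 = sum(x * x for x in samples) / len(samples)
print(round(mean_x2, 1))  # <x^2> should approach 1/beta = 1.0
```

The chain samples the Boltzmann distribution exp(-βU), so the estimated ⟨x²⟩ converges to 1/β; parallel variants must only ensure that concurrently attempted moves do not interact.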
A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison
NASA Astrophysics Data System (ADS)
Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.
2014-06-01
Physical analyses of the potential performance of LWRs with regard to fuel utilization require substantial work devoted to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology make it possible to validate these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we use the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. To this end, a 3D pin-by-pin model with a large number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. Under these conditions, two main subjects are discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.
Investigation of Collimator Influential Parameter on SPECT Image Quality: a Monte Carlo Study
Banari Bahnamiri, Sh.
2015-01-01
Background: Obtaining high-quality images with a Single Photon Emission Computed Tomography (SPECT) device is the most important goal in nuclear medicine, because low image quality increases the risk of errors in diagnosing and treating the patient. Studying the factors that affect the spatial resolution of imaging systems is therefore vital. One of the most important factors in SPECT imaging is the choice of a collimator appropriate to the characteristics of a given radiopharmaceutical, since it strongly influences the Full Width at Half Maximum (FWHM), the main parameter of spatial resolution. Method: In this research, the detector and collimator of a SPECT imaging device (Model HD3, Philips) were simulated, and the influential collimator parameters were investigated, using the MCNP-4c code. Results: The experimental measurements and the simulation calculations showed a relative difference of less than 5%, confirming the accuracy of the MCNP simulation. Conclusion: This is the first essential step in the design and modelling of new collimators for creating high-quality images in nuclear medicine. PMID:25973410
Tang, Ke; Zhang, Jinfeng; Liang, Jie
2014-01-01
Loops in proteins are flexible regions connecting regular secondary structures. They are often involved in protein functions through interacting with other molecules. The irregularity and flexibility of loops make their structures difficult to determine experimentally and challenging to model computationally. Conformation sampling and energy evaluation are the two key components in loop modeling. We have developed a new method for loop conformation sampling and prediction based on a chain growth sequential Monte Carlo sampling strategy, called Distance-guided Sequential chain-Growth Monte Carlo (DiSGro). With an energy function designed specifically for loops, our method can efficiently generate high quality loop conformations with low energy that are enriched with near-native loop structures. The average minimum global backbone RMSD for 1,000 conformations of 12-residue loops is Å, with a lowest energy RMSD of Å, and an average ensemble RMSD of Å. A novel geometric criterion is applied to speed up calculations. The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about cpu minutes for 12-residue loops, compared to ca cpu minutes using the FALCm method. Test results on benchmark datasets show that DiSGro performs comparably or better than previous successful methods, while requiring far less computing time. DiSGro is especially effective in modeling longer loops (– residues). PMID:24763317
Martin, W.R.
1993-01-01
This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.
NASA Astrophysics Data System (ADS)
Wang, Dong; Tse, Peter W.
2015-05-01
Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating the remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Second, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil-sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
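A general sequential Monte Carlo filter of the kind mentioned above is, at its core, a propagate-weight-resample loop. Below is a minimal bootstrap particle filter for a hypothetical linear degradation index; the model form, parameter values, and data are illustrative assumptions (the RUL extrapolation step is omitted), not the paper's actual pump model.

```python
import math, random

def particle_filter(observations, n_particles=2000, drift=0.1,
                    proc_std=0.05, obs_std=0.2, seed=0):
    # Bootstrap sequential Monte Carlo filter for a toy degradation model:
    #   state:       x_t = x_{t-1} + drift + process noise
    #   observation: y_t = x_t + measurement noise
    # Each step propagates the particles, weights them by the observation
    # likelihood, records the filtered mean, and resamples.
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    means = []
    for y in observations:
        particles = [p + drift + rng.gauss(0.0, proc_std) for p in particles]
        weights = [math.exp(-0.5 * ((y - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        means.append(sum(w * p for w, p in zip(weights, particles)) / total)
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means

# Synthetic run: the true wear index grows by 0.1 per step, observed noisily.
data_rng = random.Random(42)
truth = [0.1 * (t + 1) for t in range(30)]
obs = [x + data_rng.gauss(0.0, 0.2) for x in truth]
est = particle_filter(obs)
print(abs(est[-1] - truth[-1]) < 0.4)  # the filtered mean tracks the true wear
```

Extrapolating the fitted state trajectory to an alert threshold, as the paper does, would then yield a remaining-useful-life estimate per particle.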
Morera-Gómez, Yasser; Cartas-Aguila, Héctor A; Alonso-Hernández, Carlos M; Bernal-Castillo, Jose L; Guillén-Arruebarrena, Aniel
2015-03-01
The Monte Carlo efficiency transfer method was used to determine the full-energy peak efficiency of a coaxial n-type HPGe detector. The efficiency calibration curves for three Certified Reference Materials were determined by efficiency transfer using a (152)Eu reference source. The efficiency values obtained after efficiency transfer were used to calculate the activity concentrations of the radionuclides detected in the three materials, which were measured in a low-background gamma spectrometry system. Reported and calculated activity concentrations show good agreement, with mean deviations of 5%, which is satisfactory for environmental sample measurements. PMID:25544663
Lin, Uei-Tyng; Chu, Chien-Hau
2006-05-01
The Monte Carlo method was used to simulate the correction factors for electron loss and scattered photons for two improved cylindrical free-air ionization chambers (FACs) constructed at the Institute of Nuclear Energy Research (INER, Taiwan). The method is based on weighting correction factors for mono-energetic photons with X-ray spectra. The newly obtained correction factors for the medium-energy free-air chamber were compared with the current values, which were based on a least-squares fit to experimental data published in the NBS Handbook 64 [Wyckoff, H.O., Attix, F.H., 1969. Design of free-air ionization chambers. National Bureau Standards Handbook, No. 64. US Government Printing Office, Washington, DC, pp. 1-16; Chen, W.L., Su, S.H., Su, L.L., Hwang, W.S., 1999. Improved free-air ionization chamber for the measurement of X-rays. Metrologia 36, 19-24]. The comparison showed that the agreement between the Monte Carlo method and the experimental data is within 0.22%. In addition, mono-energetic correction factors for the low-energy free-air chamber were calculated. Average correction factors were then derived for measured and theoretical X-ray spectra at 30-50 kVp. Although the measured and calculated spectra differ slightly, the resulting differences in the derived correction factors are less than 0.02%. PMID:16427292
Nuclear Level Density of 161Dy in the Shell Model Monte Carlo Method
Özen, Cem; Nakada, Hitoshi
2012-01-01
We extend the shell-model Monte Carlo applications to the rare-earth region to include the odd-even nucleus 161Dy. The projection on an odd number of particles leads to a sign problem at low temperatures, making it impractical to extract the ground-state energy in direct calculations. We use level counting data at low energies and neutron resonance data to extract the shell model ground-state energy to good precision. We then calculate the level density of 161Dy and find it in very good agreement with the level density extracted from experimental data.
Gudjonson, Herman; Kats, Mikhail A.; Liu, Kun; Nie, Zhihong; Kumacheva, Eugenia; Capasso, Federico
2014-01-01
Many experimental systems consist of large ensembles of uncoupled or weakly interacting elements operating as a single whole; this is particularly the case for applications in nano-optics and plasmonics, including colloidal solutions, plasmonic or dielectric nanoparticles on a substrate, antenna arrays, and others. In such experiments, measurements of the optical spectra of ensembles will differ from measurements of the independent elements as a result of small variations from element to element (also known as polydispersity) even if these elements are designed to be identical. In particular, sharp spectral features arising from narrow-band resonances will tend to appear broader and can even be washed out completely. Here, we explore this effect of inhomogeneous broadening as it occurs in colloidal nanopolymers comprising self-assembled nanorod chains in solution. Using a technique combining finite-difference time-domain simulations and Monte Carlo sampling, we predict the inhomogeneously broadened optical spectra of these colloidal nanopolymers and observe significant qualitative differences compared with the unbroadened spectra. The approach combining an electromagnetic simulation technique with Monte Carlo sampling is widely applicable for quantifying the effects of inhomogeneous broadening in a variety of physical systems, including those with many degrees of freedom that are otherwise computationally intractable. PMID:24469797
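The combination of a per-element spectral model with Monte Carlo sampling over a polydispersity distribution can be sketched with a toy line-shape model. Here Lorentzian lines with Gaussian-distributed center frequencies stand in for the paper's FDTD-computed spectra; all parameter values are illustrative assumptions.

```python
import random

def lorentzian(w, w0, gamma):
    # Single-element line shape with half-width gamma centered at w0.
    return gamma ** 2 / ((w - w0) ** 2 + gamma ** 2)

def ensemble_spectrum(freqs, w0=1.0, gamma=0.01, spread=0.05,
                      n_samples=2000, seed=3):
    # Monte Carlo estimate of the inhomogeneously broadened spectrum:
    # draw each element's resonance from a Gaussian polydispersity
    # distribution and average the individual line shapes.
    rng = random.Random(seed)
    centers = [rng.gauss(w0, spread) for _ in range(n_samples)]
    return [sum(lorentzian(w, c, gamma) for c in centers) / n_samples
            for w in freqs]

def fwhm(freqs, spec):
    # Width between the outermost grid points at or above half the peak.
    half = max(spec) / 2.0
    above = [w for w, s in zip(freqs, spec) if s >= half]
    return above[-1] - above[0]

freqs = [0.8 + 0.4 * i / 400 for i in range(401)]
single = [lorentzian(w, 1.0, 0.01) for w in freqs]
broadened = ensemble_spectrum(freqs)
print(fwhm(freqs, single) < fwhm(freqs, broadened))  # True: ensemble line is wider
```

The ensemble average washes out the narrow single-element resonance, exactly the qualitative effect the abstract describes; replacing `lorentzian` with the output of any electromagnetic solver gives the general technique.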
Verification of biological dose calculation for carbon ion therapy with a Monte Carlo method.
Nose, Hiroyuki; Aso, Tsukasa; Kase, Yuki; Matsufuji, Naruhiro; Kanai, Tatsuaki
2009-01-01
We developed a calculation technique to evaluate the biological dose distribution of heavy ion beams, and verified its reliability by comparison with experimental results. The calculation technique was developed by connecting two general-purpose Monte Carlo codes. In order to evaluate the radiation quality and biological effect, the microdosimetric kinetic model was adopted. We estimated the distribution of physical dose and radiation quality for the carbon broad beam with experiments and with Monte Carlo calculations. Relative biological effectiveness (RBE) was estimated from radiation quality, and biological dose could be calculated as the product of the physical dose and the RBE. Our calculations showed good agreement with experiments, not only for the physical dose, but also for the dose-averaged lineal energy as an expression of radiation quality and therefore biological dose. This finding indicates that our calculation tool will be useful to estimate biological dose distribution in the design of heavy ion radiation facilities or for quality assurance in treatment planning. PMID:21976254
Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method
NASA Astrophysics Data System (ADS)
Moralles, M.; Guimarães, C. C.; Okuno, E.
2005-06-01
Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy depth deposition profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating an X-ray tube (Philips MG-450) associated with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from results of Monte Carlo simulations found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed the determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm^-1.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
Satoshi SATO; Hiromasa IIDA; Takeo NISHITANI
2002-01-01
For the evaluation of gamma-ray dose rates around duct penetrations after shutdown of a nuclear fusion reactor, a calculation method is proposed based on Monte Carlo neutron and decay gamma-ray transport calculations. For the radioisotope production rates during operation, the Monte Carlo calculation is conducted by modifying the nuclear data library, replacing a prompt gamma-ray
NASA Astrophysics Data System (ADS)
Hosseini, Seyed Mahmoud; Shahabian, Farzad
2009-09-01
In this article, the dynamic responses of functionally graded (FG) thick hollow cylinders are studied from a stochastic point of view using the Monte Carlo method. The FG cylinder is subjected to mechanical shock loads applied to its inner surface. The FG cylinder is assumed to be under plane strain and axisymmetric conditions. To obtain the radial displacement at each point, the Navier equation in displacement form is derived using isotropic elements. To solve the problem, a combined numerical method is used (Galerkin finite element and Newmark finite difference methods). The maximum, mean and minimum values of the radial displacement, as well as its variance, are calculated at various points across the thickness for different values of the volume fraction exponent (in the mechanical property functions of the FG cylinder).
Rijken, J D; Harris-Phillips, W; Lawson, J M
2015-03-01
Lithium fluoride thermoluminescent dosimeters (TLDs) exhibit a dependence on the energy of the radiation beam of interest, so they need to be carefully calibrated for different energy spectra if used for clinical radiation oncology beam dosimetry and quality assurance. TLD energy response was investigated for a specific set of TLD700:LiF(Mg,Ti) chips for a high-dose-rate (192)Ir brachytherapy source. A novel method of energy response calculation for (192)Ir was developed in which dose was determined through Monte Carlo modelling in Geant4. The TLD response was then measured experimentally. Results showed that TLD700 has a depth-dependent response in water, ranging from 1.170 ± 0.125 at 20 mm to 0.976 ± 0.043 at 50 mm (normalised to a nominal 6 MV beam response). The method of calibration and the Monte Carlo data developed through this study could easily be applied by other medical physics departments seeking to use TLDs for (192)Ir patient dosimetry or for experimental verification of a treatment planning system. PMID:25663432
Williams, Michael S; Ebel, Eric D
2014-11-18
The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. 
An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the magnitude of biases in an estimator that ignores the effects of an unequal probability sample design. PMID:25333423
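The weighted bootstrap idea can be illustrated with a toy unequal-probability sample. In the sketch below (an assumed example, not the paper's full estimator), units are resampled with probability proportional to their inverse selection probabilities, which removes the bias a naive mean incurs when likely-contaminated units are preferentially sampled.

```python
import random

def weighted_bootstrap_mean(values, sel_probs, n_boot=2000, seed=7):
    # Weighted bootstrap for a sample drawn with unequal selection
    # probabilities: resample proportionally to the inverse selection
    # probabilities (Horvitz-Thompson style weights), so over-sampled
    # "risk-based" units do not bias the estimated population mean.
    rng = random.Random(seed)
    weights = [1.0 / p for p in sel_probs]
    means = []
    for _ in range(n_boot):
        resample = rng.choices(values, weights=weights, k=len(values))
        means.append(sum(resample) / len(resample))
    means.sort()
    point = sum(means) / n_boot
    return point, (means[int(0.025 * n_boot)], means[int(0.975 * n_boot)])

# Toy data: contaminated units (value 1.0) were sampled 4x as often
# as clean units (value 0.0), so the naive mean is biased upward.
values = [1.0] * 40 + [0.0] * 60
sel_probs = [0.4] * 40 + [0.1] * 60
naive = sum(values) / len(values)          # 0.4, ignores the sample design
point, (ci_lo, ci_hi) = weighted_bootstrap_mean(values, sel_probs)
print(round(naive, 2), round(point, 2))    # weighted estimate is near 1/7
```

The true design-weighted prevalence here is 100/700 ≈ 0.14, so the gap between the naive and weighted estimates shows the magnitude of bias the paper warns about.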
Quantum Monte Carlo Helsinki 2011
Boyer, Edmond
Quantum Monte Carlo, Helsinki 2011. Lecture notes by Marius Lewerenz (MSME/CT, UMR 8208 CNRS, Université Paris-Est). The extracted table of contents includes: "What is a Monte Carlo method?" and "What are Monte Carlo methods good for?"
Cosmic ray ionization and dose at Mars: Benchmarking deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Norman, R. B.; Gronoff, G.; Mertens, C. J.
2014-12-01
The ability to evaluate the cosmic ray environment at Mars is of interest for future manned exploration. To support exploration, tools must be developed to accurately assess the radiation environment both in free space and on planetary surfaces. The primary tool NASA uses to quantify radiation exposure behind shielding materials is the space radiation transport code HZETRN. In order to build confidence in HZETRN, code benchmarking against Monte Carlo radiation transport codes is often used. This work compares dose calculations at Mars by HZETRN and the GEANT4 application Planetocosmics. The dose at ground level and the energy deposited in the atmosphere by galactic cosmic ray protons and alpha particles have been calculated for the Curiosity landing conditions. In addition, this work considers Solar Energetic Particle events, which allows for a better understanding of the role of the spectral form in the comparison. The results for protons and alpha particles show very good agreement between HZETRN and Planetocosmics.
The Acceptance Probability of the Hybrid Monte Carlo Method in High-Dimensional Problems
NASA Astrophysics Data System (ADS)
Beskos, A.; Pillai, N. S.; Roberts, G. O.; Sanz-Serna, J. M.; Stuart, A. M.
2010-09-01
We investigate the properties of the Hybrid Monte Carlo algorithm in high dimensions. In the simplified scenario of independent, identically distributed components, we prove that, to obtain an O(1) acceptance probability as the dimension d of the state space tends to infinity, the Verlet/leap-frog step-size h should be scaled as h = ℓ × d^(-1/4). We also identify analytically the asymptotically optimal acceptance probability, which turns out to be 0.651 (to three decimal places); this is the choice that optimally balances the cost of generating a proposal, which decreases as ℓ increases, against the cost related to the average number of proposals required to obtain acceptance, which increases as ℓ increases.
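The Verlet/leap-frog proposal and the Metropolis accept/reject step analyzed above can be sketched for a one-dimensional Gaussian target. This is a toy illustration of the algorithm itself, not of the paper's asymptotic analysis; the step size and trajectory length are arbitrary choices.

```python
import math, random

def hmc_sample(n_iters, h=0.3, n_leap=10, seed=5):
    # Hybrid/Hamiltonian Monte Carlo for a standard Gaussian target
    # U(q) = q^2 / 2, using the Verlet/leap-frog integrator with step h.
    # Accepting with probability min(1, exp(-dH)) corrects the energy
    # error introduced by the discretization.
    rng = random.Random(seed)
    q, accepted, samples = 0.0, 0, []
    for _ in range(n_iters):
        p = rng.gauss(0.0, 1.0)                 # resample momentum
        h0 = 0.5 * p * p + 0.5 * q * q          # initial Hamiltonian
        q_new, p_new = q, p
        p_new -= 0.5 * h * q_new                # initial half momentum step
        for _ in range(n_leap - 1):
            q_new += h * p_new                  # full position step
            p_new -= h * q_new                  # full momentum step
        q_new += h * p_new
        p_new -= 0.5 * h * q_new                # final half momentum step
        h1 = 0.5 * p_new * p_new + 0.5 * q_new * q_new
        if rng.random() < math.exp(min(0.0, h0 - h1)):
            q = q_new
            accepted += 1
        samples.append(q)
    return samples, accepted / n_iters

samples, acc = hmc_sample(20_000)
print(acc > 0.9)  # leap-frog nearly conserves H here, so acceptance is high
```

For this easy 1-D target the energy error is tiny and acceptance far exceeds 0.651; the paper's point is how h must shrink with dimension d to keep acceptance away from zero.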
Modeling of radiation-induced bystander effect using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun
2009-03-01
Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, or even whole organisms irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. This model, based on our previous experiment in which cells were sparsely distributed in a round dish, focuses mainly on the spatial characteristics. The simulation results agree well with the experimental data. Moreover, another bystander effect experiment was also computed with this model, and the model succeeded in predicting its results. The comparison of simulations with the experimental results indicates the feasibility of the model and the validity of some of the key mechanisms assumed.
A Markov-Chain Monte-Carlo Based Method for Flaw Detection in Beams
Glaser, R E; Lee, C L; Nitao, J J; Hickling, T L; Hanley, W G
2006-09-28
A Bayesian inference methodology using a Markov Chain Monte Carlo (MCMC) sampling procedure is presented for estimating the parameters of computational structural models. This methodology combines prior information, measured data, and forward models to produce a posterior distribution for the system parameters of structural models that is most consistent with all available data. The MCMC procedure is based upon a Metropolis-Hastings algorithm that is shown to function effectively with noisy data, incomplete data sets, and mismatched computational nodes/measurement points. A series of numerical test cases based upon a cantilever beam is presented. The results demonstrate that the algorithm is able to estimate model parameters utilizing experimental data for the nodal displacements resulting from specified forces.
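A minimal version of the Metropolis-Hastings machinery described above can be shown on a deliberately simplified one-parameter "structural" model: a hypothetical stiffness k with forward model u = F/k and a flat prior on k > 0. This is an illustrative assumption, not the paper's cantilever-beam setup.

```python
import math, random

def mh_posterior(forces, disps, n_iters=30_000, noise_std=0.05,
                 prop_std=0.1, seed=11):
    # Metropolis-Hastings sampling of the posterior over a single
    # stiffness parameter k, combining a forward model u = F / k with
    # noisy displacement measurements (Gaussian likelihood, flat prior).
    rng = random.Random(seed)

    def log_like(k):
        return -0.5 * sum(((u - f / k) / noise_std) ** 2
                          for f, u in zip(forces, disps))

    k = 2.0
    ll = log_like(k)
    chain = []
    for _ in range(n_iters):
        k_prop = k + rng.gauss(0.0, prop_std)   # symmetric random-walk proposal
        if k_prop > 0.0:
            ll_prop = log_like(k_prop)
            if math.log(rng.random()) < ll_prop - ll:
                k, ll = k_prop, ll_prop         # accept
        chain.append(k)
    return chain

# Synthetic measurements generated from a "true" stiffness of k = 4.0.
data_rng = random.Random(99)
forces = [1.0, 2.0, 3.0, 4.0, 5.0]
disps = [f / 4.0 + data_rng.gauss(0.0, 0.05) for f in forces]
chain = mh_posterior(forces, disps)
post_mean = sum(chain[5000:]) / len(chain[5000:])
print(round(post_mean, 1))  # posterior mean near the true stiffness 4.0
```

The chain starts deliberately far from the truth (k = 2.0) and, after burn-in, concentrates around the data-consistent value, which is the behavior the paper demonstrates on the cantilever-beam test cases with noisy and incomplete data.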
NASA Astrophysics Data System (ADS)
Shahrabi, Mohammad; Tavakoli-Anbaran, Hossien
2015-02-01
Calculation of dosimetry parameters by the TG-60 approach for beta sources and the TG-43 approach for gamma sources can help in the design of brachytherapy sources. In this work, TG-60 dosimetry parameters are calculated for the Sm-153 brachytherapy seed using the Monte Carlo simulation approach. The continuous beta spectrum of Sm-153 and its probability density are applied to simulate the Sm-153 source. Sm-153 is produced by neutron capture via the 152Sm(n,γ)153Sm reaction in reactors. The Sm-153 radionuclide decays by beta emission followed by gamma-ray emissions, with a half-life of 1.928 days. The Sm-153 source is simulated in a spherical water phantom to calculate the deposited energy and geometry function at the points of interest. The Sm-153 seed consists of 20% samarium, 30% calcium and 50% silicon, in cylindrical shape with density 1.76 g/cm^3. The anisotropy function and radial dose function were calculated at 0-4 mm radial distances relative to the seed center and polar angles of 0-90 degrees. The results of this research are compared with the results of Taghdiri et al. (Iran. J. Radiat. Res. 9, 103 (2011)); the final beta spectrum of Sm-153 is not considered in their work. Results show significant relative differences, even up to a factor of 5, for the anisotropy functions at 0.6, 1 and 2 mm distances and some angles. The MCNP4C Monte Carlo code is applied both in the present paper and in the above-mentioned one.
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11
The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder equivalent for the three codes, but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because it better reproduces the actual shape and dimensions of a cell and improves computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, outside the spherical scoring volume. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation even with such small geometries and low energies, far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
Dupuis, Paul [Brown University] [Brown University
2014-03-14
This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
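For the second class of problems, where the estimated quantity is dominated by rare samples, a standard remedy is importance sampling with an exponentially tilted proposal, and large deviation analysis guides the choice of tilt. A toy sketch for a Gaussian tail probability (illustrative only, not taken from the proposal itself):

```python
import math, random

def rare_event_probability(threshold, n_samples=50_000, seed=13):
    # Importance sampling for the rare event P(X > threshold), X ~ N(0,1):
    # sample from the tilted proposal N(threshold, 1) and reweight each hit
    # by the likelihood ratio phi(y) / phi(y - threshold). Plain Monte Carlo
    # would need tens of millions of samples to see even one hit at
    # threshold = 4.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        y = rng.gauss(threshold, 1.0)           # tilted proposal
        if y > threshold:
            total += math.exp(-threshold * y + 0.5 * threshold ** 2)
    return total / n_samples

est = rare_event_probability(4.0)
exact = 3.167e-5  # 1 - Phi(4), tail probability of the standard normal
print(abs(est - exact) / exact < 0.1)  # relative error of a few percent
```

Shifting the sampling distribution onto the rare set keeps the weighted estimator's relative variance bounded, which is precisely the property that large deviation rates can be used to certify and optimize.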
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
NASA Astrophysics Data System (ADS)
Densmore, J. D.; Park, H.; Wollaber, A. B.; Rauenzahn, R. M.; Knoll, D. A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption-emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck-Cummings algorithm.
Dieudonne, C.; Dumonteil, E.; Malvagi, F.; Diop, C. M. [Commissariat a l'Energie Atomique et aux Energies Alternatives CEA, Service d'Etude des Reacteurs et de Mathematiques Appliquees, DEN/DANS/DM2S/SERMA/LTSD, F91191 Gif-sur-Yvette cedex (France)
2013-07-01
For several years, Monte Carlo burnup/depletion codes have appeared which couple a Monte Carlo code, simulating the neutron transport, to a deterministic method that computes the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the expensive Monte Carlo solver called at each time step. Great improvements in calculation time could therefore be expected if the repeated Monte Carlo transport sequences could be avoided. For example, one could run an initial Monte Carlo simulation only once, for the first time/burnup step, and then use the concentration-perturbation capability of the Monte Carlo code to replace the subsequent time/burnup steps (the later burnup steps are treated as perturbations of the concentrations of the initial burnup step). This paper presents the advantages and limitations of this technique and preliminary results in terms of speed-up and figure of merit. Finally, we detail different possible calculation schemes based on this method. (authors)
Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V [Department of Optics and Biomedical Physics, N.G.Chernyshevskii Saratov State University (Russian Federation)
2006-12-31
Based on digital image analysis and the inverse Monte Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to the three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte Carlo method. The content of the different types of melanin in hair is estimated from the absorption coefficient. It is shown that the dominant type of melanin in dark hair is eumelanin, whereas in light hair pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
Ridikas, D; Feray, S; Cometto, M; Damoy, F
2005-01-01
During the decommissioning of the SATURNE accelerator at CEA Saclay (France), a number of concrete containers with radioactive materials of low or very low activity had to be characterised before their final storage. In this paper, a non-destructive approach combining gamma-ray spectroscopy and Monte Carlo simulations is used to characterise massive concrete blocks containing radioactive waste. The limits and uncertainties of the proposed method are quantified for the source-term activity estimates using 137Cs as a tracer element. A series of activity measurements with a few representative waste containers was performed before and after destruction. The distribution of radioactive materials turned out to be inhomogeneous and its density non-uniform, and this became the major source of systematic error in this study. Nevertheless, we conclude that by combining gamma-ray spectroscopy and full-scale Monte Carlo simulations one can estimate the source-term activity for tracer elements such as 134Cs, 137Cs, 60Co, etc., with an uncertainty no larger than a factor of 2-3. PMID:16381694
NASA Astrophysics Data System (ADS)
Trinci, G.; Massari, R.; Scandellari, M.; Boccalini, S.; Costantini, S.; Di Sero, R.; Basso, A.; Sala, R.; Scopinaro, F.; Soluri, A.
2010-09-01
The aim of this work is to present a new scintigraphic device able to automatically change the length of its collimator in order to adapt the spatial resolution to the gamma-source distance. This patented technique removes the need for the collimator changes that standard gamma cameras still require. Monte Carlo simulations represent the best tool for exploring new technological solutions for such an innovative collimation structure. They also provide a valid analysis of gamma-camera performance as well as of the advantages and limits of this new solution. Specifically, the Monte Carlo simulations are realized with the GEANT4 (GEometry ANd Tracking) framework, and the specific simulation object is a collimation method based on separate blocks that can be brought closer together or farther apart, in order to reach and maintain specific spatial-resolution values at all source-detector distances. To verify the accuracy and faithfulness of these simulations, we performed experimental measurements with an identical setup and conditions. This confirms the power of the simulation as an extremely useful tool, especially where new technological solutions need to be studied, tested and analyzed before their practical realization. The final aim of this new collimation system is the improvement of SPECT techniques, with real control of the spatial resolution during tomographic acquisitions. This principle allowed us to simulate a tomographic acquisition of two capillaries of radioactive solution, in order to verify that the two can be clearly distinguished.
Thomas C. Henderson; Brandt Erickson; Travis Longoria; Edward Grant; Kyle Luthy; Leonardo Mattos; Matt Craver
2005-01-01
Biswas et al. (1) introduced a probabilistic approach to inference with limited information in sensor networks. They represented the sensor network as a Bayesian network and performed approximate inference using Markov chain Monte Carlo (MCMC). The goal is to robustly answer queries even under noisy or partial information. We propose an alternative method based on simple Monte Carlo
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.
Hula, Andreas; Montague, P Read; Dayan, Peter
2015-06-01
Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429
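The particular Monte-Carlo tree search variant used for the IPOMDP is not spelled out in the abstract; a generic UCT sketch for a toy deterministic task conveys the select/expand/simulate/backup cycle that such planners build on. All function and class names below are assumptions for illustration, not the authors' algorithm.

```python
import math
import random

class Node:
    def __init__(self):
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # running mean of backed-up returns

def uct_search(step, actions, horizon, n_iter=2000, c=1.4, seed=0):
    """Minimal Monte-Carlo tree search (UCT) for a deterministic toy task.

    step(state, a) -> (next_state, reward); planning starts from state ().
    Returns the most-visited root action.
    """
    rng = random.Random(seed)
    root = Node()

    def rollout(state, depth):
        total = 0.0
        while depth < horizon:           # random playout to the horizon
            state, r = step(state, rng.choice(actions))
            total += r
            depth += 1
        return total

    for _ in range(n_iter):
        node, state, depth, path = root, (), 0, []
        # Selection: descend through fully expanded nodes by UCB score
        while depth < horizon and len(node.children) == len(actions):
            a = max(actions, key=lambda a: node.children[a].value
                    + c * math.sqrt(math.log(node.visits + 1)
                                    / (node.children[a].visits + 1e-9)))
            state, r = step(state, a)
            path.append((node, a, r))
            node, depth = node.children[a], depth + 1
        # Expansion: try one untried action
        if depth < horizon:
            a = rng.choice([a for a in actions if a not in node.children])
            state, r = step(state, a)
            node.children[a] = Node()
            path.append((node, a, r))
            node, depth = node.children[a], depth + 1
        # Simulation + backup of the discounted-free return
        ret = rollout(state, depth)
        for parent, a, r in reversed(path):
            ret += r
            child = parent.children[a]
            child.visits += 1
            child.value += (ret - child.value) / child.visits
            parent.visits += 1
    return max(root.children, key=lambda a: root.children[a].visits)
```

On a task that simply rewards action 1 at every step, the search quickly concentrates its visits on that action, which is the behaviour the planner exploits when inverting observed choices.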
Efficient 3D kinetic Monte Carlo method for modeling of molecular structure and dynamics.
Panshenskov, Mikhail; Solov'yov, Ilia A; Solov'yov, Andrey V
2014-06-30
Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and materials science. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with tailored properties, for example bacterial colonies or nanodevices with desired properties. Theoretical studies and simulations provide an important tool for unraveling the principles of self-organization and have therefore recently gained increasing interest. The present article features an extension of the popular code MBN EXPLORER (MesoBioNano Explorer) aiming to provide a universal approach to studying self-assembly phenomena in biology and nanoscience. In particular, this extension is a highly parallelized module of MBN EXPLORER that allows simulating stochastic processes using the kinetic Monte Carlo approach in three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it to an exemplary system. PMID:24752427
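The rejection-free update at the heart of kinetic Monte Carlo can be sketched in a few lines: draw an exponential waiting time from the total escape rate, then fire one channel with probability proportional to its rate. This is a generic Gillespie-type loop on an abstract state space, not MBN EXPLORER's parallel 3-D implementation; the helper names are assumptions.

```python
import math
import random

def kmc_run(rates_of, initial, t_end, seed=0):
    """Minimal kinetic Monte Carlo (Gillespie-type) trajectory.

    rates_of(state) -> list of (rate, next_state) channels.
    Returns the total residence time spent in each visited state.
    """
    rng = random.Random(seed)
    t, state = 0.0, initial
    occupancy = {}
    while t < t_end:
        channels = rates_of(state)
        total = sum(r for r, _ in channels)
        # Exponential waiting time with the total escape rate
        dt = min(-math.log(1.0 - rng.random()) / total, t_end - t)
        occupancy[state] = occupancy.get(state, 0.0) + dt
        t += dt
        # Pick the firing channel with probability proportional to its rate
        u = rng.random() * total
        for r, nxt in channels:
            u -= r
            if u <= 0.0:
                state = nxt
                break
    return occupancy
```

For a two-state hop A ⇌ B with forward rate 2 and backward rate 1, the long-time fraction spent in B converges to 2/3, the detailed-balance value, which is a quick sanity check on the loop.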
NASA Technical Reports Server (NTRS)
Jensen, K. A.; Ripoll, J.-F.; Wray, A. A.; Joseph, D.; ElHafi, M.
2004-01-01
Five computational methods for solving the radiative transfer equation in an absorbing-emitting, non-scattering gray medium were compared on a 2 m JP-8 pool fire. The temperature and absorption coefficient fields were taken from a synthetic fire due to the lack of a complete set of experimental data for fires of this size. These quantities were generated by a code that has been shown to agree well with the limited quantity of relevant data in the literature. Reference solutions to the governing equation were determined using the Monte Carlo method and a ray-tracing scheme with high angular resolution. Solutions using the discrete transfer method, the discrete ordinates method (DOM) with both S4 and LC11 quadratures, and a moment model using the M1 closure were compared to the reference solutions in both isotropic and anisotropic regions of the computational domain. DOM LC11 is shown to be more accurate than the commonly used S4 quadrature, especially in anisotropic regions of the fire domain. This represents the first study in which the M1 method was applied to a combustion problem in a complex three-dimensional geometry. The M1 results agree well with the other solution techniques, which is encouraging for future applications to similar problems since M1 is computationally the least expensive technique. Moreover, the M1 results are comparable to DOM S4.
Kaplan, I G; Sukhonosov VYa
1991-07-01
A numerical computer simulation of the interaction of electrons with liquid water and water vapor was performed, beginning with the absorption of the energy of ionizing radiation and including the chemical changes in the medium. The specific features of the liquid phase compared with the gaseous phase were taken into account, among them the decrease of the ionization potential and collective excitations of the plasmon type. The mass stopping powers and ranges of electrons in liquid water and vapor were calculated. Within the framework of the stochastic model, the kinetics of water radiolysis in the picosecond range was calculated by the Monte Carlo method. The mechanism of water radiolysis was established with electron-ion recombination and the reactions of quasi-free and solvated electrons taken into account. PMID:2068265
NASA Astrophysics Data System (ADS)
Liu, Lang
2015-05-01
The unitary correlation operator method (UCOM) and the similarity renormalization group (SRG) are compared and discussed in the framework of no-core Monte Carlo shell model (MCSM) calculations for 3H and 4He. Spurious center-of-mass motion is treated by Lawson's prescription in the MCSM calculations. The results with both transformed interactions show good suppression of spurious center-of-mass motion with proper values of the Lawson prescription parameter βc.m.. The UCOM potentials obtain faster convergence of the ground-state total energy than the SRG potentials in the MCSM calculations, which differs from the no-core shell model (NCSM) case. These differences are discussed and analyzed in terms of the truncation schemes in the MCSM and NCSM, as well as the properties of the SRG and UCOM potentials. Supported by Fundamental Research Funds for the Central Universities (JUSRP1035), National Natural Science Foundation of China (11305077)
Wang, Lei; Li, Ningning; Xiao, Shiyan; Liang, Haojun
2014-07-01
The phase transition of a single flexible homopolymer chain in the dilute-solution limit is systematically investigated using a coarse-grained model. The replica exchange Monte Carlo method is used to enhance the exploration of conformation space, allowing a detailed investigation of the phase behavior of the system. With the designed potentials, the coil-globule transition and the liquid-solid-like transition are identified, and the transition temperatures are measured through conformational and thermodynamic analyses. Additionally, by extrapolating the coil-globule transition temperature Tθ and the liquid-solid-like transition temperature TL-S to the thermodynamic limit, N → ∞, we found no "tri-critical" point in the current model. PMID:24961896
NASA Astrophysics Data System (ADS)
Nasser, Hassan; Marre, Olivier; Cessac, Bruno
2013-03-01
Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles.
NASA Astrophysics Data System (ADS)
Makri, T.; Yakoumakis, E.; Papadopoulou, D.; Gialousis, G.; Theodoropoulos, V.; Sandilos, P.; Georgiou, E.
2006-10-01
Seeking to assess the radiation risk associated with radiological examinations in neonatal intensive care units, thermoluminescence dosimetry was used to measure the entrance surface dose (ESD) in 44 AP chest and 28 AP combined chest-abdominal exposures of a sample of 60 neonates. The mean values of ESD were found to be 44 ± 16 µGy and 43 ± 19 µGy, respectively. The MCNP-4C2 code, with a mathematical phantom simulating a neonate and appropriate x-ray energy spectra, was employed to simulate the AP chest and AP combined chest-abdominal exposures. Equivalent organ dose per unit ESD and energy imparted per unit ESD are presented in tabular form. Combined with the ESD measurements, these calculations yield an effective dose of 10.2 ± 3.7 µSv, regardless of sex, and an imparted energy of 18.5 ± 6.7 µJ for the chest radiograph. The corresponding results for the combined chest-abdominal examination are 14.7 ± 7.6 µSv (males)/17.2 ± 7.6 µSv (females) and 29.7 ± 13.2 µJ. The calculated total risk per radiograph was low, ranging between 1.7 and 2.9 per million neonates and being slightly higher for females. The results of this study are in good agreement with previous studies, especially in view of the diversity of calculation methods used.
NASA Astrophysics Data System (ADS)
Miller, G. L.; Lu, D.; Ye, M.; Curtis, G. P.; Mendes, B. S.; Draper, D.
2010-12-01
Parametric uncertainty in groundwater modeling is commonly assessed using the first-order second-moment method, which yields linear confidence/prediction intervals. More advanced techniques are able to produce nonlinear confidence/prediction intervals that are more accurate than the linear intervals for nonlinear models. However, both methods rest on certain assumptions, such as normality of the model parameters. We developed a Markov chain Monte Carlo (MCMC) method to investigate the parametric distributions and confidence/prediction intervals directly. The MCMC results are used to evaluate the accuracy of the linear and nonlinear confidence/prediction intervals. The MCMC method is applied to the nonlinear surface complexation models developed by Kohler et al. (1996) to simulate reactive transport of uranium(VI). The breakthrough data of Kohler et al. (1996), obtained from a series of column experiments, are used as the basis of the investigation. The calibrated parameters of the models are the equilibrium constants of the surface complexation reactions and the fractions of functional groups. The Morris sensitivity analysis shows that all of the parameters exhibit highly nonlinear effects on the simulation. The MCMC method is combined with a traditional optimization method to improve computational efficiency. The parameters of the surface complexation models are first calibrated using a global optimization technique, multi-start quasi-Newton BFGS, which employs an approximation to the Hessian. Parameter correlation is measured by the covariance matrix computed via the Fisher information matrix. Parameter ranges are necessary to improve convergence of the MCMC simulation, even when the adaptive Metropolis method is used. The MCMC results indicate that the parameters do not necessarily follow a normal distribution and that the nonlinear intervals are more accurate than the linear intervals for the nonlinear surface complexation models.
In comparison with the linear and nonlinear prediction intervals, the MCMC prediction intervals are more robust for simulating breakthrough curves that were not used for the parameter calibration and the estimation of parameter distributions.
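The random-walk Metropolis step that such an MCMC calibration builds on can be sketched as follows. This is a one-parameter toy with a synthetic exponential-decay model, not the surface-complexation models, adaptive Metropolis, or BFGS initialization of the study; all names are illustrative.

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.1, seed=0):
    """Random-walk Metropolis sampler over a single scalar parameter.

    log_post -- unnormalized log-posterior density
    Returns the chain of sampled parameter values.
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)        # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp)), done in log space
        if math.log(1.0 - rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain
```

Fitting a decay rate k to synthetic data y(t) = exp(-kt) under a Gaussian likelihood, the post-burn-in chain mean recovers the generating value, and the chain's spread gives the nonlinear interval directly, without any normality assumption.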
Monte Carlo and Quasi-Monte Carlo for Statistics
Owen, Art
Monte Carlo and Quasi-Monte Carlo for Statistics. Art B. Owen. Abstract: This article reports on problems in statistics where Monte Carlo methods can be used, with a special emphasis on areas where Quasi-Monte Carlo ideas apply. This survey is aimed at exposing good problems in statistics to researchers in Quasi-Monte Carlo.
AN ASSESSMENT OF MCNP WEIGHT WINDOWS
J. S. HENDRICKS; C. N. CULBERTSON
2000-01-01
The weight-window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP has recently been rewritten. In particular, it is now possible to generate weight-window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements: an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.
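The weight-window game itself, splitting heavy particles and playing Russian roulette on light ones, can be sketched independently of MCNP. This is a minimal single-particle version with assumed parameter names, not MCNP's implementation.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng, w_survive=None):
    """Play the weight-window game on a single particle.

    Returns the list of particle weights that continue the history:
    several copies after splitting, one after surviving roulette,
    or an empty list when the particle is killed.
    """
    if w_survive is None:
        w_survive = 0.5 * (w_low + w_high)
    if weight > w_high:
        # Too heavy: split into n copies whose weights fall inside the window
        n = min(int(weight / w_high) + 1, 1000)
        return [weight / n] * n
    if weight < w_low:
        # Too light: Russian roulette, preserving the expected weight
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []
    return [weight]
```

Both branches conserve expected weight (splitting exactly, roulette on average), which is what makes the game an unbiased variance-reduction device: the importance function only decides where the window sits.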
NASA Astrophysics Data System (ADS)
Benhdech, Yassine; Beaumont, Stéphane; Guédon, Jean-Pierre; Torfeh, Tarraf
2010-04-01
In this paper, we extend the R&D program named DTO-DC (Digital Object Test and Dosimetric Console), whose goal is to develop an efficient, accurate and complete method for the dosimetric quality control (QC) of radiotherapy treatment planning systems (TPS). This method is mainly based on Digital Test Objects (DTOs) and on Monte Carlo (MC) simulation using the PENELOPE code [1]. These benchmark simulations can advantageously replace the experimental measurements typically used as reference for comparison with TPS-calculated dose. Indeed, MC simulations rather than dosimetric measurements allow QC without tying up treatment devices and offer, in many situations (e.g., heterogeneous media, lack of scattering volume), better accuracy than dose measurements with the classical dosimetry equipment of a radiation therapy department. Furthermore, using MC simulations and DTOs, i.e. entirely numerical QC tools, simplifies QC implementation and enables process automation, allowing radiotherapy centres to carry out a more complete and thorough QC. The DTO-DC program was established primarily on an ELEKTA accelerator (photon mode) using non-anatomical DTOs [2]. Today our aim is to complete and apply this program on a VARIAN accelerator (photon and electron modes) using anatomical DTOs. First, we developed, modeled and created three anatomical DTOs in DICOM format: 'Head and Neck', Thorax and Pelvis. We parallelized the PENELOPE code using MPI libraries to accelerate the calculations, and we modeled the head of the Varian Clinac 2100CD (photon mode) in PENELOPE geometry. Then, to implement the method, we calculated the dose distributions in the Pelvis DTO using PENELOPE and the ECLIPSE TPS. Finally, we compared the simulated and calculated dose distributions using the relative difference proposed by Venselaar [3]. The results of this work demonstrate the feasibility of the method, which provides a more accurate and more easily achievable QC.
Nonetheless, the method, implemented on ECLIPSE TPS version 8.6.15, revealed large discrepancies (11%) between the Monte Carlo simulations and the AAA algorithm calculations, especially in air-equivalent and bone-equivalent regions. Our work will be completed by dose measurements (with film) in the presence of heterogeneities to validate the MC simulations.
NASA Technical Reports Server (NTRS)
Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.
2013-01-01
The 2013-2022 Decadal Survey for planetary exploration has identified probe missions to Uranus and Saturn as high priorities. This work examines the uncertainty in determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) computational fluid dynamics code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters most influence the uncertainty. A review of present best practices for input parameters (e.g. transport coefficients and vibrational relaxation times) is also conducted. It is found that the 2σ uncertainty for heating on Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty determined primarily by diffusion and H2 recombination within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as much as 18%. The catalytic wall model can contribute more than a 3x change in heat flux and a 20% variation in film coefficient. Therefore, coupled material-response/fluid-dynamics models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is used. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2σ uncertainty for convective heating was less than 3.7%. The major uncertainty driver depended on shock temperature/velocity, changing from boundary-layer thermal conductivity to diffusivity and then to shock-layer ionization rate as velocity increases.
While radiative heating for Uranus entry was negligible, the nominal solution for Saturn showed up to 20% radiative heating at the highest velocity examined. The radiative heating followed a non-normal distribution, with up to a 3x variation in magnitude. This uncertainty is driven by the H2 dissociation rate, as H2 that persists in the hot non-equilibrium zone contributes significantly to radiation.
A method for photon beam Monte Carlo multileaf collimator particle transport
NASA Astrophysics Data System (ADS)
Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe
2002-09-01
Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/-1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts leaf-edge tongue-and-groove dose effect within +/-1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. 
The dose through a static leaf tip is also predicted generally within +/-1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
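The attenuation part of such a simplified transport model, accumulating the optical depth over all traversed regions before deciding whether the photon interacts, can be sketched as follows. The first-Compton-scatter sampling of the paper's MLC model is omitted, and the names and values below are illustrative assumptions.

```python
import math
import random

def transmit_through_segments(segments, rng):
    """Decide whether a photon crosses a stack of attenuating regions.

    segments -- list of (thickness_cm, mu_per_cm) pairs; the attenuation
    path is accumulated over all regions first, mirroring the idea of
    summing material thicknesses along the particle's traversal.
    Returns True when the photon emerges uncollided.
    """
    optical_depth = sum(t * mu for t, mu in segments)
    return rng.random() < math.exp(-optical_depth)
```

Averaging the outcome over many photon histories estimates the uncollided leakage fraction exp(-Σ μ·t), which is the quantity the leaf-leakage comparisons above are built on.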
ERIC Educational Resources Information Center
Carsey, Thomas M.; Harden, Jeffrey J.
2015-01-01
Graduate students in political science come to the discipline interested in exploring important political questions, such as "What causes war?" or "What policies promote economic growth?" However, they typically do not arrive prepared to address those questions using quantitative methods. Graduate methods instructors must…
Finite element method based Monte Carlo filters for structural system identification
H. A. Nasrellah; C. S. Manohar
2011-01-01
The paper proposes a strategy for combining two powerful computational procedures, namely, the finite element method (FEM) for structural analysis and particle filtering for dynamic state estimation, to tackle the problem of structural system parameter identification based on a set of noisy measurements on static and (or) dynamic structural responses. The proposed identification method automatically inherits the wide ranging capabilities
Tokmakian, Robin
Advanced statistical methods: the classical way to look at uncertainties is via standard Monte Carlo; an alternative is to build a statistical approximation to the model, known as an emulator. Such methods are of interest for applying statistical analysis to fairly realistic and complex models.
Shi, C. Y.; Xu, X. George; Stabin, Michael G. [Department of Radiation Oncology, University of Texas Health Science Center, San Antonio, Texas 78229 (United States); Nuclear Engineering and Engineering Physics Program, Rensselaer Polytechnic Institute, Room 1-11, NES Building, Tibbits Avenue, Troy, New York 12180 (United States); Department of Radiology and Radiological Sciences, Vanderbilt University, Nashville, Tennessee 37232-2675 (United States)
2008-07-15
Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
Booth, George H; Cleland, Deidre; Thom, Alex J W; Alavi, Ali
2011-08-28
The full configuration interaction quantum Monte Carlo (FCIQMC) method, as well as its "initiator" extension (i-FCIQMC), is used to tackle the complex electronic structure of the carbon dimer across the entire dissociation reaction coordinate, as a prototypical example of a strongly correlated molecular system. Various basis sets of increasing size up to the large cc-pVQZ are used, spanning a fully accessible N-electron basis of over 10^12 Slater determinants, and the accuracy of the method is demonstrated in each basis set. Convergence to the FCI limit is achieved in the largest basis with only O[10^7] walkers within random error bars of a few tenths of a millihartree across the binding curve, and extensive comparisons to FCI, CCSD(T), MRCI, and CEEIS results are made where possible. A detailed exposition of the convergence properties of the FCIQMC methods is provided, considering convergence with elapsed imaginary time, number of walkers and size of the basis. Various symmetries which can be incorporated into the stochastic dynamics, beyond the standard abelian point group symmetry and spin polarisation, are also described. These can have significant benefit to the computational effort of the calculations, as well as the ability to converge to various excited states. The results presented demonstrate a new benchmark accuracy in basis-set energies for systems of this size, significantly improving on previous state of the art estimates. PMID:21895156
NASA Astrophysics Data System (ADS)
Nakayama, Keiji; Tanaka, Masaaki
2012-12-01
Previously, a microplasma was discovered in the rear gap of a sliding contact. This plasma is called a ‘triboplasma’, the behaviour of which obeys the Paschen law in a gas discharge. Based on experimental findings, the triboplasma has been explained as being generated by the discharge of ambient air in the intense electric field caused by tribocharging. In this report, the mechanism of triboplasma generation is theoretically analysed using the particle-in-cell/Monte Carlo collision (PIC/MCC) simulation method for the triboplasma generated in the tribosystem, where a diamond pin slides against a sapphire disc in ambient air. Two-dimensional sideward density distributions of the electrons, N_2^+ ions and O_2^+ ions in the rear gap of the sliding contact are obtained theoretically by the PIC/MCC method. These calculated particle distributions coincided well with the triboplasma distributions observed experimentally. The previously proposed triboplasma generation due to gas discharging is thus verified theoretically by the PIC/MCC simulation method.
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo H.; Good, Brian; Noebe, Ronald D.; Honecy, Frank; Abel, Phillip
1999-01-01
Large-scale simulations of dynamic processes at the atomic level have developed into one of the main areas of work in computational materials science. Until recently, severe computational restrictions, as well as the lack of accurate methods for calculating the energetics, resulted in slower growth in the area than that required by current alloy design programs. The Computational Materials Group at the NASA Lewis Research Center is devoted to the development of powerful, accurate, economical tools to aid in alloy design. These include the BFS (Bozzolo, Ferrante, and Smith) method for alloys (ref. 1) and the development of dedicated software for large-scale simulations based on Monte Carlo- Metropolis numerical techniques, as well as state-of-the-art visualization methods. Our previous effort linking theoretical and computational modeling resulted in the successful prediction of the microstructure of a five-element intermetallic alloy, in excellent agreement with experimental results (refs. 2 and 3). This effort also produced a complete description of the role of alloying additions in intermetallic binary, ternary, and higher order alloys (ref. 4).
Self-Learning Off-Lattice Kinetic Monte Carlo method as applied to growth on metal surfaces
NASA Astrophysics Data System (ADS)
Trushin, Oleg; Kara, Abdelkader; Rahman, Talat
2007-03-01
We propose a new development in the Self-Learning Kinetic Monte Carlo (SLKMC) method with the goal of improving the accuracy with which atomic mechanisms controlling diffusive processes on metal surfaces may be identified. This is important for the diffusion of small clusters (2-20 atoms) in which atoms may occupy off-lattice positions. Such a procedure is also necessary for the consideration of heteroepitaxial growth. The new technique combines an earlier version of SLKMC [1] with the inclusion of off-lattice occupancy. This allows us to include arbitrary positions of adatoms in the modeling and makes the simulations more realistic and reliable. We have tested this new approach for the diffusion of small 2D Cu clusters on Cu(111) and found good performance and satisfactory agreement with results obtained from the previous version of SLKMC. The new method also helped reveal a novel atomic mechanism contributing to cluster migration. We have also applied this method to study the diffusion of Cu clusters on Ag(111), and find that Cu atoms generally prefer to occupy off-lattice sites. [1] O. Trushin, A. Kara, A. Karim, T. S. Rahman, Phys. Rev. B (2005)
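The event-selection and clock-advance loop at the core of any kinetic Monte Carlo scheme (stripped of the self-learning and off-lattice machinery, with assumed hop rates for a single adatom on a 1D lattice) can be sketched as:

```python
import math
import random

random.seed(4)

# Rejection-free kinetic Monte Carlo for one adatom hopping on a 1D
# lattice (assumed, equal left/right hop rates).
rate_left, rate_right = 1.0, 1.0
pos, t = 0, 0.0
for _ in range(10000):
    total = rate_left + rate_right
    # pick an event with probability proportional to its rate
    if random.random() * total < rate_left:
        pos -= 1
    else:
        pos += 1
    # advance the clock by an exponentially distributed waiting time
    t += -math.log(1.0 - random.random()) / total
```

An off-lattice variant would replace the fixed unit moves by relaxed continuous positions and rebuild the rate catalogue after every event, which is essentially what the self-learning scheme automates.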
Monte Carlo variance reduction
NASA Technical Reports Server (NTRS)
Byrn, N. R.
1980-01-01
Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.
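A minimal illustration of Monte Carlo variance reduction, using importance sampling on a Gaussian tail probability (a toy example, not the radiation-transport program described above):

```python
import math
import random

random.seed(0)

def phi(x):
    # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def naive_tail(n, t=3.0):
    # plain (forward) Monte Carlo estimate of P(Z > t): very few samples
    # land in the tail, so the estimate is noisy for a given budget
    return sum(1 for _ in range(n) if random.gauss(0, 1) > t) / n

def importance_tail(n, t=3.0):
    # sample from the shifted proposal N(t, 1) instead, and reweight each
    # hit by the likelihood ratio phi(x) / phi(x - t)
    total = 0.0
    for _ in range(n):
        x = random.gauss(t, 1)
        if x > t:
            total += phi(x) / phi(x - t)
    return total / n

est = importance_tail(20000)
# exact value: P(Z > 3) ≈ 0.001350
```

For the same sample count, the importance-sampling estimator has a variance orders of magnitude below the naive indicator average, which is the point of the technique.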
Biotic indices have been used to assess biological condition by dividing index scores into condition categories. Historically the number of categories has been based on professional judgement. Alternatively, statistical methods such as power analysis can be used to determine the ...
F. Einar Kruis; Jianming Wei; Till van der Zwaag; Stefan Haep
No method is currently available to combine stochastic, particle-based PBE modeling by means of Monte-Carlo simulation of individual particles and CFD. CFD is based on solving partial differential equations numerically, whereas Monte-Carlo simulation of the PBE is based on converting kinetic rate equations into probabilities and selecting the relevant events by means of random numbers. A joint mathematical framework is thus
Matilainen, Kaarina; Mäntysaari, Esa A.; Lidauer, Martin H.; Strandén, Ismo; Thompson, Robin
2013-01-01
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings. PMID:24339886
A Monte-Carlo study of genetic algorithm initial population generation methods
Raymond R. Hill; Wright-Patterson AFB
1999-01-01
We briefly describe genetic algorithms (GAs) and focus attention on initial population generation methods for two-dimensional knapsack problems. Based on work describing the probability a random solution vector is feasible for 0-1 knapsack problems, we propose a simple heuristic for randomly generating good initial populations for genetic algorithm applications to two-dimensional knapsack problems. We report on an experiment comparing
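A minimal sketch of the kind of heuristic described: draw random 0-1 vectors with a small inclusion probability so they are likely to be feasible, and keep only feasible ones for the initial population. The two-constraint instance and the probability p below are hypothetical, not values from the paper.

```python
import random

random.seed(1)

# hypothetical two-constraint (two-dimensional) 0-1 knapsack instance
weights1 = [4, 7, 2, 9, 5, 6]
weights2 = [3, 8, 6, 2, 7, 4]
cap1, cap2 = 15, 14

def feasible(x):
    return (sum(w * b for w, b in zip(weights1, x)) <= cap1
            and sum(w * b for w, b in zip(weights2, x)) <= cap2)

def random_solution(p):
    # include each item with probability p; a small p raises the chance
    # that the random vector satisfies both knapsack constraints
    return [1 if random.random() < p else 0 for _ in weights1]

def initial_population(size, p=0.3):
    pop = []
    while len(pop) < size:
        x = random_solution(p)
        if feasible(x):   # keep only feasible vectors for the GA to start from
            pop.append(x)
    return pop

pop = initial_population(20)
```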
A New Monte Carlo Filtering Method for the Diagnosis of Mission-Critical Failures
NASA Technical Reports Server (NTRS)
Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen
2009-01-01
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the settings most likely to cause a mission-critical failure. This research benchmarks two treatment learning methods against standard optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. It is shown that these treatment learners are both faster than traditional methods and show demonstrably better results.
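The core of a treatment-learning-style search, scoring single feature=value selections by how much they shift the class distribution, can be sketched on synthetic data. The feature names and the planted rule below are assumptions for illustration, not the RSE benchmarks or the actual treatment learners evaluated in the paper.

```python
import random

random.seed(10)

# synthetic data with a planted rule: failures concentrate where gain == "high"
data = []
for _ in range(2000):
    row = {"mode": random.choice(["a", "b"]),
           "gain": random.choice(["low", "high"]),
           "retry": random.choice([0, 1])}
    fail = random.random() < (0.6 if row["gain"] == "high" else 0.1)
    data.append((row, fail))

base = sum(f for _, f in data) / len(data)   # baseline failure rate

# score every single feature=value "treatment" by how much selecting on it
# lifts the failure rate above the baseline, and keep the best one
best, best_lift = None, 0.0
for feat in ["mode", "gain", "retry"]:
    for val in {r[feat] for r, _ in data}:
        subset = [f for r, f in data if r[feat] == val]
        lift = sum(subset) / len(subset) - base
        if lift > best_lift:
            best, best_lift = (feat, val), lift
```

Real treatment learners search conjunctions of such selections rather than single ones, but the lift-over-baseline scoring is the same idea.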
Improved methods for Monte Carlo estimation of the fisher information matrix
J. C. Spall
2008-01-01
The Fisher information matrix summarizes the amount of information in a set of data relative to the quantities of interest and forms the basis for the Cramer-Rao (lower) bound on the uncertainty in an estimate. There are many applications of the information matrix in modeling, systems analysis, and estimation. This paper presents a resampling-based method for computing the information matrix
On Monte Carlo methods for estimating the fisher information matrix in difficult problems
J. C. Spall
2009-01-01
The Fisher information matrix summarizes the amount of information in a set of data relative to the quantities of interest and forms the basis for the Cramer-Rao (lower) bound on the uncertainty in an estimate. There are many applications of the information matrix in modeling, systems analysis, and estimation. This paper presents a resampling-based method for computing the information matrix
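A minimal simulation-based Fisher information estimate in the spirit described, averaging squared scores over simulated datasets for the exactly solvable N(theta, 1) model. This is a toy identity-based estimator, not Spall's full resampling scheme.

```python
import random

random.seed(2)

def dataset_score(theta, xs):
    # gradient of the log-likelihood of iid N(theta, 1) observations
    return sum(x - theta for x in xs)

def mc_fisher(theta, n_datasets, m):
    # Fisher information = E[score^2]; estimate it by averaging the
    # squared score over datasets simulated at the parameter value theta
    total = 0.0
    for _ in range(n_datasets):
        xs = [random.gauss(theta, 1) for _ in range(m)]
        g = dataset_score(theta, xs)
        total += g * g
    return total / n_datasets

est = mc_fisher(0.0, 8000, m=50)
# the exact Fisher information of 50 iid N(theta, 1) draws is 50
```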
Convergence Proof for a Monte Carlo Method for Combinatorial Optimization Problems
Fidanova, Stefka
(COPs). The Ant Colony Optimization (ACO) is an MC method created to solve COPs efficiently. ACO algorithms are being applied successfully to diverse hard problems. The sampling is realized concurrently by a collection of differently instantiated replicas of the same ant
Markov chain Monte Carlo methods for family trees using a parallel processor
Bradford, Russell
of moderate complexity, these become infeasible when either the model, the pedigree, or both are more complex; the computational time required grows exponentially with complexity. The exact methods used are variants. A straightforward Gibbs sampler yields observations from an irreducible Markov chain. A single Gibbs step consists
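A single-site Gibbs sampler of the kind referred to can be sketched on a toy bivariate normal target (an assumed example, not the pedigree model): each coordinate is redrawn from its full conditional, and long-run averages recover the target's moments.

```python
import random

random.seed(3)

# Gibbs sampling from a bivariate normal with correlation rho:
# each full conditional is N(rho * other, 1 - rho^2)
rho = 0.8
s = (1.0 - rho * rho) ** 0.5
x = y = 0.0
xs, ys = [], []
for i in range(20000):
    x = random.gauss(rho * y, s)   # one Gibbs step: update each coordinate
    y = random.gauss(rho * x, s)   # from its full conditional
    if i >= 1000:                  # discard burn-in
        xs.append(x)
        ys.append(y)

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
vx = sum((a - mx) ** 2 for a in xs) / n
vy = sum((b - my) ** 2 for b in ys) / n
corr = cov / (vx * vy) ** 0.5   # should recover rho
```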
NASA Astrophysics Data System (ADS)
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-05-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced.
With all the techniques employed, we achieved computation time of less than 30 s including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use.
Peter, Silvia; Modregger, Peter; Fix, Michael K.; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco
2014-01-01
Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented that takes both particle- and wave-like properties of X-rays into consideration: a split approach combines a Monte Carlo (MC) based sample part with a wave-optics-based propagation part. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation of the framework shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can be used for the simulation of phase-sensitive X-ray imaging, for instance for the simulation of grating interferometry or propagation-based imaging. PMID:24763652
Harvey, J-P; Gheribi, A E; Chartrand, P
2011-08-28
The design of multicomponent alloys used in different applications based on specific thermo-physical properties determined experimentally or predicted from theoretical calculations is of major importance in many engineering applications. A procedure based on Monte Carlo simulations (MCS) and the thermodynamic integration (TI) method to improve the quality of the predicted thermodynamic properties calculated from classical thermodynamic calculations is presented in this study. The Gibbs energy function of the liquid phase of the Cu-Zr system at 1800 K has been determined based on this approach. The internal structure of Cu-Zr melts and amorphous alloys at different temperatures, as well as other physical properties were also obtained from MCS in which the phase trajectory was modeled by the modified embedded atom model formalism. A rigorous comparison between available experimental data and simulated thermo-physical properties obtained from our MCS is presented in this work. The modified quasichemical model in the pair approximation was parameterized using the internal structure data obtained from our MCS and the precise Gibbs energy function calculated at 1800 K from the TI method. The predicted activity of copper in Cu-Zr melts at 1499 K obtained from our thermodynamic optimization was corroborated by experimental data found in the literature. The validity of the amplitude of the entropy of mixing obtained from the in silico procedure presented in this work was analyzed based on the thermodynamic description of hard sphere mixtures. PMID:21895194
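The thermodynamic integration step, integrating the ensemble average of dU/dlambda over the coupling parameter, can be sketched on an exactly solvable harmonic toy (an assumed example, not the MEAM Cu-Zr system of the paper):

```python
import math
import random

random.seed(9)

# toy system: U_lambda(x) = k(lambda) * x^2 / 2 with k interpolating k0 -> k1,
# beta = 1; the exact free-energy difference is 0.5 * ln(k1 / k0)
k0, k1 = 1.0, 4.0

def mean_du_dlambda(lam, n=20000):
    k = k0 + lam * (k1 - k0)
    # sample x from the Boltzmann distribution exp(-k x^2 / 2), i.e. N(0, 1/k)
    acc = 0.0
    for _ in range(n):
        x = random.gauss(0.0, 1.0 / math.sqrt(k))
        acc += 0.5 * (k1 - k0) * x * x    # dU/dlambda at this sample
    return acc / n

# trapezoidal quadrature of <dU/dlambda>_lambda over lambda in [0, 1]
lams = [i / 10 for i in range(11)]
vals = [mean_du_dlambda(l) for l in lams]
dF = sum(0.1 * (vals[i] + vals[i + 1]) / 2 for i in range(10))
# exact answer: 0.5 * ln(4) ≈ 0.693
```

In a real TI calculation the direct Gaussian sampling is replaced by Monte Carlo simulation at each lambda, but the quadrature over the coupling parameter is the same.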
NASA Astrophysics Data System (ADS)
Yeh, C. Y.; Lee, C. C.; Chao, T. C.; Lin, M. H.; Lai, P. A.; Liu, F. H.; Tung, C. J.
2014-02-01
This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for the nasopharyngeal carcinoma (NPC) patients treated with the intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed in the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0-2.3%). The mean difference for the conformity index was 0.01 (range: 0.0-0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting.
Park, H. J. [Korea Atomic Energy Research Inst., Daedeokdaero 989-111, Yuseong-gu, Daejeon (Korea, Republic of); Shim, H. J.; Joo, H. G.; Kim, C. H. [Dept. of Nuclear Engineering, Seoul National Univ., 1 Gwanak-ro, Gwanak-gu, Seoul (Korea, Republic of)
2012-07-01
The purpose of this paper is to quantify uncertainties of fuel pin cell or fuel assembly (FA) homogenized few group diffusion theory constants generated from the B1 theory-augmented Monte Carlo (MC) method. A mathematical formulation of the first kind is presented to quantify uncertainties of the few group constants in terms of the two major sources of the MC method: statistical and nuclear cross section and nuclide number density input data uncertainties. The formulation is incorporated into the Seoul National Univ. MC code McCARD. It is then used to compute the uncertainties of the burnup-dependent homogenized two group constants of a low-enriched UO2 fuel pin cell and a PWR FA on the condition that nuclear cross section input data of U-235 and U-238 from the JENDL 3.3 library and nuclide number densities from the solution to fuel depletion equations have uncertainties. The contribution of the MC input data uncertainties to the uncertainties of the two group constants of the two fuel systems is separated from that of the statistical uncertainties. The utilities of uncertainty quantifications are then discussed from the standpoints of safety analysis of existing power reactors, development of new fuel or reactor system design, and improvement of covariance files of the evaluated nuclear data libraries. (authors)
Hakimabad, Hashem Miri; Motavalli, Lalle Rafat
2008-01-01
To design a diagnostic or therapeutic irradiation programme, there is a need to estimate the absorbed dose. In this investigation, specific absorbed fractions (SAFs) were calculated based on Cristy and Eckerman's analytical adult phantom, using the MCNP4C Monte Carlo code. SAFs were estimated with uncertainty <3%, for about 600 source organ-target organ pairs at 12 photon energies (these data are available at http://www.um.ac.ir/~mirihakim). These results were then compared with Cristy and Eckerman's, which were based on direct Monte Carlo, reciprocity principle and point source kernel methods. Agreements and disagreements between them for different states were also discussed. PMID:17951243
Finch, W. Holmes; Bolin, Jocelyn H.; Kelley, Ken
2014-01-01
Classification using standard statistical methods such as linear discriminant analysis (LDA) or logistic regression (LR) presumes knowledge of group membership prior to the development of an algorithm for prediction. However, in many real world applications members of the same nominal group might in fact come from different subpopulations on the underlying construct. For example, individuals diagnosed with depression will not all have the same levels of this disorder, though for the purposes of LDA or LR they will be treated in the same manner. The goal of this simulation study was to examine the performance of several methods for group classification in the case where within group membership was not homogeneous. For example, suppose there are 3 known groups but within each group two unknown classes. Several approaches were compared, including LDA, LR, classification and regression trees (CART), generalized additive models (GAM), and mixture discriminant analysis (MIXDA). Results of the study indicated that CART and mixture discriminant analysis were the most effective tools for situations in which known groups were not homogeneous, whereas LDA, LR, and GAM had the highest rates of misclassification. Implications of these results for theory and practice are discussed. PMID:24904445
Bykov, A V; Priezzhev, A V; Myllylae, Risto A
2011-06-30
Two-dimensional spatial intensity distributions of diffuse scattering of near-infrared laser radiation from a strongly scattering medium, whose optical properties are close to those of skin, are obtained using Monte Carlo simulation. The medium contains a cylindrical inhomogeneity with the optical properties, close to those of blood. It is shown that stronger absorption and scattering of light by blood compared to the surrounding medium leads to the fact that the intensity of radiation diffusely reflected from the surface of the medium under study and registered at its surface has a local minimum directly above the cylindrical inhomogeneity. This specific feature makes the method of spatially-resolved reflectometry potentially applicable for imaging blood vessels and determining their sizes. It is also shown that blurring of the vessel image increases almost linearly with increasing vessel embedment depth. This relation may be used to determine the depth of embedment provided that the optical properties of the scattering media are known. The optimal position of the sources and detectors of radiation, providing the best imaging of the vessel under study, is determined. (biophotonics)
Mallory, Joel D; Mandelshtam, Vladimir A
2015-01-01
The Diffusion Monte Carlo (DMC) method is applied to the water monomer, dimer, and hexamer, using q-TIP4P/F, one of the simplest empirical water models with flexible monomers. The bias in the time step ($\Delta\tau$) and population size ($N_w$) is investigated. For the binding energies, the bias in $\Delta\tau$ cancels nearly completely, while a noticeable bias in $N_w$ still remains. However, for the isotope shift (e.g., in the dimer binding energies between (H$_2$O)$_2$ and (D$_2$O)$_2$) the systematic errors in $N_w$ do cancel. Consequently, very accurate results for the latter (within $\sim 0.01$ kcal/mol) are obtained with relatively moderate numerical effort ($N_w\sim 10^3$). For the water hexamer and its (D$_2$O)$_6$ isotopomer the DMC results as a function of $N_w$ are examined for the cage and prism isomers. For a given isomer, the issue of the walker population leaking out of the corresponding basin of attraction is addressed by using appropriate geometric constraints. The population size bias f...
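A bare-bones DMC loop exposing the two bias parameters discussed, the time step dtau and the target walker population n_w, can be sketched for the 1D harmonic oscillator (a toy stand-in for the water systems; the population-control gain is an assumed choice):

```python
import math
import random

random.seed(5)

def dmc(dtau=0.01, n_w=500, n_steps=3000):
    # walkers diffuse, then branch with weight exp(-dtau * (V - e_ref));
    # e_ref is adjusted every step to hold the population near n_w
    walkers = [0.0] * n_w
    e_ref = 0.5
    samples = []
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += random.gauss(0.0, math.sqrt(dtau))          # free diffusion
            w = math.exp(-dtau * (0.5 * x * x - e_ref))      # branching weight
            new.extend([x] * int(w + random.random()))       # stochastic rounding
        walkers = new or [0.0]
        v_avg = sum(0.5 * x * x for x in walkers) / len(walkers)
        e_ref = v_avg + 10.0 * math.log(n_w / len(walkers))  # population control
        if step >= 500:                                      # discard equilibration
            samples.append(v_avg)
    return sum(samples) / len(samples)

e0 = dmc()
# exact ground-state energy of the 1D harmonic oscillator is 0.5
```

Repeating the run over a grid of dtau and n_w values exposes the same two systematic biases the paper quantifies for the water systems.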
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis will review distribution transformer load models developed in the IEC 60076 standard and apply the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy, thus increasing costs to utilities and consumers. A Monte-Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate will be developed, based on the energy needs of plug-in vehicles loading a residential transformer.
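The scenario-sweep idea can be sketched with assumed load parameters; the base load, charging rate, rating, and charging window below are illustrative placeholders, not values from the study or the IEC 60076 model:

```python
import random

random.seed(6)

# illustrative parameters (assumed, not from the study)
BASE_LOAD_KW = 12.0   # aggregate evening household load on the transformer
CHARGE_KW = 3.3       # per-vehicle charging rate
RATING_KW = 25.0      # transformer rating
N_VEHICLES = 5
CHARGE_HOURS = 4.0

def scenario():
    # each vehicle starts charging at a random time between 17:00 and 23:00
    starts = [random.uniform(17.0, 23.0) for _ in range(N_VEHICLES)]
    peak = 0.0
    for step in range(40):                    # scan 17:00-27:00 in 15-min steps
        hour = 17.0 + 0.25 * step
        charging = sum(1 for s in starts if s <= hour < s + CHARGE_HOURS)
        peak = max(peak, BASE_LOAD_KW + CHARGE_KW * charging)
    return peak

peaks = [scenario() for _ in range(1000)]
overload_frac = sum(1 for p in peaks if p > RATING_KW) / len(peaks)
```

A full study would feed the sampled load profile into a thermal aging model rather than just counting overload scenarios, but the Monte-Carlo sweep over start times and fleet sizes is the same structure.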
Washington at Seattle, University of - Department of Physics, Electroweak Interaction Research Group
Towards Monte Carlo Simulations on Large Nuclei · August 2014. A published method of performing variational Monte Carlo calculations to compute properties of neutron matter comprised of up
Mickael, M.; Gardner, R.P.; Verghese, K.
1988-07-01
An improved method for calculating the total probability of particle scattering within the solid angle subtended by finite detectors is developed, presented, and tested. The limiting polar and azimuthal angles subtended by the detector are measured from the direction that most simplifies their calculation rather than from the incident particle direction. A transformation of the particle scattering probability distribution function (pdf) is made to match the transformation of the direction from which the limiting angles are measured. The particle scattering probability to the detector is estimated by evaluating the integral of the transformed pdf over the range of the limiting angles measured from the preferred direction. A general formula for transforming the particle scattering pdf is derived from basic principles and applied to four important scattering pdf's; namely, isotropic scattering in the Lab system, isotropic neutron scattering in the center-of-mass system, thermal neutron scattering by the free gas model, and gamma-ray Klein-Nishina scattering. Some approximations have been made to these pdf's to enable analytical evaluations of the final integrals. These approximations are shown to be valid over a wide range of energies and for most elements. The particle scattering probability to spherical, planar circular, and right circular cylindrical detectors has been calculated using the new and previously reported direct approach. Results indicate that the new approach is valid and is computationally faster by orders of magnitude.
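The scattering probability to the detector can be checked by direct Monte Carlo for the simplest case named in the abstract, isotropic scattering in the Lab system, with a spherical detector (assumed radius and distance below):

```python
import math
import random

random.seed(7)

# spherical detector of radius r at distance d from the scattering point;
# for isotropic Lab-system scattering the exact probability is
# p = (1 - cos(theta_max)) / 2 with theta_max = asin(r / d)
r, d = 1.0, 5.0
theta_max = math.asin(r / d)
p_exact = (1.0 - math.cos(theta_max)) / 2.0

hits, n = 0, 200000
for _ in range(n):
    cos_t = random.uniform(-1.0, 1.0)   # isotropic: cos(theta) uniform on [-1, 1]
    if cos_t > math.cos(theta_max):     # direction falls inside the detector cone
        hits += 1
p_mc = hits / n
```

The analytical evaluation in the paper replaces this sampling by integrating the (transformed) scattering pdf over the limiting angles, which is what makes it orders of magnitude faster.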
NASA Astrophysics Data System (ADS)
Kandidov, V. P.; Militsin, V. O.; Bykov, A. V.; Priezzhev, A. V.
2006-11-01
Two ways of simulating statistically the propagation of laser radiation in dispersive media by the Monte-Carlo method are compared. The first approach can be called corpuscular because it is based on the calculation of random photon trajectories, while the second one can be referred to as the wave approach because it is based on the calculation of characteristics of random wave fields. It is shown that, although these approaches are based on different physical concepts of radiation scattering by particles, they yield almost equivalent results for the intensity of a restricted beam in a dispersive medium. However, there exist some differences. The corpuscular Monte-Carlo method does not reproduce the diffraction divergence of the beam, which can be taken into account by introducing the diffraction factor. The wave method does not consider backscattering, which corresponds to the quasi-optical approximation.
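The corpuscular approach amounts to tracking random photon trajectories: exponential free paths, a survival test against the single-scattering albedo, and re-direction at each scattering event. A minimal 1-D slab sketch under strong simplifying assumptions (isotropic scattering, no polarisation or diffraction; all names are illustrative):

```python
import math
import random

def transmit_fraction(mu_s, mu_a, thickness, n=50_000, seed=2):
    """Corpuscular Monte-Carlo sketch: photons enter a slab at z = 0 heading
    in +z; free paths are exponential in the total coefficient, scattering
    is isotropic, absorption terminates the trajectory."""
    random.seed(seed)
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t            # survival probability per interaction
    transmitted = 0
    for _ in range(n):
        z, cos_t = 0.0, 1.0
        while True:
            step = -math.log(random.random()) / mu_t   # exponential free path
            z += cos_t * step
            if z >= thickness:
                transmitted += 1                       # escaped forward
                break
            if z < 0.0:                                # back-scattered out
                break
            if random.random() > albedo:               # absorbed
                break
            cos_t = random.uniform(-1.0, 1.0)          # isotropic re-direction
    return transmitted / n
```

For a pure absorber (mu_s = 0) the transmitted fraction reduces to the Beer-Lambert law exp(-mu_a * thickness), which gives a quick sanity check; as the abstract notes, no such random-walk scheme reproduces the diffraction divergence of the beam.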
NASA Astrophysics Data System (ADS)
Okada, Eiji; Schweiger, Martin; Arridge, Simon R.; Firbank, Michael; Delpy, David T.
1996-07-01
To validate models of light propagation in biological tissue, experiments to measure the mean time of flight have been carried out on several solid cylindrical layered phantoms. The optical properties of the inner cylinders of the phantoms were close to those of adult brain white matter, whereas a range of scattering or absorption coefficients was chosen for the outer layer. Experimental results for the mean optical path length have been compared with the predictions of both an exact Monte Carlo (MC) model and a diffusion equation, with two differing boundary conditions implemented in a finite-element method (FEM). The MC and experimental results are in good agreement despite poor statistics for large fiber spacings, whereas good agreement with the FEM prediction requires a careful choice of proper boundary conditions. Keywords: measurement, Monte Carlo method, finite-element method.
NASA Astrophysics Data System (ADS)
Chen, X.; Rubin, Y.; Baldocchi, D. D.
2005-12-01
Understanding the interactions between soil, plant, and the atmosphere under water-stressed conditions is important for ecosystems where water availability is limited. In such ecosystems, the amount of water transferred from the soil to the atmosphere is controlled not only by weather conditions and vegetation type but also by soil water availability. Although researchers have proposed different approaches to model the impact of soil moisture on plant activities, the parameters involved are difficult to measure. However, using measurements of observed latent heat and carbon fluxes, as well as soil moisture data, Bayesian inversion methods can be employed to estimate the various model parameters. In our study, the actual evapotranspiration (ET) of an ecosystem is approximated by the Priestley-Taylor relationship, with the Priestley-Taylor coefficient modeled as a function of soil moisture content. Soil moisture limitation on root uptake is characterized in a manner similar to the Feddes model. The Bayesian inference is carried out within the framework of graphical models. Because exact inference is difficult to obtain, the Markov chain Monte Carlo (MCMC) method is implemented using a free software package, BUGS (Bayesian inference Using Gibbs Sampling). The proposed methodology is applied to a Mediterranean Oak-Savanna FLUXNET site in California, where continuous measurements of actual ET are obtained with the eddy-covariance technique and soil moisture contents are monitored by several time domain reflectometry probes located within the footprint of the flux tower. After the Bayesian inversion, the posterior distributions of all the parameters show a clear information gain over the prior distributions. The samples generated from data for year 2003 are used to predict the actual ET in year 2004, and the prediction uncertainties are assessed in terms of confidence intervals.
Our tests also reveal the usefulness of various types of soil moisture data in parameter estimation, which could be used to guide analyses of available data and planning of field data collection activities.
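As a toy illustration of the Bayesian inversion step, the sketch below replaces the BUGS/Gibbs machinery with a random-walk Metropolis sampler, and a made-up linear model y = a*x + noise stands in for the actual ET parameterization; every name and number here is hypothetical.

```python
import math
import random

def metropolis(loglike, x0, n_samples=5000, step=0.1, burn=500, seed=3):
    """Minimal random-walk Metropolis sampler (a stand-in for the
    Gibbs sampling used by BUGS in the paper)."""
    random.seed(seed)
    x, lp = x0, loglike(x0)
    out = []
    for i in range(n_samples + burn):
        xp = x + random.gauss(0.0, step)
        lpp = loglike(xp)
        if math.log(random.random()) < lpp - lp:   # Metropolis accept/reject
            x, lp = xp, lpp
        if i >= burn:
            out.append(x)
    return out

# toy inverse problem: infer coefficient a from noisy y = a*x observations
random.seed(0)
true_a, sigma = 1.3, 0.2
xs = [0.1 * i for i in range(1, 21)]
ys = [true_a * x + random.gauss(0.0, sigma) for x in xs]

def loglike(a):
    # Gaussian likelihood; a flat prior is implied
    return -sum((y - a * x) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)

samples = metropolis(loglike, x0=0.0)
post_mean = sum(samples) / len(samples)
```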
NASA Astrophysics Data System (ADS)
Dioszegi, I.; Rusek, A.; Dane, B. R.; Chiang, I. H.; Meek, A. G.; Dilmanian, F. A.
2011-06-01
Recent upgrades of the MCNPX Monte Carlo code include transport of heavy ions. We employed the new code to simulate the energy and dose distributions produced by carbon beams in a rabbit's head in and around a brain tumor. The work was within our experimental technique of interlaced carbon microbeams, which uses two 90° arrays of parallel, thin planes of carbon beams (microbeams) interlacing to produce a solid beam at the target. A similar version of the method was earlier developed with synchrotron-generated x-ray microbeams. We first simulated the Bragg peak in high-density polyethylene and other materials, where we could compare the calculated carbon energy deposition to the measured data produced at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL). The results showed that the new MCNPX code gives a reasonable account of the carbon beam's dose up to ~200 MeV/nucleon beam energy. At higher energies, which were not relevant to our project, the model failed to reproduce the growing nuclear-breakup tail beyond the Bragg peak. In our model calculations we determined the dose distribution along the beam path, including the angular straggling of the microbeams, and used the data to determine the optimal beam spacing in the array for producing adequate beam interlacing at the target. We also determined, for the purpose of Bragg-peak spreading at the target, the relative beam intensities of the consecutive exposures with stepwise lower beam energies, and simulated the resulting dose distribution in the spread-out Bragg peak. The details of the simulation methods used and the results obtained are presented.
Mallory, Joel D; Brown, Sandra E; Mandelshtam, Vladimir A
2015-06-18
The diffusion Monte Carlo (DMC) method is applied to the water monomer, dimer, and hexamer using q-TIP4P/F, one of the simplest empirical water models with flexible monomers. The bias in the time step (Δτ) and population size (Nw) is investigated. For the binding energies, the bias in Δτ cancels nearly completely, whereas a noticeable bias in Nw remains. However, for the isotope shift (e.g., in the dimer binding energies between (H2O)2 and (D2O)2), the systematic errors in Nw do cancel. Consequently, very accurate results for the latter (within ~0.01 kcal/mol) are obtained with moderate numerical effort (Nw ~ 10³). For the water hexamer and its (D2O)6 isotopomer, the DMC results as a function of Nw are examined for the cage and prism isomers. For a given isomer, the issue of the walker population leaking out of the corresponding basin of attraction is addressed by using appropriate geometric constraints. The population size bias for the hexamer is more severe, and to maintain accuracy similar to that of the dimer, Nw must be increased by ~2 orders of magnitude. Fortunately, when the energy difference between the cage and prism is taken, the biases cancel, thereby reducing the systematic errors to within ~0.01 kcal/mol when using a population of Nw = 4.8 × 10⁵ walkers. Consequently, a very accurate result for the isotope shift is also obtained. Notably, both the quantum and isotope effects for the prism-cage energy difference are small. PMID:26001418
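A stripped-down DMC sketch shows the walker diffusion and branching machinery to which the time-step and population-size biases refer. The toy below targets the 1-D harmonic oscillator (exact ground-state energy 0.5 in these units) rather than q-TIP4P/F water; all parameters are illustrative.

```python
import math
import random

def dmc_ground_energy(n_target=1000, dt=0.01, n_steps=800, seed=4):
    """Bare-bones diffusion Monte Carlo for V(x) = x^2/2 (unit mass, hbar = 1).
    Walkers diffuse, then branch with weight exp(-(V - E_ref) dt); E_ref is
    nudged to hold the population near its target size."""
    random.seed(seed)
    walkers = [random.gauss(0.0, 1.0) for _ in range(n_target)]
    sqrt_dt = math.sqrt(dt)
    e_ref = 0.5 * sum(x * x for x in walkers) / len(walkers)
    estimates = []
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += random.gauss(0.0, sqrt_dt)              # free diffusion
            w = math.exp(-(0.5 * x * x - e_ref) * dt)    # branching weight
            for _ in range(int(w + random.random())):    # stochastic birth/death
                new.append(x)
        walkers = new
        mean_v = 0.5 * sum(x * x for x in walkers) / len(walkers)
        # potential-energy estimator plus gentle population-control feedback
        e_ref = mean_v + (1.0 - len(walkers) / n_target)
        if step >= n_steps // 2:                         # discard equilibration
            estimates.append(mean_v)
    return sum(estimates) / len(estimates)
```

Shrinking dt and growing n_target reduces the two systematic biases discussed in the abstract, at the cost of proportionally more work.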
NASA Astrophysics Data System (ADS)
Churmakov, D. Yu; Kuz'min, V. L.; Meglinskii, I. V.
2006-11-01
The vector Monte-Carlo method is developed and applied to polarisation optical coherence tomography. The basic principles of simulation of the propagation of polarised electromagnetic radiation with a small coherence length are considered under conditions of multiple scattering. The results of numerical simulations for Rayleigh scattering agree well with the Milne solution generalised to the case of an electromagnetic field and with theoretical calculations in the diffusion approximation.
Denis B. Tikhonov; Boris S. Zhorov
1998-01-01
A model of the nicotinic acetylcholine receptor ion channel was elaborated based on the data from electron microscopy, affinity labeling, cysteine scanning, mutagenesis studies, and channel blockade. A restrained Monte Carlo minimization method was used for the calculations. Five identical M2 segments (the sequence EKMTLSISVL10LALTVFLLVI20V) were arranged in five-helix bundles with various geometrical profiles of the pore. For each bundle,
Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan
2010-09-01
Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1).
Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. PMID:20822794
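The core Monte Carlo step, sampling from distributions of assay error to turn one observed concentration into a distribution of true concentrations, can be sketched as follows; the Beta-distributed true-positive rate and Gaussian precision error below are invented stand-ins for the distributions the authors estimate from fecal samples of known origin.

```python
import random
import statistics

def true_concentration_samples(c_obs, n=20_000, seed=5):
    """Toy Monte-Carlo inversion in the spirit of the paper: the observed
    marker concentration is modelled as c_obs = p_tp * c_true + err, where
    the true-positive rate p_tp and the measurement error err are known
    only as distributions (both hypothetical here)."""
    random.seed(seed)
    out = []
    while len(out) < n:
        p_tp = random.betavariate(18, 2)          # true-positive rate, mean 0.9
        err = random.gauss(0.0, 0.05 * c_obs)     # qPCR precision error
        c_true = (c_obs - err) / p_tp             # invert the observation model
        if c_true > 0:                            # concentrations are positive
            out.append(c_true)
    return out

samples = true_concentration_samples(100.0)
point = statistics.mean(samples)
ordered = sorted(samples)
lo, hi = ordered[500], ordered[19_500]            # ~95% credible interval
```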
NASA Astrophysics Data System (ADS)
Hotta, Kenji; Kohno, Ryosuke; Takada, Yoshihisa; Hara, Yousuke; Tansho, Ryohei; Himukai, Takeshi; Kameoka, Satoru; Matsuura, Taeko; Nishio, Teiji; Ogino, Takashi
2010-06-01
Treatment planning for proton tumor therapy requires a fast and accurate dose-calculation method. We have implemented a simplified Monte Carlo (SMC) method in the treatment planning system of the National Cancer Center Hospital East for the double-scattering beam delivery scheme. The SMC method takes into account the scattering effect in materials more accurately than the pencil beam algorithm by tracking individual proton paths. We confirmed that the SMC method reproduced measured dose distributions in a heterogeneous slab phantom better than the pencil beam method. When applied to a complex anthropomorphic phantom, the SMC method reproduced the measured dose distribution well, satisfying an accuracy tolerance of 3 mm and 3% in the gamma index analysis. The SMC method required approximately 30 min to complete the calculation over a target volume of 500 cc, much less than the time required for the full Monte Carlo calculation. The SMC method is a candidate for a practical calculation technique with sufficient accuracy for clinical application.
Hendricks, J. S. (John S.)
2003-01-01
MCNPX is a Fortran 90 Monte Carlo radiation transport computer code that transports all particles at all energies. It is a superset of MCNP4C3 and has many capabilities beyond it. These capabilities are summarized along with their quality guarantee and code availability. Then the user interface changes from MCNP are described. Finally, the new capabilities of the latest version, MCNPX 2.5.c, are documented. Future plans and references are also provided.
Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)
2014-06-01
Purpose: To study the dosimetric differences that result from using the pencil beam algorithm instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between the RT and MC algorithms for brain tumors treated with CyberKnife located adjacent to the skull for 18 patients (27 tumors in total). The median tumor size was 0.53 cc (range 0.018 cc to 26.2 cc). The mean absolute distance from the tumor to the skull was 2.11 mm (range -17.0 mm to 9.2 mm). The dosimetric variables examined include the mean, maximum, and minimum doses to the target, the target coverage (TC), and the conformality index. The MC calculation used the same MUs as the RT dose calculation without further normalization, with 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized in order to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, the worst coverage occurred with the smallest targets (0.018 cc). When the TC was analyzed with regard to the distance to the skull, there was no correlation between proximity to the skull and the TC difference between the RT and MC plans. Conclusions: For smaller targets (< 4.0 cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation in order to ensure target coverage.
A Guide to Monte Carlo Simulations in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.; Binder, Kurt
2014-11-01
1. Introduction; 2. Some necessary background; 3. Simple sampling Monte Carlo methods; 4. Importance sampling Monte Carlo methods; 5. More on importance sampling Monte Carlo methods for lattice systems; 6. Off-lattice models; 7. Reweighting methods; 8. Quantum Monte Carlo methods; 9. Monte Carlo renormalization group methods; 10. Non-equilibrium and irreversible processes; 11. Lattice gauge models: a brief introduction; 12. A brief review of other methods of computer simulation; 13. Monte Carlo simulations at the periphery of physics and beyond; 14. Monte Carlo studies of biological molecules; 15. Outlook; Appendix: listing of programs mentioned in the text; Index.
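The step from chapter 3 to chapter 4 of the guide, simple sampling versus importance sampling, fits in a few lines of Python. The integrand exp(x) on [0, 1] and the proposal density p(x) = (1 + x)/1.5 are arbitrary illustrative choices.

```python
import math
import random

def simple_sampling(f, n, seed=6):
    """Chapter-3 style: average f over uniform draws on [0, 1]."""
    random.seed(seed)
    return sum(f(random.random()) for _ in range(n)) / n

def importance_sampling(f, n, seed=6):
    """Chapter-4 style: draw x from p(x) = (1 + x)/1.5 on [0, 1], which
    roughly follows the shape of exp(x), and weight each draw by f(x)/p(x)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        u = random.random()
        x = math.sqrt(1.0 + 3.0 * u) - 1.0      # inverse CDF of p(x)
        total += f(x) / ((1.0 + x) / 1.5)
    return total / n
```

Because p roughly follows the shape of the integrand, the weighted estimator has lower variance than uniform sampling at the same n.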
John M Hickey; Roel F Veerkamp; Mario PL Calus; Han A Mulder; Robin Thompson
2009-01-01
Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if
Wada, Takao; Ueda, Noriaki
2013-01-01
The process of low pressure organic vapor phase deposition (LP-OVPD) controls the growth of amorphous organic thin films, where the source gases (Alq3 molecules, etc.) are introduced into a hot wall reactor via an injection barrel using an inert carrier gas (N2). This makes it possible to control substrate properties such as dopant concentration, deposition rate, and thickness uniformity of the thin film. In this paper, we present LP-OVPD simulation results obtained with direct simulation Monte Carlo-Neutrals (Particle-PLUS neutral module), commercial software that adopts the direct simulation Monte Carlo method. By properly estimating the evaporation rate from experimental vaporization enthalpies, the calculated deposition rates on the substrate agree well with the experimental results, which depend on carrier gas flow rate and source cell temperature. PMID:23674843
NASA Astrophysics Data System (ADS)
Ródenas, José; Gallardo, Sergio; Ballester, Silvia; Primault, Virginie; Ortiz, Josefina
2007-10-01
A gamma spectrometer including an HP Ge detector is commonly used for environmental radioactivity measurements. The efficiency of the detector should be calibrated for each geometry considered. Simulation of the calibration procedure with a validated computer program is an important auxiliary tool for environmental radioactivity laboratories. The MCNP code, based on the Monte Carlo method, has been applied to simulate the detection process in order to obtain spectrum peaks and determine the efficiency curve for each modelled geometry. The source used for measurements was a calibration mixed radionuclide gamma reference solution covering a wide energy range (50-2000 keV). Two measurement geometries (Marinelli beaker and Petri boxes) and different materials containing the source (water, charcoal, sand) have been considered. Results obtained from the Monte Carlo model have been compared with experimental measurements in the laboratory in order to validate the model.
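The geometric core of such an efficiency calibration can be sketched without any photon physics: emit isotropic rays from an on-axis point source and count intersections with a disc detector. This is only the solid-angle factor, not the full MCNP simulation with attenuation and detector response; the geometry is hypothetical.

```python
import math
import random

def geometric_efficiency_mc(h, r, n=200_000, seed=7):
    """Monte-Carlo sketch of the geometric part of a detector efficiency
    calibration: an isotropic point source sits on the axis of a disc
    detector of radius r at distance h; count emissions whose direction
    intersects the disc."""
    random.seed(seed)
    hits = 0
    for _ in range(n):
        cos_t = random.uniform(-1.0, 1.0)       # isotropic emission direction
        if cos_t <= 0.0:
            continue                            # emitted away from the detector
        cross_r = h * math.sqrt(1.0 - cos_t ** 2) / cos_t  # radius at plane z = h
        if cross_r <= r:
            hits += 1
    return hits / n

def geometric_efficiency_exact(h, r):
    # closed-form solid angle of a disc seen from an on-axis point, over 4*pi
    return 0.5 * (1.0 - h / math.sqrt(h * h + r * r))
```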
Wu, Yunzhao; Tang, Zesheng
2014-01-01
In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. The existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from an Interference Imaging spectrometer (IIM) on an orbiter were affected not only by the composition of minerals but also by the environmental factors. These factors cannot be well addressed by a single model alone. Our method implemented Monte Carlo ray tracing for simulating the large-scale effects such as the reflection of topography of the lunar soil and Hapke's model for calculating the reflection intensity of the internal scattering effects of particles of the lunar soil. Therefore, both the large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data for removing the influence of lunar topography to the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface. PMID:24526892
Monte Carlo data association for multiple target tracking Rickard Karlsson
Gustafsson, Fredrik
Monte Carlo data association for multiple target tracking Rickard Karlsson Dept. of Electrical, these estimation methods may lead to non-optimal solutions. The sequential Monte Carlo methods, or particle filters chose the number of particles. 2 Sequential Monte Carlo methods Monte Carlo techniques have been
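A bootstrap particle filter, the simplest sequential Monte Carlo scheme alluded to above, can be written in a few lines for a scalar linear-Gaussian model; the model and its parameters are invented for illustration.

```python
import math
import random

def particle_filter(observations, n_particles=500, q=0.5, r=1.0, seed=8):
    """Minimal bootstrap (sequential Monte Carlo) filter for the scalar model
    x_k = x_{k-1} + N(0, q^2),  y_k = x_k + N(0, r^2)."""
    random.seed(seed)
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # propagate each particle through the motion model
        particles = [p + random.gauss(0.0, q) for p in particles]
        # weight by the observation likelihood
        weights = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in particles]
        # multinomial resampling (weights need not be normalised)
        particles = random.choices(particles, weights=weights, k=n_particles)
        means.append(sum(particles) / n_particles)
    return means

# track a synthetic random-walk trajectory
random.seed(0)
truth, obs = [], []
x = 0.0
for _ in range(60):
    x += random.gauss(0.0, 0.5)
    truth.append(x)
    obs.append(x + random.gauss(0.0, 1.0))
estimates = particle_filter(obs)
rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truth)) / len(truth))
```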
NASA Astrophysics Data System (ADS)
Määttänen, Anni; Douspis, Marian
2015-04-01
In recent years, several datasets on deposition mode ice nucleation in Martian conditions have shown that the effectiveness of mineral dust as a condensation nucleus decreases with temperature (Iraci et al., 2010; Phebus et al., 2011; Trainer et al., 2009). Previously, nucleation modelling in Martian conditions used only constant values of this so-called contact parameter, provided by the few studies previously published on the topic. The new studies paved the way for a possibly more realistic way of predicting ice crystal formation in the Martian environment. However, the caveat of these studies (Iraci et al., 2010; Phebus et al., 2011) was the limited temperature range, which inhibits using the provided (linear) equations for the contact parameter temperature dependence in all conditions of cloud formation on Mars. One wide temperature range deposition mode nucleation dataset exists (Trainer et al., 2009), but the substrate used was silicon, which cannot realistically imitate the most abundant ice nucleus on Mars, mineral dust. Nevertheless, this dataset revealed, thanks to measurements spanning from 150 to 240 K, that the behaviour of the contact parameter as a function of temperature was exponential rather than linear as suggested by previous work. We have combined the previous findings to provide realistic and practical formulae for application in nucleation and atmospheric models. We have analysed the three cited datasets using a Markov chain Monte Carlo (MCMC) method. The method allows us to test and evaluate different functional forms for the temperature dependence of the contact parameter. We perform a data inversion by finding the best fit to the measured data simultaneously at all points for different functional forms of the temperature dependence of the contact angle m(T). The method uses a full nucleation model (Määttänen et al., 2005; Vehkamäki et al., 2007) to calculate the observables at each data point.
We suggest one new and test several m(T) dependencies. Two of these may be used to avoid unphysical behaviour (m > 1) when m(T) is implemented in heterogeneous nucleation and cloud models. However, more measurements are required to fully constrain the m(T) dependencies. We show the importance of large temperature range datasets for constraining the asymptotic behaviour of m(T), and we call for more experiments in a large temperature range with well-defined particle sizes or size distributions, for different IN types and nucleating vapours. This study (Määttänen and Douspis, 2014) provides a new framework for analysing heterogeneous nucleation datasets. The results provide, within limits of available datasets, well-behaving m(T) formulations for nucleation and cloud modelling. Iraci, L. T., et al. (2010). Icarus 210, 985-991. Määttänen, A., et al. (2005). J. Geophys. Res. 110, E02002. Määttänen, A. and Douspis, M. (2014). GeoResJ 3-4 , 46-55. Phebus, B. D., et al. (2011). J. Geophys. Res. 116, 4009. Trainer, M. G., et al. (2009). J. Phys. Chem C 113 , 2036-2040. Vehkamäki, H., et al. (2007). Atmos. Chem. Phys. 7, 309-313.
Lin Fu; Jinyi Qi
2008-01-01
Positron range is one of the fundamental factors that limit the spatial resolution of positron emission tomography (PET). While empirical expressions are available to describe positron range in homogeneous media, analytical calculation of positron range in biological objects where complex bone/tissue/air boundaries exist is extremely difficult. One solution is to use Monte Carlo (MC) simulation. However, on-the-fly MC simulation of
Monte Carlo Neutrino Oscillations
James P. Kneller; Gail C. McLaughlin
2005-09-29
We demonstrate that the effects of matter upon neutrino propagation may be recast as the scattering of the initial neutrino wavefunction. Exchanging the differential Schrödinger equation for an integral equation for the scattering matrix S permits a Monte Carlo method for the computation of S that removes many of the numerical difficulties associated with direct integration techniques.
Park, H. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Densmore, J. D. [Bettis Atomic Power Laboratory, West Mifflin, PA 15122 (United States); Wollaber, A. B.; Knoll, D. A.; Rauenzahn, R. M. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2013-07-01
We have developed a moment-based scale-bridging algorithm for thermal radiative transfer problems. The algorithm takes the form of well-known nonlinear-diffusion acceleration which utilizes a low-order (LO) continuum problem to accelerate the solution of a high-order (HO) kinetic problem. The coupled nonlinear equations that form the LO problem are efficiently solved using a preconditioned Jacobian-free Newton-Krylov method. This work demonstrates the applicability of the scale-bridging algorithm with a Monte Carlo HO solver and reports the computational efficiency of the algorithm in comparison to the well-known Fleck-Cummings algorithm. (authors)
Monte Carlo Integration Lecture 2 The Problem
Liang, Faming
The Problem: let a probability measure be given over the Borel σ-field of X ⊆ S, and h(x) = 0 otherwise. When the problem appears to be intractable (Press et al., 1992, and references therein). For high-dimensional problems, Monte Carlo methods have
Lecture 15 Monte Carlo integration Weinan E1,2
Li, Tiejun
@pku.edu.cn, No. 1 Science Building, 1575. Outline: Monte Carlo methods: basics; variance reduction methods; an introduction to Markov chain
Extra Chance Hybrid Monte Carlo. Cédric M. Campos
Sanz-Serna, J. M.
Cédric M. Campos, J. M. Sanz-Serna, Dept. Matemática Aplicada. (Extra Chance Generalized Hybrid Monte Carlo) to avoid rejections in the Hybrid Monte Carlo (HMC) method of the quality of the samples generated. Keywords: sampling methods, hybrid Monte Carlo, detailed balance
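The plain HMC move that the extra-chance variant builds on is easy to sketch for a standard normal target (leapfrog integration plus a Metropolis accept/reject; the extra-chance modification itself is not implemented here).

```python
import math
import random

def hmc_sample(n_samples=2000, eps=0.2, n_leap=10, seed=9):
    """Textbook Hybrid Monte Carlo for the target density exp(-x^2/2):
    simulate Hamiltonian dynamics with a leapfrog integrator, then accept
    or reject based on the change in total energy (detailed balance)."""
    random.seed(seed)
    grad = lambda x: x                 # gradient of the potential U(x) = x^2/2
    x = 0.0
    out = []
    for _ in range(n_samples):
        p = random.gauss(0.0, 1.0)     # fresh momentum each trajectory
        x_new, p_new = x, p
        # leapfrog trajectory: half step in p, full steps, final half step
        p_new -= 0.5 * eps * grad(x_new)
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new -= eps * grad(x_new)
        x_new += eps * p_new
        p_new -= 0.5 * eps * grad(x_new)
        h_old = 0.5 * (x * x + p * p)
        h_new = 0.5 * (x_new * x_new + p_new * p_new)
        if math.log(random.random()) < h_old - h_new:   # Metropolis test
            x = x_new                                   # accept; else reject
        out.append(x)
    return out

samples = hmc_sample()
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
```

In the extra-chance variant of Campos and Sanz-Serna, a rejected trajectory is extended and re-tested instead of being discarded, which is how rejections are avoided.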
NASA Astrophysics Data System (ADS)
Misawa, Taichi; Okanaga, Takuya; Mohamad, Aizuddin; Sakai, Tadashi; Awano, Yuji
2015-05-01
We developed a novel Monte Carlo simulation model to investigate the line-width dependence of the transport properties of multi-layered graphene nanoribbon (GNR) interconnects with edge roughness. We found that the line-width dependence of the carrier mobility weakens significantly as the magnitude of the edge roughness becomes smaller, in good agreement with experiments. We also discuss the influence of the inelasticity of edge-roughness scattering, inter-layer tunneling, and line-width-dependent band structures on the line width of the GNR interconnects.
Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas
2011-01-01
Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in bone and lung regions. The average difference of depth dose profiles between MC and AXB, AAA, or CCC was within 1.1%, 4.4%, and 2.2%, respectively, for all fields and energies. More specifically, the differences in the bone region were up to 1.1%, 6.4%, and 1.6%; in the lung region they were up to 0.9%, 11.6%, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to give better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes of 6 MV and in the lung region for most field sizes of both energies.
The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm² fields (over 26% passed) and in the bone region for 5 × 5 and 10 × 10 cm² fields (over 64% passed). With the criterion relaxed to 5%/2 mm, the pass rates were over 90% for both AAA and CCC relative to AXB for all energies and fields, with the exception of the AAA 18 MV 2.5 × 2.5 cm² field, which still did not pass. Conclusions: In heterogeneous media, the AXB dose prediction ability appears to be comparable to MC and superior to current clinical convolution methods. The dose differences between AXB and AAA or CCC are mainly in the bone, lung, and interface regions. The spatial distributions of these differences depend on the field sizes and energies. PMID:21776802
Yoshida, Kenichiro; Nishidate, Izumi
2014-01-01
To rapidly derive a result for diffuse reflectance from a multilayered model that is equivalent to that of a Monte-Carlo simulation (MCS), we propose a combination of a layered white MCS and the adding-doubling method. For slabs with various scattering coefficients assuming a certain anisotropy factor and without absorption, we calculate the transition matrices for light flow with respect to the incident and exit angles. From this series of precalculated transition matrices, we can calculate the transition matrices for the multilayered model with the specific anisotropy factor. The relative errors of the results of this method compared to a conventional MCS were less than 1%. We successfully used this method to estimate the chromophore concentration from the reflectance spectrum of a numerical model of skin and in vivo human skin tissue. PMID:25426319
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo, E-mail: rfantoni@ts.infn.it [Dipartimento di Scienze Molecolari e Nanosistemi, Università Ca’ Foscari Venezia, Calle Larga S. Marta DD2137, I-30123 Venezia (Italy); Moroni, Saverio, E-mail: moroni@democritos.it [DEMOCRITOS National Simulation Center, Istituto Officina dei Materiali del CNR and SISSA Scuola Internazionale Superiore di Studi Avanzati, Via Bonomea 265, I-34136 Trieste (Italy)
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
NASA Astrophysics Data System (ADS)
Ródenas, J.; Gallardo, S.
2007-09-01
The dose reduction program at a Nuclear Power Plant (NPP) includes the utilization of a gamma spectrometry device for on-site measurements. The equipment uses a heavily shielded high-purity germanium (HPGe) detector coupled with high count-rate pulse-height counting electronics. On-site spectra acquisition is an essential tool for monitoring the radionuclide concentration deposited onto the inner side of the process pipes of the reactor. The determination of this activity is very important for planning the ALARA actions that must be taken to decrease the collective doses received by occupationally exposed workers. Nevertheless, analysis using direct measurements presents the problem of detector calibration, because measurements have to be made in places with difficult access and high dose rates. It is convenient to apply a computational solution, such as the Monte Carlo method, for the detector efficiency calibration. A model has been developed using the MCNP code, based on the Monte Carlo method. Results have been compared with experimental measurements in order to validate the model. Validation for point and disk sources as well as for volumetric sources has been published in previous works. In this paper, the model developed for the calibration of the detector to perform contamination measurements in a pipeline is presented. Simulation results are compared with experimental measurements using a portion of pipeline contaminated at its inner surface with radionuclides covering a wide energy range. The validation of the model will permit extending it to other situations throughout the plant.
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
T. Evans et al.
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
Chen, X; Xing, L; Luxton, G; Bush, K [Stanford University, Palo Alto, CA (United States); Azcona, J [Clinica Universidad de Navarra, Pamplona (Spain)
2014-06-01
Purpose: Patient-specific QA for VMAT is incapable of providing full 3D dosimetric information and is labor intensive in the case of severe heterogeneities or small-aperture beams. A cloud-based Monte Carlo dose reconstruction method described here can perform the evaluation over the entire 3D space and rapidly reveal the source of discrepancies between measured and planned dose. Methods: This QA technique consists of two integral parts: measurement using a phantom containing an array of dosimeters, and a cloud-based voxel Monte Carlo algorithm (cVMC). After a VMAT plan was approved by a physician, a dose verification plan was created and delivered to the phantom using our Varian Trilogy or TrueBeam system. Actual delivery parameters (i.e., dose fraction, gantry angle, and MLC positions at control points) were extracted from Dynalog or trajectory files. Based on the delivery parameters, the 3D dose distribution in the phantom containing the detector array was recomputed using Eclipse dose calculation algorithms (AAA and AXB) and cVMC. Comparison and gamma analysis were then conducted to evaluate the agreement between measured, recomputed, and planned dose distributions. To test the robustness of this method, we examined several representative VMAT treatments. Results: (1) The accuracy of cVMC dose calculation was validated via comparative studies. For cases that passed patient-specific QA using commercial dosimetry systems such as Delta-4, MAPCheck, and PTW Seven29 arrays, agreement between cVMC-recomputed, Eclipse-planned, and measured doses was obtained with >90% of the points satisfying the 3%-and-3mm gamma index criteria. (2) The cVMC method incorporating Dynalog files was effective in revealing the root causes of the dosimetric discrepancies between Eclipse-planned and measured doses and provided a basis for solutions.
Conclusion: The proposed method offers a highly robust and streamlined patient specific QA tool and provides a feasible solution for the rapidly increasing use of VMAT treatments in the clinic.
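The gamma analysis mentioned above compares each reference point against nearby evaluated points under combined dose-difference and distance-to-agreement tolerances. A minimal 1-D sketch of the 3%-and-3mm global criterion follows; clinical tools work in 3-D with interpolation, so this is only an illustration.

```python
# Minimal 1-D global gamma-index sketch of the 3%-and-3mm criterion.
import math

def gamma_index(ref, meas, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """Return the gamma value at each reference point (global normalization)."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dd = (dm - dr) / (dose_tol * d_max)   # dose-difference term
            dx = (j - i) * spacing_mm / dist_mm   # distance-to-agreement term
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

if __name__ == "__main__":
    ref = [0.0, 0.5, 1.0, 0.5, 0.0]
    meas = [0.0, 0.5, 1.0, 0.5, 0.0]
    g = gamma_index(ref, meas, spacing_mm=1.0)
    pass_rate = sum(v <= 1.0 for v in g) / len(g)
    print(pass_rate)  # identical profiles pass everywhere -> 1.0
```

A point passes when its gamma value is at most 1; the pass rates quoted in the abstract are the fraction of points meeting that condition.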
NASA Astrophysics Data System (ADS)
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin
2014-08-01
A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of the distribution is required. This numerical table can be generated a priori from the distribution function. The method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. These test cases show that reasonable approximations can be made, especially for the highly branched molecules, to drastically reduce the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables are still well accounted for, as evident from a nearly perfect acceptance rate. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane and more than 99% for propane. Since only one trial needs to be generated for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. Profiling of our Monte Carlo simulation code shows that trial generation, which used to be the most time-consuming process, is no longer the time-dominating component of the simulation.
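The tabulated-distribution idea can be illustrated with an inverse-CDF draw. The bending potential, its parameters, and the grid size below are hypothetical stand-ins, not the paper's actual tabulated distribution.

```python
# Sketch of a tabulated inverse-CDF draw for a bending angle.
# The harmonic bending density used here is a hypothetical example.
import bisect
import math
import random

def build_table(k_bend, theta0, beta, n=1000):
    """Tabulate the CDF of p(theta) ~ sin(theta) *
    exp(-beta * 0.5 * k_bend * (theta - theta0)**2) on a theta grid."""
    thetas = [math.pi * (i + 0.5) / n for i in range(n)]
    weights = [math.sin(t) * math.exp(-beta * 0.5 * k_bend * (t - theta0) ** 2)
               for t in thetas]
    total, cdf = 0.0, []
    for w in weights:
        total += w
        cdf.append(total)
    return thetas, [c / total for c in cdf]

def sample_angle(thetas, cdf, rng=random):
    """Invert the tabulated CDF: one trial per angle, no rejection loop."""
    return thetas[bisect.bisect_left(cdf, rng.random())]

if __name__ == "__main__":
    random.seed(1)
    # Hypothetical parameters: k_bend in K/rad^2, beta = 1/T with T = 300 K.
    thetas, cdf = build_table(k_bend=62500.0, theta0=1.99, beta=1.0 / 300.0)
    draws = [sample_angle(thetas, cdf) for _ in range(2000)]
    mean = sum(draws) / len(draws)
    print(abs(mean - 1.99) < 0.1)  # samples concentrate near theta0
```

Because every draw already follows (a close numerical approximation of) the target density, essentially every trial is accepted, which is the effect the abstract reports.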
Astrakharchik, G. E.; Boronat, J.; Casulleras, J. [Departament de Fisica i Enginyeria Nuclear, Campus Nord B4-B5, Universitat Politecnica de Catalunya, E-08034 Barcelona (Spain); Kurbakov, I. L.; Lozovik, Yu. E. [Institute of Spectroscopy, 142190 Troitsk, Moscow Region (Russian Federation)
2009-05-15
The equation of state of a weakly interacting two-dimensional Bose gas is studied at zero temperature by means of quantum Monte Carlo methods. Going down to densities as low as na² ∝ 10⁻¹⁰⁰ permits us to obtain agreement at the beyond-mean-field level between the predictions of perturbative methods and direct many-body numerical simulation, thus providing an answer to the fundamental question of the equation of state of a two-dimensional dilute Bose gas in the universal regime (i.e., entirely described by the gas parameter na²). We also show that measurement of the frequency of a breathing collective oscillation in a trap at very low densities can be used to test the universal equation of state of a two-dimensional Bose gas.
NASA Astrophysics Data System (ADS)
Bashkatov, A. N.; Genina, Elina A.; Kochubei, V. I.; Tuchin, Valerii V.
2006-12-01
Based on digital image analysis and the inverse Monte Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates.
NASA Astrophysics Data System (ADS)
Cortés-Giraldo, M. A.; Carabe, A.
2015-04-01
We compare unrestricted dose average linear energy transfer (LET) maps calculated with three different Monte Carlo scoring methods in voxelized geometries irradiated with proton therapy beams. Simulations were done with the Geant4 (Geometry ANd Tracking) toolkit. The first method corresponds to a step-by-step computation of LET which has been reported previously in the literature. We found that this scoring strategy is influenced by spurious high-LET components, whose relative contribution to the dose average LET calculations increases significantly as the voxel size becomes smaller. Dose average LET values calculated for primary protons in water with a voxel size of 0.2 mm were a factor of ~1.8 higher than those obtained with a size of 2.0 mm at the plateau region for a 160 MeV beam. Such high-LET components are a consequence of proton steps in which the condensed-history algorithm determines an energy transfer to an electron of the material close to the maximum value, while the step length remains limited due to voxel boundary crossing. Two alternative methods were derived to overcome this problem. The second scores LET along the entire path described by each proton within the voxel. The third followed the same approach as the first method, but the LET was evaluated at each step from stopping power tables according to the proton kinetic energy value. We carried out microdosimetry calculations with the aim of deriving reference dose average LET values from microdosimetric quantities. Significant differences between the methods were reported for both pristine and spread-out Bragg peaks (SOBPs). The first method reported values systematically higher than the other two at depths proximal to the SOBP, by about 15% for a 5.9 cm wide SOBP and about 30% for an 11.0 cm one. At the distal SOBP, the second method gave values about 15% lower than the others.
Overall, we found that the third method gave the most consistent performance since it returned stable dose average LET values against simulation parameter changes and gave the best agreement with dose average LET estimations from microdosimetry calculations.
Cortes-Giraldo, M A [Universidad de Sevilla, Seville (Spain); Carabe-Fernandez, A [Hospital of the University of Pennsylvania, Philadelphia, PA (United States)
2014-06-01
Purpose: To evaluate the differences in dose-averaged linear energy transfer (LETd) maps calculated in water by means of different strategies found in the literature for proton therapy Monte Carlo simulations, and to compare their values with dose-mean lineal energy microdosimetry calculations. Methods: The Geant4 toolkit (version 9.6.2) was used. Dose and LETd maps in water were scored for primary protons with cylindrical voxels defined around the beam axis. Three LETd calculation methods were implemented. First, the LETd values were computed by calculating the unrestricted linear energy transfer (LET) associated with each single step, weighted by the energy deposition (including delta rays) along the step. Second, the LETd was obtained for each voxel by computing the LET along all the steps simulated for each proton track within the voxel, weighted by the energy deposition of those steps. Third, the LETd was scored as the quotient of the second moment of the LET distribution, calculated per proton track, over the first moment. These calculations were made with various voxel thicknesses (0.2-2.0 mm) for a 160 MeV proton beamlet and spread-out Bragg peaks (SOBPs). The dose-mean lineal energy was calculated in a uniformly irradiated water sphere of 0.005 mm radius. Results: The value of the LETd changed systematically with the voxel thickness due to delta-ray emission and the enlargement of the LET distribution spread, especially at shallow depths. Differences of up to a factor of 1.8 were found at the depth of maximum dose, leading to similar differences at the central and distal depths of the SOBPs. The third LETd calculation method gave better agreement with microdosimetry calculations around the Bragg peak. Conclusion: Significant differences were found between LETd map Monte Carlo calculations due to both the calculation strategy and the voxel thickness used. This could have a significant impact on radiobiologically optimized proton therapy treatments.
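Two of the scoring strategies described above can be contrasted on a toy list of per-step (LET, energy deposit) pairs. This is an illustrative sketch, not the Geant4 implementation, and the numbers are hypothetical.

```python
# Illustrative LETd scoring sketch on hypothetical (LET, energy deposit) pairs.

def letd_step_weighted(steps):
    """Dose-averaged LET: each step's LET weighted by its energy deposit."""
    num = sum(let * edep for let, edep in steps)
    den = sum(edep for _, edep in steps)
    return num / den

def letd_second_moment(steps):
    """Moment-ratio variant: second moment of the LET distribution over
    the first moment, with the same energy-deposit weights."""
    num = sum(let * let * edep for let, edep in steps)
    den = sum(let * edep for let, edep in steps)
    return num / den

if __name__ == "__main__":
    steps = [(1.0, 2.0), (3.0, 1.0)]  # hypothetical (LET, energy deposit)
    print(letd_second_moment(steps) >= letd_step_weighted(steps))  # True
```

The moment-ratio estimate always exceeds the plain weighted mean when the LET values are spread out (Cauchy-Schwarz), which is why the choice of estimator matters as the distribution broadens.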
Vargas, M Jurado; Díaz, N Cornejo; Sánchez, D Pérez
2003-06-01
Monte Carlo simulation was applied to the efficiency transfer exercise described in the EUROMET428 project (Appl. Radiat. Isot. 55 (2001) 493), evaluating the peak efficiencies in the energy range 60-2000 keV for a typical coaxial p-type HPGe detector and several types of source configuration: point sources located at various distances from the detector and a cylindrical box containing three matrices. The efficiency values were derived in two ways: (a) by direct calculation taking into account the physical dimensions of the detector provided by the supplier, and (b) by relative computation (efficiency transfer), also taking into consideration the known efficiency values for a reference point source. As expected, some significant discrepancies between the calculated and experimental values were found when a direct computation was made using the data provided by the supplier. In contrast, the peak efficiencies derived by relative calculation by means of an efficiency transfer were in good agreement with the experimental values. The deviations found with this last procedure were generally below 5% for all the geometries considered, which is entirely satisfactory for the purposes of routine measurements. PMID:12798381
NASA Astrophysics Data System (ADS)
Brunetti, Antonio; Golosio, Bruno; Melis, Maria Grazia; Mura, Stefania
2015-02-01
X-ray fluorescence (XRF) is a well known nondestructive technique. It is also applied to multilayer characterization, due to its possibility of estimating both composition and thickness of the layers. Several kinds of cultural heritage samples can be considered as a complex multilayer, such as paintings or decorated objects or some types of metallic samples. Furthermore, they often have rough surfaces and this makes a precise determination of the structure and composition harder. The standard quantitative XRF approach does not take into account this aspect. In this paper, we propose a novel approach based on a combined use of X-ray measurements performed with a polychromatic beam and Monte Carlo simulations. All the information contained in an X-ray spectrum is used. This approach allows obtaining a very good estimation of the sample contents both in terms of chemical elements and material thickness, and in this sense, represents an improvement of the possibility of XRF measurements. Some examples will be examined and discussed.
A. Putze; L. Derome; D. Maurin; L. Perotto; R. Taillet
2009-01-21
Propagation of charged cosmic rays in the Galaxy depends on the transport parameters, whose number can be large depending on the propagation model under scrutiny. A standard approach for determining these parameters is a manual scan, leading to an inefficient and incomplete coverage of the parameter space. We implement a Markov Chain Monte Carlo (MCMC) method, which is well suited to multi-parameter determination. Its specificities (burn-in length, acceptance, and correlation length) are discussed in the phenomenologically well-understood Leaky-Box Model. From a technical point of view, a trial function based on binary-space partitioning is found to be extremely efficient, allowing a simultaneous determination of up to nine parameters, including transport and source parameters such as slope and abundances. Our best-fit model includes both a low-energy cut-off and reacceleration, whose values are consistent with those found in diffusion models. A Kolmogorov spectrum for the diffusion slope (delta = 1/3) is excluded. The marginalised probability-density functions for delta and alpha (the slope of the source spectra) are delta ~ 0.55-0.60 and alpha ~ 2.14-2.17, depending on the dataset used and the number of free parameters in the fit. All source-spectrum parameters (slope and abundances) are positively correlated among themselves and with the reacceleration strength, but are negatively correlated with the other propagation parameters. A forthcoming study will extend our analysis to more physical diffusion models.
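The core MCMC loop is independent of the propagation model. A minimal Metropolis sketch with a Gaussian random-walk proposal follows; the paper's trial function uses binary-space partitioning, and the one-parameter toy posterior below (a Gaussian centred near the quoted delta range) is hypothetical.

```python
# Minimal random-walk Metropolis sampler on a hypothetical 1-D posterior.
import math
import random

def metropolis(log_post, theta0, step, n_steps, rng=random):
    """Random-walk Metropolis; returns the chain and the acceptance rate."""
    chain, theta = [], theta0
    lp = log_post(theta)
    accepted = 0
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)        # Gaussian random-walk trial
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance rule
            theta, lp = prop, lp_prop
            accepted += 1
        chain.append(theta)
    return chain, accepted / n_steps

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical posterior: Gaussian centred on delta = 0.55.
    log_post = lambda d: -0.5 * ((d - 0.55) / 0.02) ** 2
    chain, acc = metropolis(log_post, 0.3, 0.02, 20000)
    burn = chain[5000:]  # discard burn-in before estimating the mean
    print(abs(sum(burn) / len(burn) - 0.55) < 0.01)
```

The burn-in length, acceptance rate, and chain correlation length mentioned in the abstract are exactly the diagnostics one tunes in this loop.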
NASA Astrophysics Data System (ADS)
Perez, J. A.; Olson, R. E.
1999-06-01
We have developed a classical trajectory Monte Carlo code for use in the study of collisions between highly charged ions and systems with multiple targets, such as surfaces. We have simulated a collision between the bare ions C6+, Kr36+, Ne10+, Ar18+, and Xe54+, and a configuration of approximately 400 individual atoms. The projectile has an initial energy of 0.25 keV/u with the velocity perpendicular to the surface. To simulate a simplified surface, the target atoms are held in a simple cubic lattice arrangement by the use of Morse potentials between target nuclei. Each target nucleus has one electron with a binding energy of 12 eV initially localized about it. Initial conditions of the electrons are restricted to represent the 2p electrons of LiF anions. The forces between all particles are calculated at each step in the simulation and the trajectory of every particle is followed. Results for the critical radius of capture and the principal quantum numbers are shown. Details of the capture of the first three electrons by Ar18+ as it approaches the surface are given.
Stanford University
Chapter 2: Monte Carlo Integration. This chapter gives an introduction to Monte Carlo integration useful in computer graphics. Good references on Monte Carlo methods include Kalos & Whitlock [1986]; for Monte Carlo applications to neutron transport problems, Lewis & Miller [1984] is a good source.
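The chapter's starting point, the basic Monte Carlo estimator together with its standard-error estimate, can be sketched as:

```python
# Basic Monte Carlo integration with a standard-error estimate.
import math
import random

def mc_integrate(f, a, b, n, rng=random):
    """Estimate the integral of f on [a, b] and its standard error."""
    samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    estimate = (b - a) * mean              # (b - a) * E[f]
    stderr = (b - a) * math.sqrt(var / n)  # error shrinks as 1/sqrt(n)
    return estimate, stderr

if __name__ == "__main__":
    random.seed(42)
    est, err = mc_integrate(math.sin, 0.0, math.pi, 100000)
    # The true integral of sin on [0, pi] is 2; the statistical error
    # for this n is on the order of 0.003.
    print(abs(est - 2.0) < 0.05)
```

The 1/sqrt(n) convergence visible in the `stderr` line is the property that makes the method attractive in high dimensions, where deterministic quadrature degrades quickly.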
NASA Astrophysics Data System (ADS)
Rafiee, Mohammad; Barrau, Axel; Bayen, Alexandre M.
2013-06-01
This article investigates the performance of Monte Carlo-based estimation methods for estimation of flow state in large-scale open channel networks. After constructing a state space model of the flow based on the Saint-Venant equations, we implement the optimal sampling importance resampling filter to perform state estimation in a case in which measurements are available at every time step. Considering a case in which measurements become available intermittently, a random-map implementation of the implicit particle filter is applied to estimate the state trajectory in the interval between the measurements. Finally, some heuristics are proposed, which are shown to improve the estimation results and lower the computational cost. In the first heuristic, considering the case in which measurements are available at every time step, we apply the implicit particle filter over time intervals of a desired size while incorporating all the available measurements over the corresponding time interval. As a second heuristic method, we introduce a maximum a posteriori (MAP) method, which does not require sampling. It will be seen, through implementation, that the MAP method provides more accurate results in the case of our application while having a smaller computational cost. All estimation methods are tested on a network of 19 tidally forced subchannels and 1 reservoir, Clifton Court Forebay, in the Sacramento-San Joaquin Delta in California, and numerical results are presented.
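The sampling importance resampling filter named above can be sketched on a scalar toy model. The paper's state is a discretized Saint-Venant flow field; the linear dynamics and noise levels below are hypothetical.

```python
# Minimal sampling-importance-resampling (bootstrap) particle filter on a
# hypothetical scalar model.
import math
import random

def sir_step(particles, y_obs, f, h, q_std, r_std, rng=random):
    """One predict-weight-resample cycle."""
    # Predict: propagate each particle through the dynamics plus noise.
    particles = [f(x) + rng.gauss(0.0, q_std) for x in particles]
    # Weight: Gaussian likelihood of the observation given each particle.
    w = [math.exp(-0.5 * ((y_obs - h(x)) / r_std) ** 2) for x in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # Resample: draw particles proportionally to their weights.
    return rng.choices(particles, weights=w, k=len(particles))

if __name__ == "__main__":
    random.seed(3)
    f = lambda x: 0.9 * x  # toy linear dynamics
    h = lambda x: x        # direct observation
    particles = [random.gauss(0.0, 5.0) for _ in range(2000)]
    truth = 10.0
    for _ in range(30):
        truth = f(truth)
        y = truth + random.gauss(0.0, 0.5)
        particles = sir_step(particles, y, f, h, q_std=0.2, r_std=0.5)
    est = sum(particles) / len(particles)
    print(abs(est - truth) < 1.0)  # the filter tracks the decaying state
```

Replacing `f` with a numerical Saint-Venant solver and `h` with the gauge-measurement operator gives the structure used in the article; the implicit particle filter refines the proposal step rather than this overall cycle.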
NASA Astrophysics Data System (ADS)
Kivel, Niko; Potthast, Heiko-Dirk; Günther-Leopold, Ines; Vanhaecke, Frank; Günther, Detlef
The interface between the atmospheric pressure plasma ion source and the high-vacuum mass spectrometer is a crucial part of an inductively coupled plasma-mass spectrometer. It influences the efficiency of the mass transfer into the mass spectrometer, and it also contributes to the formation of interfering ions and to mass discrimination. This region was simulated using the Direct Simulation Monte Carlo method with respect to the formation of shock waves, mass transport and mass discrimination. The modeling results for shock waves and mass transport are in overall agreement with the literature. Insights into the effects and geometrical features causing mass discrimination could be gained. The overall observed collision-based mass discrimination is lower than expected from measurements on real instruments, supporting the previously published assumption that inter-particle collisions play a minor role in this context. A full representation of the study, for two selected geometries, is given in the form of a movie as supplementary data.
Evaluating the radiation detection of the RbGd2Br7:Ce scintillator by Monte Carlo methods
NASA Astrophysics Data System (ADS)
Liaparinos, Panagiotis; Kandarakis, Ioannis; Cavouras, Dionisis; Delis, Harry; Panayiotakis, George
2006-12-01
The purpose of this study was to investigate the radiation detection efficiency of the recently introduced RbGd2Br7:Ce (RGB) scintillator material by a custom-developed Monte Carlo simulation code. Considering its fast principal decay constant (45 ns) and its high light yield (56 000 photons/MeV), RbGd2Br7:Ce appears to be a quite promising scintillator for applications in nuclear medical imaging systems. In this work, gamma-ray interactions within the scintillator mass were studied. In addition, the effect of K-characteristic fluorescence radiation emission, re-absorption or escape, as well as the effect of scattering events on the spatial distribution of absorbed energy, was examined. Various scintillator crystal thicknesses (5-25 mm), used in positron emission imaging, were considered to be irradiated by 511 keV photons. Similar simulations were performed on the well-known Lu2SiO5:Ce (LSO) scintillator for comparison purposes. Simulation results allowed the determination of the quantum detection efficiency as well as the fraction of the energy absorbed due to the K-characteristic radiation. Results were obtained as a function of scintillator crystal thickness. The Lu2SiO5:Ce scintillator material was shown to exhibit better radiation absorption properties than RbGd2Br7:Ce. However, RGB was shown to be less affected by the production of K-characteristic radiation. Taking into account its very short decay time and its high light yield, this material could be considered for use in positron emission tomography (PET) detectors.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
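The weighted Beer-Lambert scaling step described above can be sketched as follows; the layer absorption coefficients and time fractions used here are hypothetical placeholders, not values from the study.

```python
# Sketch of a weighted Beer-Lambert rescaling of a zero-absorption,
# time-resolved reflectance curve (all numerical values hypothetical).
import math

C_MM_PER_NS = 299.792458 / 1.4  # speed of light in tissue, refractive index 1.4

def scale_reflectance(r0, times_ns, mu_a, fractions):
    """Apply exp(-sum_i mu_a_i * f_i * v * t) to each time bin, where f_i is
    the fraction of time the average photon path spends in layer i."""
    mu_eff = sum(m * f for m, f in zip(mu_a, fractions))  # weighted mu_a
    return [r * math.exp(-mu_eff * C_MM_PER_NS * t)
            for r, t in zip(r0, times_ns)]

if __name__ == "__main__":
    times = [0.1 * i for i in range(1, 6)]  # ns
    r0 = [1.0 / t ** 1.5 for t in times]    # toy zero-absorption curve
    # Two layers: mu_a in 1/mm, and the path-time fractions in each layer.
    scaled = scale_reflectance(r0, times, mu_a=[0.01, 0.002],
                               fractions=[0.7, 0.3])
    print(all(s < r for s, r in zip(scaled, r0)))  # absorption only attenuates
```

The appeal of the approach is visible here: one stored zero-absorption curve can be rescaled for any combination of layer absorption coefficients without storing per-photon path lengths.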
Chen, Jinsong
An integrated method was developed to map occurrence probabilities of different lithofacies and fluid properties. Seismic lithofacies were first defined and then used as the link for tying fluid properties to seismic data, based on the existence of seismic lithofacies and the distinction in fluid properties among those facies. The method is site-specific.
NASA Astrophysics Data System (ADS)
Grimbergen, T. W. M.; van Dijk, E.; de Vries, W.
1998-11-01
A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range.
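The weighting step, averaging mono-energetic correction factors over a measured air-kerma spectrum, can be sketched as follows. The energy bins, spectrum weights, and correction factors below are hypothetical, not NMi data.

```python
# Sketch of spectrum-weighted correction factors (hypothetical numbers).

def quality_correction(kerma_weights, k_mono):
    """Air-kerma-weighted mean of mono-energetic correction factors."""
    total = sum(kerma_weights)
    return sum(k * w for k, w in zip(k_mono, kerma_weights)) / total

if __name__ == "__main__":
    # Hypothetical 50-250 keV bins: measured air-kerma weights and
    # Monte Carlo correction factors per bin.
    spectrum = [0.10, 0.40, 0.30, 0.15, 0.05]
    k_mono = [1.002, 1.004, 1.007, 1.010, 1.013]
    k = quality_correction(spectrum, k_mono)
    print(1.002 < k < 1.013)  # the weighted mean lies between the extremes
```

Because the weighting uses whatever spectrum is measured, the same table of mono-energetic factors serves any x-ray quality, which is the flexibility the abstract emphasizes.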
Bouchard, Hugo; Seuntjens, Jan; Kawrakow, Iwan
2011-04-21
During experimental procedures, an adequate evaluation of all sources of uncertainty is necessary to obtain an overall uncertainty budget. In specific radiation dosimetry applications where a single detector is used, common methods to evaluate uncertainties caused by setup positioning errors are not applicable when the dose gradient is not known a priori. This study describes a method to compute these uncertainties using the Monte Carlo method. A mathematical formalism is developed to calculate unbiased estimates of the uncertainties. The method is implemented in egs_chamber, an EGSnrc-based code that allows for the efficient calculation of detector doses and dose ratios. The correct implementation of the method into the egs_chamber code is validated with an extensive series of tests. The accuracy of the developed mathematical formalism is verified by comparing egs_chamber simulation results to the theoretical expectation in an ideal situation where the uncertainty can be computed analytically. Three examples of uncertainties are considered for realistic models of an Exradin A12 ionization chamber and a PTW 60012 diode, and results are computed for parameters representing nearly realistic positioning error probability distributions. Results of practical examples show that uncertainties caused by positioning errors can be significant during IMRT reference dosimetry as well as small field output factor measurements. The method described in this paper is of interest in the study of single-detector response uncertainties during nonstandard beam measurements, both in the scope of daily routine as well as when developing new dosimetry protocols. It is pointed out that such uncertainties should be considered in new protocols devoted to single-detector measurements in regions with unpredictable dose gradients. The method is available within the egs_chamber code in the latest official release of the EGSnrc system. PMID:21454927
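A naive forward-propagation version of the idea, sampling setup errors and recording the spread of the detector reading, can be sketched as follows. The paper's formalism instead builds unbiased estimators into the egs_chamber transport itself; the dose profile and error distribution here are hypothetical.

```python
# Naive Monte Carlo propagation of a positioning uncertainty through a
# hypothetical 1-D dose profile.
import math
import random

def dose_profile(x_mm):
    """Toy 1-D dose profile with a steep penumbra around x = 5 mm."""
    return 1.0 / (1.0 + math.exp((x_mm - 5.0) / 0.5))

def positioning_uncertainty(x_nominal, sigma_mm, n, rng=random):
    """Standard deviation of the reading under Gaussian setup errors."""
    doses = [dose_profile(x_nominal + rng.gauss(0.0, sigma_mm))
             for _ in range(n)]
    mean = sum(doses) / n
    return math.sqrt(sum((d - mean) ** 2 for d in doses) / (n - 1))

if __name__ == "__main__":
    random.seed(7)
    flat = positioning_uncertainty(0.0, 1.0, 20000)  # flat region
    edge = positioning_uncertainty(5.0, 1.0, 20000)  # high-gradient region
    print(edge > 10 * flat)  # the uncertainty is driven by the local gradient
```

The contrast between the flat and penumbra regions illustrates the paper's point that these uncertainties cannot be evaluated without knowledge of the local dose gradient.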
NASA Astrophysics Data System (ADS)
Nikolopoulos, Dimitrios; Kandarakis, Ioannis; Tsantilas, Xenophon; Valais, Ioannis; Cavouras, Dionisios; Louizi, Anna
2006-12-01
The radiation detection efficiency of four scintillators employed, or designed to be employed, in positron emission tomography (PET) was evaluated as a function of crystal thickness by applying Monte Carlo methods. The scintillators studied were Lu2SiO5 (LSO), LuAlO3 (LuAP), Gd2SiO5 (GSO) and YAlO3 (YAP). Crystal thicknesses ranged from 0 to 50 mm. The study was performed with a previously developed photon-transport Monte Carlo code. All photon track and energy histories were recorded, and the energy transferred or absorbed in the scintillator medium was calculated together with the energy redistributed and retransported as secondary characteristic fluorescence radiation. Various parameters were calculated, e.g. the fraction of the incident photon energy absorbed, transmitted or redistributed as fluorescence radiation, the scatter-to-primary ratio, and the photon and energy distribution within each scintillator block. Most significantly, the fraction of the incident photon energy absorbed was found to increase with increasing crystal thickness, tending to form a plateau above 30 mm thickness. For the LSO, LuAP, GSO and YAP scintillators, respectively, this fraction had the value of 44.8, 36.9 and 45.7% at 10 mm thickness and 96.4, 93.2 and 96.9% at 50 mm thickness. Within the plateau area approximately (57-59)%, (59-63)%, (52-63)% and (58-61)% of this fraction was due to scattered and reabsorbed radiation for the LSO, GSO, YAP and LuAP scintillators, respectively. In all cases, a negligible fraction (<0.1%) of the absorbed energy was found to escape the crystal as fluorescence radiation.
NASA Astrophysics Data System (ADS)
Brochart, David; Andréassian, Vazken
2015-04-01
Precipitation is known to exhibit high spatial variability. For this reason, raingage measurements, which only provide local information about rainfall, may not be appropriate to estimate areal rainfall. On the other hand, catchments have the ability to aggregate rainfall over their area and route it to a unique point - the outlet - where it can be easily measured. A catchment can thus be viewed as a large raingage, with the difference that what is measured at the outlet is a complex transformation of the rainfall. In this communication, we propose to use a model of this transformation (a so-called rainfall-runoff model) and to infer rainfall from an observed streamflow using a Monte Carlo method. We apply the method to 202 catchments in France and compare the inferred rainfall with the areal raingage-based rainfall measurements. We show that the inferred rainfall accuracy directly depends on the accuracy of the rainfall-runoff model. Potential applications of this method include rainfall estimation in poorly gaged areas, correction of uncertain rainfall estimates (e.g. satellite-based rainfall estimates), as well as historical reconstruction of rainfall based on streamflow measurements.
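The inversion idea described above can be illustrated with a toy sketch (not the authors' method or code): a hypothetical one-parameter linear-reservoir rainfall-runoff model is inverted by Monte Carlo rejection sampling, keeping only candidate rainfall series whose simulated streamflow matches the observations. All numbers (drainage coefficient `k`, rainfall range, tolerance) are illustrative assumptions.

```python
import random

def runoff(rain, k=0.5):
    """Toy linear-reservoir rainfall-runoff model: storage S fills with rain
    and drains a fraction k per time step; the drained volume is streamflow."""
    s, flow = 0.0, []
    for r in rain:
        s += r
        q = k * s
        s -= q
        flow.append(q)
    return flow

def infer_rainfall(observed_flow, n_samples=20000, tol=0.5, seed=0):
    """Monte Carlo inversion: sample candidate rainfall series, keep those
    whose simulated streamflow matches the observations, average survivors."""
    rng = random.Random(seed)
    t = len(observed_flow)
    kept = []
    for _ in range(n_samples):
        candidate = [rng.uniform(0.0, 10.0) for _ in range(t)]
        sim = runoff(candidate)
        if max(abs(a - b) for a, b in zip(sim, observed_flow)) < tol:
            kept.append(candidate)
    if not kept:
        return None
    # posterior-mean rainfall at each time step
    return [sum(c[i] for c in kept) / len(kept) for i in range(t)]

true_rain = [4.0, 0.0, 6.0]
observed = runoff(true_rain)
estimate = infer_rainfall(observed)
```

With a perfect model the estimate recovers the true rainfall up to the acceptance tolerance; with a biased model the inferred rainfall inherits that bias, which is the dependence the abstract points out.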
Bergstrom, Paul M. (Livermore, CA); Daly, Thomas P. (Livermore, CA); Moses, Edward I. (Livermore, CA); Patterson, Jr., Ralph W. (Livermore, CA); Schach von Wittenau, Alexis E. (Livermore, CA); Garrett, Dewey N. (Livermore, CA); House, Ronald K. (Tracy, CA); Hartmann-Siantar, Christine L. (Livermore, CA); Cox, Lawrence J. (Los Alamos, NM); Fujino, Donald H. (San Leandro, CA)
2000-01-01
A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
Baptista, A M; Martel, P J; Soares, C M
1999-01-01
A new method is presented for simulating the simultaneous binding equilibrium of electrons and protons on protein molecules, which makes it possible to study the full equilibrium thermodynamics of redox and protonation processes, including electron-proton coupling. The simulations using this method reflect directly the pH and electrostatic potential of the environment, thus providing a much closer and realistic connection with experimental parameters than do usual methods. By ignoring the full binding equilibrium, calculations usually overlook the twofold effect that binding fluctuations have on the behavior of redox proteins: first, they affect the energy of the system by creating partially occupied sites; second, they affect its entropy by introducing an additional empty/occupied site disorder (here named occupational entropy). The proposed method is applied to cytochrome c3 of Desulfovibrio vulgaris Hildenborough to study its redox properties and electron-proton coupling (redox-Bohr effect), using a continuum electrostatic method based on the linear Poisson-Boltzmann equation. Unlike previous studies using other methods, the full reduction order of the four hemes at physiological pH is successfully predicted. The sites more strongly involved in the redox-Bohr effect are identified by analysis of their titration curves/surfaces and the shifts of their midpoint redox potentials and pKa values. Site-site couplings are analyzed using statistical correlations, a method much more realistic than the usual analysis based on direct interactions. The site found to be more strongly involved in the redox-Bohr effect is propionate D of heme I, in agreement with previous studies; other likely candidates are His67, the N-terminus, and propionate D of heme IV. Even though the present study is limited to equilibrium conditions, the possible role of binding fluctuations in the concerted transfer of protons and electrons under nonequilibrium conditions is also discussed. 
The occupational entropy contributions to midpoint redox potentials and pKa values are computed and shown to be significant. PMID:10354425
Yukito Iba
2001-01-01
"Extended Ensemble Monte Carlo" is a generic term for a set of algorithms that are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo) and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here, we give a cross-disciplinary
Discrete Diffusion Monte Carlo for grey Implicit Monte Carlo simulations.
Densmore, J. D. (Jeffery D.); Urbatsch, T. J. (Todd J.); Evans, T. M. (Thomas M.); Buksas, M. W. (Michael W.)
2005-01-01
Discrete Diffusion Monte Carlo (DDMC) is a hybrid transport-diffusion method for Monte Carlo simulations in diffusive media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Thus, DDMC produces accurate solutions while increasing the efficiency of the Monte Carlo calculation. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for grey Implicit Monte Carlo calculations. First, we employ a diffusion equation that is discretized in space but continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. In addition, we treat particles incident on an optically thick region using the asymptotic diffusion-limit boundary condition. This interface technique can produce accurate solutions even if the incident particles are distributed anisotropically in angle. Finally, we develop a method for estimating radiation momentum deposition during the DDMC simulation. With a set of numerical examples, we demonstrate the accuracy and efficiency of our improved DDMC method.
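The "discrete in space, continuous in time" idea can be sketched with a minimal example (not the authors' DDMC algorithm): a particle hops between 1-D grid cells with rate 2D/Δx², drawing exponential waiting times, so its clock time is known exactly at every event, as the abstract emphasizes. Parameters are illustrative.

```python
import random

def walk(D, dx, t_end, rng):
    """Continuous-time random walk on a 1-D grid: hops occur at total rate
    2*D/dx**2 (rate D/dx**2 to each neighbour) with exponential waiting
    times, so the particle's time is known exactly when it stops."""
    x, t = 0, 0.0
    rate = 2.0 * D / dx ** 2
    while True:
        t += rng.expovariate(rate)
        if t > t_end:
            return x * dx  # position when the clock runs out
        x += 1 if rng.random() < 0.5 else -1

rng = random.Random(1)
samples = [walk(D=1.0, dx=0.1, t_end=2.0, rng=rng) for _ in range(5000)]
var = sum(s * s for s in samples) / len(samples)
# in the fine-grid limit the walk reproduces diffusion: Var[x] ≈ 2*D*t
```

For D = 1 and t = 2 the sample variance should be close to 2Dt = 4, confirming that the space-discretized, time-continuous walk reproduces the diffusion equation.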
Khosravi, H.; Hashemi, B.; Mahdavi, S. R.; Hejazi, P.
2015-01-01
Background: Gel polymers are considered as new dosimeters for determining radiotherapy dose distributions in three dimensions. Objective: The ability of a new formulation of MAGIC-f polymer gel was assessed by experimental measurement and the Monte Carlo (MC) method for studying the effect of gold nanoparticles (GNPs) on prostate dose distributions under internal Ir-192 and external 18 MV radiotherapy practices. Method: A Plexiglas phantom was made representing the human pelvis. The GNPs, having 15 nm diameter and 0.1 mM concentration, were synthesized using the chemical reduction method. Then, a new formulation of MAGIC-f gel was synthesized. The fabricated gel was poured into the tubes located at the prostate (with and without the GNPs) and bladder locations of the phantom. The phantom was irradiated with an Ir-192 source and an 18 MV beam of a Varian linac separately, based on common radiotherapy procedures used for prostate cancer. After 24 hours, the irradiated gels were read using a Siemens 1.5 Tesla MRI scanner. The absolute doses at the reference points and the isodose curves resulting from the experimental gel measurements and from MC simulations of the internal and external radiotherapy practices were compared. Results: The mean absorbed doses measured with the gel in the presence of the GNPs in the prostate were 15% and 8% higher than the corresponding values without the GNPs under the internal and external radiation therapies, respectively. MC simulations also indicated a dose increase of 14% and 7% due to the presence of the GNPs for the same internal and external radiotherapy practices, respectively. Conclusion: There was good agreement between the dose enhancement factors (DEFs) estimated with MC simulations and experimental gel measurements due to the GNPs. 
The results indicated that the polymer gel dosimetry method, as developed and used in this study, can be recommended as a reliable method for investigating the DEF of GNPs in internal and external radiotherapy practices. PMID:25973406
Arie J. Noordwijk; Jacobus Noordwijk
1988-01-01
A reliable but not necessarily precise indication of the toxicity of a chemical product is frequently needed for the determination of its class of toxicity. Estimations of the LD50 carried out for this purpose often have a precision which is higher than necessary and so is the number of laboratory animals used. Alternative methods estimating an approximate lethal dose (ALD)
Baccouche, S; Al-Azmi, D; Karunakara, N; Trabelsi, A
2012-01-01
Gamma-ray measurements in terrestrial/environmental samples require the use of highly efficient detectors because of the low radionuclide activity concentrations in the samples; thus scintillators are suitable for this purpose. Two scintillation detectors of identical size, CsI(Tl) and NaI(Tl), were studied in this work for the measurement of terrestrial samples. This work describes a Monte Carlo method for producing the full-energy efficiency calibration curves for both detectors using gamma-ray energies associated with the decay of naturally occurring radionuclides: 137Cs (661 keV), 40K (1460 keV), 238U (214Bi, 1764 keV) and 232Th (208Tl, 2614 keV), which are found in terrestrial samples. The magnitude of the coincidence summing effect occurring for the 2614 keV emission of 208Tl is assessed by simulation. The method provides an efficient tool for producing the full-energy efficiency calibration curve of scintillation detectors for any sample geometry and volume, in order to determine accurate activity concentrations in terrestrial samples. PMID:21852143
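The core of such a Monte Carlo efficiency calculation can be reduced to a toy sketch (far simpler than the authors' simulation, which handles full geometry, scattering and summing): photons normally incident on a slab detector interact with exponentially distributed free paths, and the interaction probability is estimated by sampling and checked against the analytic value 1 − exp(−μt). The attenuation coefficient and thickness below are hypothetical.

```python
import math
import random

def efficiency_mc(mu, thickness, n=200_000, seed=0):
    """Monte Carlo estimate of the interaction probability for photons
    normally incident on a slab: sample exponential free paths (rate mu)
    and count photons interacting before leaving the crystal."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.expovariate(mu) < thickness)
    return hits / n

mu = 0.34  # hypothetical attenuation coefficient, 1/cm
t = 3.0    # hypothetical crystal thickness, cm
est = efficiency_mc(mu, t)
exact = 1.0 - math.exp(-mu * t)  # analytic value for this toy geometry
```

In a realistic calibration, the analytic shortcut is unavailable (arbitrary sample geometry, scattered photons, summing), which is exactly why the full simulation described in the abstract is needed.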
To, Gary; Mahfouz, Mohamed R
2013-11-01
In recent years, wireless positioning and tracking devices based on semiconductor microelectromechanical system (MEMS) sensors have successfully integrated into the consumer electronics market. Information from the sensors is processed by an attitude estimation program. Many of these algorithms were developed primarily for aeronautical applications, and the parameters affecting the accuracy and stability of the system vary with the intended application. The performance of these algorithms occasionally destabilizes during human motion tracking activities, which does not satisfy the reliability and high accuracy demanded in biomedical applications. A previous study assessed the feasibility of using semiconductor-based inertial measurement units (IMUs) for human motion tracking. The IMU hardware has been redesigned, and an attitude estimation algorithm for quaternions using sequential Monte Carlo (SMC) methods, or particle filtering, was developed. The method presented in this paper uses the von Mises-Fisher distribution and a nonuniform simulation to provide density estimation on the rotation group SO(3). Synthetic signal simulations, robotics applications, and human applications have been investigated. PMID:23674420
NASA Astrophysics Data System (ADS)
Islamuddin Shah, Syed; Nandipati, Giridhar; Kara, Abdelkader; Rahman, Talat S.
2012-02-01
We have applied a modified Self-Learning Kinetic Monte Carlo (SLKMC) method [1] to examine the self-diffusion of small Ag and Ni islands, containing up to 10 atoms, on the (111) surface of the respective metal. The pattern-recognition scheme in this new SLKMC method allows occupancy of the fcc, hcp and top sites on the fcc(111) surface and employs them to identify the local neighborhood around a central atom. Molecular statics calculations with semi-empirical interatomic potentials and reliable techniques for saddle-point searches revealed several new diffusion mechanisms that contribute to the diffusion of small islands. For comparison we have also evaluated the diffusion characteristics of Cu clusters on Cu(111) and compared the results with previous findings [2]. Our results show a linear increase in effective energy barriers, scaling as roughly 0.043, 0.051 and 0.064 eV/atom for the Cu/Cu(111), Ag/Ag(111), and Ni/Ni(111) systems, respectively. For all three systems, diffusion of small islands proceeds mainly through concerted motion, although several multiple- and single-atom processes also contribute. [1] Oleg Trushin et al., Phys. Rev. B 72, 115401 (2005) [2] Altaf Karim et al., Phys. Rev. B 73, 165411 (2006)
Monte Carlo technique in modeling ground motion coherence in sedimentary filled valleys
Cerveny, Vlastislav
Keywords: propagation; Monte Carlo numerical simulations; site effects. Using a Monte Carlo method based
NASA Astrophysics Data System (ADS)
Bernardin, Frederick; Rutledge, Gregory
2006-03-01
The use of the semi-grand canonical Monte Carlo (SGMC) method as a generalized descriptive tool for interpreting experimental data obtained from non-equilibrium systems will be summarized. The usefulness of the method will be demonstrated specifically by interpreting the orientation distribution functions (odf's) of polymer melts which have been uniaxially oriented. Using SGMC, we identify the thermodynamic variables that serve as chemical potentials in a polydisperse system of orientations, and then generate the ensemble of configurations that minimizes the free energy subject to the constraints set by the odf. In this demonstration, the axial symmetry leads to the use of Legendre polynomials as the basis set for the odf. We apply our approach to obtain molecular ensembles corresponding to different values of P2 (the first non-zero Legendre term), which are obtainable through measurements by light scattering or birefringence. Comparisons will be made to a related method by Mavrantzas and Theodorou (Macromolecules 31, 6310, 1998).
Tynes, H H; Kattawar, G W; Zege, E P; Katsev, I L; Prikhach, A S; Chaikovskaya, L I
2001-01-20
For single scattering in a turbid medium, the Mueller matrix is the 4 × 4 matrix that multiplies the incident Stokes vector to yield the scattered Stokes vector. This matrix contains all the information that can be obtained from an elastic-scattering system. We have extended this concept to the multiple-scattering domain, where we can define an effective Mueller matrix that, when operating on any incident state of light, will yield the output state. We have calculated this matrix using two completely different computational methods and compared the results for several simple two-layer turbid systems separated by a dielectric interface. We have shown that both methods give reliable results and therefore can be used to accurately predict the scattering properties of turbid media. PMID:18357013
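The Mueller-matrix relation S' = M S mentioned above is a plain 4 × 4 matrix-vector product. As a self-contained illustration (textbook example, not the authors' code), here is the product applied with the standard Mueller matrix of an ideal horizontal linear polarizer acting on unpolarized light:

```python
def mueller_apply(M, S):
    """Apply a 4x4 Mueller matrix M to a Stokes vector S = [I, Q, U, V]."""
    return [sum(M[i][j] * S[j] for j in range(4)) for i in range(4)]

# Standard Mueller matrix of an ideal horizontal linear polarizer
polarizer = [
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
unpolarized = [1.0, 0.0, 0.0, 0.0]  # unit-intensity unpolarized light
out = mueller_apply(polarizer, unpolarized)
# → [0.5, 0.5, 0.0, 0.0]: half the intensity, fully horizontally polarized
```

The effective multiple-scattering Mueller matrix discussed in the abstract plays the same algebraic role, but its sixteen elements must be computed numerically rather than written down in closed form.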
M. Hikita; K. Yamada; A. Nakamura; T. Mizutani; A. Oohasi; M. Ieda
1990-01-01
The mechanism of partial discharge (PD) occurring in the CIGRE Method II (CM-II) electrode system, a representative closed-void model system, is discussed in the context of a computer-aided PD measuring system. Measurements of PD are made for the CM-II electrode system. The effects of the pressure and of the gas inside the void on the PD are examined. Taking into account
John von Neumann Institute for Computing Monte Carlo Protein Folding
Hsu, Hsiao-Ping
Monte Carlo Protein Folding: Simulations of Met-Enkephalin with Solvent-Accessible Area (http://www.fz-juelich.de/nic-series/volume20). difficulties in applying Monte Carlo methods to protein folding. The solvent-accessible area method, a popular
Fang Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 (United States); Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L3G1 (Canada)]
2012-01-15
Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to the external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution, and the other using the photopeak; the latter ignores contributions from Compton scattering and K-fluorescence. Experimental measurements and simulations differ by approximately 2%. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
Sequential Monte Carlo for Bayesian Computation
Pierre Del Moral; Arnaud Doucet; Ajay Jasra
Summary Sequential Monte Carlo (SMC) methods are a class of importance sampling and resampling techniques designed to simulate from a sequence of probability distributions. These approaches have become very popular over the last few years to solve sequential Bayesian inference problems (e.g. Doucet et al. 2001). However, in comparison to Markov chain Monte Carlo (MCMC), the application of SMC
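The importance sampling and resampling loop at the heart of SMC can be sketched minimally (a generic bootstrap particle filter for a toy 1-D Gaussian random-walk state-space model, not the authors' algorithm; all noise scales are illustrative):

```python
import math
import random

def particle_filter(obs, n=2000, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Minimal sequential Monte Carlo (bootstrap particle filter):
    propagate particles through a Gaussian random-walk state model,
    weight by the observation likelihood, resample at every step."""
    rng = random.Random(seed)
    parts = [0.0] * n
    means = []
    for y in obs:
        # 1. propagate each particle through the state transition
        parts = [x + rng.gauss(0.0, sigma_x) for x in parts]
        # 2. importance-weight by the likelihood N(y | x, sigma_y)
        w = [math.exp(-0.5 * ((y - x) / sigma_y) ** 2) for x in parts]
        total = sum(w)
        means.append(sum(wi * xi for wi, xi in zip(w, parts)) / total)
        # 3. multinomial resampling to combat weight degeneracy
        parts = rng.choices(parts, weights=w, k=n)
    return means

obs = [0.0, 0.0, 5.0, 5.0]
est = particle_filter(obs)  # filtered means track the observations with a lag
```

The resampling step is what distinguishes SMC from plain sequential importance sampling: without it, after a few steps nearly all weight concentrates on a handful of particles.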
Jiang, Yongshuai; Zhang, Lanying; Kong, Fanwu; Zhang, Mingming; Lv, Hongchao; Liu, Guiyou; Liao, Mingzhi; Feng, Rennan; Li, Jin; Zhang, Ruijie
2014-01-01
Traditional permutation (TradPerm) tests are usually considered the gold standard for multiple testing corrections. However, they can be difficult to complete for the meta-analyses of genetic association studies based on multiple single nucleotide polymorphism loci as they depend on individual-level genotype and phenotype data to perform random shuffles, which are not easy to obtain. Most meta-analyses have therefore been performed using summary statistics from previously published studies. To carry out a permutation using only genotype counts without changing the size of the TradPerm P-value, we developed a Monte Carlo permutation (MCPerm) method. First, for each study included in the meta-analysis, we used a two-step hypergeometric distribution to generate a random number of genotypes in cases and controls. We then carried out a meta-analysis using these random genotype data. Finally, we obtained the corrected permutation P-value of the meta-analysis by repeating the entire process N times. We used five real datasets and five simulation datasets to evaluate the MCPerm method and our results showed the following: (1) MCPerm requires only the summary statistics of the genotype, without the need for individual-level data; (2) Genotype counts generated by our two-step hypergeometric distributions had the same distributions as genotype counts generated by shuffling; (3) MCPerm had almost exactly the same permutation P-values as TradPerm (r = 0.999; P < 2.2e-16); (4) The calculation speed of MCPerm is much faster than that of TradPerm. In summary, MCPerm appears to be a viable alternative to TradPerm, and we have developed it as a freely available R package at CRAN: http://cran.r-project.org/web/packages/MCPerm/index.html. PMID:24586601
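The key trick described above - permuting with genotype counts alone - amounts to drawing case genotype counts from a (multivariate) hypergeometric distribution with the pooled counts fixed. A minimal sketch of one such draw (toy counts, not the MCPerm package's implementation, which uses a two-step construction):

```python
import random

def perm_case_counts(genotype_totals, n_cases, rng):
    """One Monte Carlo permutation draw: distribute the fixed pooled
    genotype totals (e.g. AA/Aa/aa counts over cases + controls) at
    random among n_cases cases, i.e. sample from a multivariate
    hypergeometric distribution by drawing without replacement."""
    urn = []
    for geno, count in enumerate(genotype_totals):
        urn.extend([geno] * count)
    picked = rng.sample(urn, n_cases)  # assign n_cases subjects to 'case'
    return [picked.count(g) for g in range(len(genotype_totals))]

rng = random.Random(42)
# hypothetical pooled AA/Aa/aa counts over 50 cases + 50 controls
counts = perm_case_counts([40, 45, 15], n_cases=50, rng=rng)
```

Because only the marginal counts enter, this reproduces the null distribution of a shuffle-based permutation without ever touching individual-level genotype data, which is the point the abstract makes.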
Reboredo, F A; Hood, R Q; Kent, P C
2009-01-06
We develop a formalism and present an algorithm for optimization of the trial wave-function used in fixed-node diffusion quantum Monte Carlo (DMC) methods. The formalism is based on the DMC mixed estimator of the ground state probability density. We take advantage of a basic property of the walker configuration distribution generated in a DMC calculation, to (i) project-out a multi-determinant expansion of the fixed node ground state wave function and (ii) to define a cost function that relates the interacting-ground-state-fixed-node and the non-interacting trial wave functions. We show that (a) locally smoothing out the kink of the fixed-node ground-state wave function at the node generates a new trial wave function with better nodal structure and (b) we argue that the noise in the fixed-node wave function resulting from finite sampling plays a beneficial role, allowing the nodes to adjust towards the ones of the exact many-body ground state in a simulated annealing-like process. Based on these principles, we propose a method to improve both single determinant and multi-determinant expansions of the trial wave function. The method can be generalized to other wave function forms such as pfaffians. We test the method in a model system where benchmark configuration interaction calculations can be performed and most components of the Hamiltonian are evaluated analytically. Comparing the DMC calculations with the exact solutions, we find that the trial wave function is systematically improved. The overlap of the optimized trial wave function and the exact ground state converges to 100% even starting from wave functions orthogonal to the exact ground state. Similarly, the DMC total energy and density converges to the exact solutions for the model. In the optimization process we find an optimal non-interacting nodal potential of density-functional-like form whose existence was predicted in a previous publication [Phys. Rev. B 77 245110 (2008)]. 
Tests of the method are extended to a model system with a conventional Coulomb interaction where we show we can obtain the exact Kohn-Sham effective potential from the DMC data.
Practical Markov Chain Monte Carlo
Charles J. Geyer
1992-01-01
Markov chain Monte Carlo using the Metropolis-Hastings algorithm is a general method for the simulation of stochastic processes having probability densities known up to a constant of proportionality. Despite recent advances in its theory, the practice has remained controversial. This article makes the case for basing all inference on one long run of the Markov chain and estimating the Monte
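The method referred to above - simulating from a density known only up to a constant of proportionality via one long chain - can be shown in a minimal random-walk Metropolis sketch (a generic textbook implementation, not Geyer's code; the target and step size are illustrative):

```python
import math
import random

def metropolis(logf, x0, n, step, seed=0):
    """Random-walk Metropolis: needs the target density only up to a
    normalizing constant, supplied through its log-density `logf`."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        prop = x + rng.uniform(-step, step)
        # accept with probability min(1, f(prop)/f(x))
        if rng.random() < math.exp(min(0.0, logf(prop) - logf(x))):
            x = prop
        chain.append(x)
    return chain

# unnormalized standard normal target: f(x) ∝ exp(-x²/2)
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n=50_000, step=2.5)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

One long run, as the abstract advocates, lets empirical averages such as `mean` and `var` converge to the target's moments (0 and 1 here) without any knowledge of the normalizing constant.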
Introduction to Monte Carlo algorithms
NASA Astrophysics Data System (ADS)
Krauth, Werner
These lectures, given in the summer of 1996 at the Beg-Rohu (France) and Budapest summer schools, discuss the fundamental principles of thermodynamic and dynamic Monte Carlo methods in a simple and lightweight fashion. The keywords are Markov chains, sampling, detailed balance, a priori probabilities, rejections, ergodicity, and "faster than the clock" algorithms.
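The sampling principle at the start of such introductory treatments is often illustrated with the classic direct-sampling "pebble game": throw points uniformly into the unit square and count the fraction falling inside the quarter circle to estimate π. A minimal version (a standard textbook example, not taken from the lectures themselves):

```python
import random

def estimate_pi(n, seed=0):
    """Direct-sampling Monte Carlo: throw n pebbles uniformly into the
    unit square; the fraction landing inside the quarter circle of
    radius 1 estimates π/4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * inside / n

pi_hat = estimate_pi(1_000_000)
```

Direct sampling needs no Markov chain at all; the Markov-chain machinery (detailed balance, rejections, ergodicity) enters precisely when the target distribution cannot be sampled directly.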
Monte Carlo radiative transfer in protoplanetary disks
C. Pinte; F. Ménard; G. Duchêne; P. Bastien
2006-01-01
Aims: We present a new continuum 3D radiative transfer code, MCFOST, based on a Monte Carlo method. MCFOST can be used to calculate (i) monochromatic images in scattered light and/or thermal emission; (ii) polarisation maps; (iii) interferometric visibilities; (iv) spectral energy distributions; and (v) dust temperature distributions of protoplanetary disks. Methods: Several improvements to the standard Monte Carlo method are implemented in